A composition based on recordings of the vibrations in the structure of bridges from around the world. A sonic sculpture creating a collective space for reflection as you pass through the bell tower into the Church of Our Lady.

Contemplating the bridge from the everyday to eternity.

Signal on the Silver Bridge

Thursday 29 April, Vår Frue kirke (Church of Our Lady), Trondheim

In the bell tower of Vår Frue kirke, a brand-new sound installation made from the sound of bridge vibrations receives its world premiere. The Australian sound artist Jodi Rose is known for her bridge project Singing Bridges, in which she creates works based on the vibrations of bridge cables around the world. The new work Signal on the Silver Bridge is based on recordings of the vibrations in the structure of three Norwegian footbridges. In the bell tower of Vår Frue kirke, Rose creates a distinctive sonic space that hints at bridges whose destinations we cannot know.

Only Connect Trondheim: Come What May

Only Connect Trondheim 2021

Credits

Commissioned by Bjørnar Habbestad for the Only Connect NyMusikk Festival, ‘Come What May’ 2021 edition in Trondheim. Jodi Rose designed a site-specific installation composed with her archive of global bridges for the Church of Our Lady bell tower.

The Global Bridge Symphony is supported by APRA AMCOS Art Music Fund.

Thanks to the Ny Musikk team, NTNU, KiT, Øyvind Brandtsegg, David Rych, Alex Murray-Leslie, Jacob Jessen, Mari Bastashevski, Jordan Sand and Øystein Fjeldbo.

Jodi Rose is an artist, composer and creative director of Singing Bridges, an urban sonic sculpture playing the cables of bridges as musical instruments on a global scale, connecting bridges around the world in a Global Bridge Symphony. Rose is studying Artistic Research (MFA) in Art & Technology at Trondheim Art Academy, Norway.

After two days in the studio I worked through so many of the conceptual questions that have been bugging me for months. And opened up a stack of new ones.

Basically, I managed to hack my way around the twotone file structure and get my bridge samples into their system, playing as instruments in the data sonification tool.

Trumpets now play the Rama VIII Bridge in Bangkok, and the glockenspiel plays the Golden Gate. Problem is, all of these bridge sounds are already so complex that once you start mapping them to different notes in response to shifts in the data, it’s pure sonic chaos! If I had a system that played a sample and shifted its pitch as the data changes, that would be way more seamless. I am enjoying the ad hoc nature of this process though, and the way it is forcing me to consider, at a much deeper level, the relationship between the data and the sounds.
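
To sketch what that ‘seamless’ version could look like (not how twotone actually works, just a note to self), here is a minimal Python example, assuming librosa and soundfile are installed and that the data values have already been normalised to 0–1: the sample is sliced into one segment per value, and each segment is pitch-shifted by an amount driven by the data. A real version would crossfade between slices to avoid clicks.

```python
# Sketch: one bridge sample, pitch bent by the data instead of retriggered per note.
# Assumes: librosa + soundfile installed, a mono "bridge_sample.wav",
# and data values already normalised to the range 0..1.

import numpy as np
import librosa
import soundfile as sf

def pitch_glide(sample_path, values, semitone_range=12, out_path="glide.wav"):
    y, sr = librosa.load(sample_path, sr=None, mono=True)
    # One slice per data value (keep values coarse: a handful per second of sample,
    # so each slice stays long enough to pitch-shift cleanly).
    slices = np.array_split(y, len(values))
    shifted = []
    for value, chunk in zip(values, slices):
        # Map 0..1 to -range/2..+range/2 semitones and shift that slice.
        steps = (value - 0.5) * semitone_range
        shifted.append(librosa.effects.pitch_shift(chunk, sr=sr, n_steps=steps))
    sf.write(out_path, np.concatenate(shifted), sr)

# e.g. pitch_glide("bridge_sample.wav", [0.1, 0.4, 0.9, 0.6])
```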

As imagined, the one-to-one parameter mapping of sound sample to dataset is not actually that interesting in terms of compositional complexity – it gets repetitive very quickly, and extremely dense sonically if I haven’t chosen the initial samples well.

I need something single-note and simple, without too much going on – no multiple beats or tones.

Eventually I will upload some of these composition samples, but for now am still navigating how much of this process to share and what to keep private for the eventual ‘outcome’. Although as we discussed in the Publishing as Practice workshop today, having ways to show your artistic process can be both liberating and engaging.

Liberating, because it frees you from the grip of perfectionism + as my dear friend Ernest always says: finished is better than perfect! Engaging because while it may pierce the bubble of mystery around your work, it can also make you more approachable. Since this is a project that relies heavily on collaboration, for me it makes sense to make the process as transparent as possible. This allows potential creative partners to dive into the various threads of creative process, and gives a quick overview for anyone interested in working together. It’s also a little alarming, as nothing is ‘finished’ and I don’t feel nearly ready to make it public. Yet here I am, writing for you – whoever you are, dear reader – to lay my artistic soul bare.

There was something else. Ah yes, the constraints of the twotone platform mean that I have to take a very ‘zen’ approach to the work. Like the Tibetan monks I saw in New York City back in 1989, drawing sand mandalas. So intricate and beautiful, painstaking work that they released into the river once it was finished. You can’t stay attached to the outcome if you keep working through the process, over and over again.

Also that there is no ONE definitive work that will come from this. Many variations will emerge. And I am starting to make peace with that as part of the creative process.

I think perhaps I had envisaged – or ensounded? – a massive, global, all-the-bridges-playing-together event. But honestly, that is only possible as a conceptual frame. If you take even the 29 sensors on the ONE bridge and try to make a piece out of them, the resulting sonic chaos is going to be almost unbearable to listen to. So I need to find ways to pin it back into a context or reason for listening, and connecting. That is, the bridges have to relate to each other in some way, and to my own practice and experience. Otherwise it becomes totally random. I am starting to find more interesting questions through this process. And dealing with technical issues that I hadn’t even considered – like the sheer volume of data generated by a bridge sensor. And the compatibility, or otherwise, of the various types of data with each other and with the systems I need to use for creating sound compositions.

As an example, I have figured out that the selected storm data from the Hardanger Bridge structural monitoring sensors is only available in .mat format, while the csv files I need are massive and broken down by hour throughout the day. So I needed to find out exactly when this storm hit. Storm Nina seems like a good place to start: around 2–8pm on Saturday 10th January 2015. I have attempted to open those csv files, but their compression is not playing nice with my computer. It takes another level of engagement now to connect with the engineers and find out if they are interested in the sonification process, and how possible it is to switch formats.
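
A rough sketch of how I might carve that storm window out of the hourly raw csv files with pandas – the folder name and the ‘timestamp’ column are assumptions on my part; the real layout is described in the dataset’s readme:

```python
# Sketch: pull the Storm Nina window (Saturday 10 January 2015, ~14:00-20:00)
# out of the hourly raw csv files. Folder and column names are guesses.

from pathlib import Path
import pandas as pd

def storm_window(folder, start="2015-01-10 14:00", end="2015-01-10 20:00"):
    frames = []
    for csv_file in sorted(Path(folder).glob("*.csv")):
        df = pd.read_csv(csv_file)
        # Assumed: a 'timestamp' column that pandas can parse.
        df["timestamp"] = pd.to_datetime(df["timestamp"])
        mask = (df["timestamp"] >= start) & (df["timestamp"] <= end)
        if mask.any():
            frames.append(df[mask])
    return pd.concat(frames).sort_values("timestamp")

# nina = storm_window("Raw_2015_1/")
# nina.to_csv("storm_nina_2015-01-10.csv", index=False)
```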

I am charmed to discover that the accelerometers used are made by Canterbury Seismic Instruments, in Christchurch, New Zealand, where my grandmother was born. Which makes complete sense, given the magnitude and frequency of earthquakes NZ needs to monitor. Cusp-3 Series Strong Motion Accelerographs.

Technical Specifications PDF – curious: is it possible to convert its output to an audio signal?

I have done this with the B&K accelerometers on the Green Bridge permanent installation in Brisbane, and it only took a simple adapter…
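
For data that is already sitting in a csv rather than coming off a live sensor, a software stand-in for that adapter is simple audification: normalise the acceleration signal and play it back fast enough that the bridge’s low-frequency vibrations land in the audible range. A sketch, with the column name and sensor sample rate as placeholder assumptions:

```python
# Sketch: "audify" accelerometer data -- remove the DC offset, normalise,
# and write it out sped up so the structural vibrations become audible.

import numpy as np
import pandas as pd
import soundfile as sf

def audify(csv_path, column="acc_z", sensor_rate=200, speedup=100,
           out_path="bridge_audified.wav"):
    data = pd.read_csv(csv_path)[column].to_numpy(dtype=float)
    data = data - data.mean()                   # remove DC offset
    data = data / (np.abs(data).max() or 1.0)   # normalise to -1..1
    # Writing at sensor_rate * speedup turns e.g. a 200 Hz recording into
    # 20 kHz playback: an hour of bridge becomes roughly 36 seconds of audio.
    sf.write(out_path, data, int(sensor_rate * speedup))

# audify("storm_nina_2015-01-10.csv", column="acc_z")
```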

UNDER CONSTRUCTION by Jodi Rose

That brings us up to date, and my decision now to try selecting more subtle bridge samples as a starting point, and find out how they sound using the two datasets I am already working with. Then I need to get my head around the generative composition tools and work on mapping out the structure of the piece for the Church of Our Lady.

Thanks to the generous structural monitoring engineers at NTNU, I have access to an incredible range of accelerometer data from the Hardanger Bridge. It only took one more specific search term, and is published under a creative commons (cc-by) license.

Now the fun really starts – downloading the csv files: LowWind, HighFreq; MaxLift, LowFreq; MaxPitch, HighFreq (which I misread as MaxPatch and thought OMG, they have sonified it already! Although perhaps they have – I still need to write and make contact); MaxDrag, LowFreq… The monitoring sensors have been in place since 2013, so there are seven years of data. And the storms – Storm Nina, Storm Ole, Storm Roar, Storm Thor!

Image Credit: NTNU Department of Structural Engineering, Trondheim

Wind and Acceleration Data from the Hardanger Bridge

By Aksel Fenerci, Knut Andreas Kvåle, Øyvind Wiig Petersen, Anders Rønnquist, Ole Øiseth. https://doi.org/10.21400/5ng8980s – published 18-08-2020 at Norges teknisk-naturvitenskapelige universitet.

The dataset consists of long-term wind and acceleration data collected from the Hardanger Bridge monitoring system. The data are collected through wind sensors (anemometers) and accelerometers that are installed on the bridge. The dataset includes both the raw data (in “.csv” format) and the organized data (with “.mat” extension based on hdf5 format). Downloadable zipped folders contain monthly data with different frequency resolutions, special events (storms, etc.) and the raw data. Details on the organization of the data can be found in the readme file and the data paper, both of which can be found in the dataset.

Resource type: Dataset

Category: Technology, Building and construction, Structural engineering

Process or method: GPS, Wi-Fi, accelerometers, anemometry, signal processing

Geographical coverage: Hardanger, Norway

Fenerci, A., Kvåle, K. A., Petersen, Ø. W., Rönnquist, A., & Øiseth, O. A. (2020). Wind and Acceleration Data from the Hardanger Bridge. https://doi.org/10.21400/5NG8980S

Datasets and Weather

Ok, I’m breaking this down now – the csv files are by year and month, e.g. Raw 2015 1.

Storms happen in January, Storm Nina: 10th Jan 2015, Storm Thor: 29th Jan 2016.

So to focus on the storms, go for the first month. I can’t use their smaller, already selected and edited .mat files in the data sonification tool. Maybe it’s possible to convert mat to csv? (Oh, that question that opens up a whole new can of worms!)
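
Since the dataset page says the .mat files are hdf5-based, they should open with h5py. A sketch of the conversion – the variable names inside the files are unknown to me, so the first helper just lists what is actually in there:

```python
# Sketch: peek inside one of the hdf5-based .mat files and dump a numeric
# array to csv. File and dataset names below are hypothetical.

import h5py
import numpy as np

def list_datasets(mat_path):
    # Print every dataset name and shape in the file.
    with h5py.File(mat_path, "r") as f:
        f.visititems(lambda name, obj:
                     print(name, obj.shape) if isinstance(obj, h5py.Dataset) else None)

def mat_to_csv(mat_path, dataset_name, csv_path):
    with h5py.File(mat_path, "r") as f:
        array = np.array(f[dataset_name])
    np.savetxt(csv_path, array, delimiter=",")

# list_datasets("Hardanger_storm_Nina.mat")   # find the variable names first
# mat_to_csv("Hardanger_storm_Nina.mat", "acc/A01/z", "A01_z.csv")
```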

And I have just discovered that my idea to replace the audio files in the twotone sampler with my own bridge sounds works… except that I have to go through them meticulously and make each NOTE a bridge sound, as they move up and down the scale while playing the data. I think that’s enough for today. Back to the sensors.
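
A note for tomorrow-me on how that note-making could be automated: turn one bridge recording into a whole set of pitch-shifted note files. The note list and file naming below are placeholders, since they have to match exactly whatever the existing twotone sample set uses.

```python
# Sketch: build a "bridge instrument" -- one pitch-shifted copy of a single
# bridge recording per note, saved under sampler-style filenames.

import librosa
import soundfile as sf

NOTES = ["C3", "D3", "E3", "G3", "A3", "C4", "D4", "E4", "G4", "A4"]

def build_bridge_instrument(sample_path, out_dir=".", reference_note="C3"):
    y, sr = librosa.load(sample_path, sr=None, mono=True)
    ref_midi = librosa.note_to_midi(reference_note)
    for note in NOTES:
        # Shift by the interval between this note and the reference note.
        steps = librosa.note_to_midi(note) - ref_midi
        shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
        sf.write(f"{out_dir}/bridge-{note}.wav", shifted, sr)

# build_bridge_instrument("rama_viii_cable.wav", out_dir="samples")
```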

For now I’m taking the full month raw csv files and parsing them by date. You gotta start somewhere – Storm Nina go!

Poetic Storm Nina Video Homage

Storm Norway – Sandvikjo stormen Nina 10.01.2015 – Halsnøy by monica

MELLOM BAKKAR OG BERG
Ivar Aasen

Mellom bakkar og berg ut med havet
heve nordmannen fenge sin heim,
der han sjølv heve tuftene grave
og sett sjølv sine hus oppå dei.

Han såg ut på dei steinute strender;
der var ingen som der hadde bygd.
«Lat oss rydja og byggja oss grender,
og så eiga me rudningen trygt»

Han såg ut på det bårute havet,
der var ruskut å leggja utpå,
men der leikade fisk nedi kavet,
og den leiken, den ville han sjå.

Fram på vinteren stundom han tenkte:
«Gjev eg var i eit varmare land!»
Men når vårsol i bakkane blenkte,
fekk han hug til si heimlege strand.

Og når liane grønkar som hagar,
når det lavar av blomar på strå,
og når netter er ljose som dagar,
kan han ingen stad venare sjå.

Sud om havet han stundom laut skrida:
Der var rikdom på benkjer og bord,
men ikring såg han trelldomen kvida
og så vende han atter mot nord.

Lat no andre om storleiken kivast,
lat deim bragla med rikdom og høgd,
mellom kaksar eg inkje kan trivast,
mellom jamningar helst er eg nøgd.

Sound: Mellom bakkar og berg ut med havet

BETWEEN HILLS AND MOUNTAINS
Ivar Aasen

Between hills and mountains out by the sea
the Norwegian has made his home,
where he himself has dug out the foundations
and set his own houses upon them.

He looked out over the stony shores;
no one had built there before him.
“Let us clear the ground and build our hamlets,
and then we will own the clearing in safety.”

He looked out over the billowing sea;
it was rough to set out upon,
but down in the deep the fish were playing,
and that play he wanted to see.

As winter drew on he sometimes thought:
“If only I were in a warmer land!”
But when the spring sun gleamed on the slopes,
he longed again for his home shore.

And when the hillsides turn green as gardens,
when the flowers pour out over the straw,
and when nights are as light as days,
he can see no place more beautiful.

South across the sea he sometimes had to travel:
there was wealth on benches and boards,
but all around he saw bondage and suffering,
and so he turned back toward the north.

Let others quarrel over greatness,
let them flaunt their riches and rank;
among the mighty I cannot thrive,
among equals I am most content.

Sound: Between hills and mountains out to sea

Wind-induced response of long-span suspension bridges subjected to span-wise non-uniform winds: a case study

Master thesis – NTNU Norwegian University of Science and Technology. Department of Structural Engineering. [E.M. Forbord & H. Hjellvik]

The response has also been predicted using wind data from the Hardanger Bridge, and the predictions have been compared to the measured response. Uniform profiles of wind speed and turbulence have been given different values based on the measured data, more specifically the mean value of all sensors and the value from the midmost wind sensor. It is seen that the choice of value does not affect the accuracy of response predictions. No matter what values are chosen, the predictions are quite inaccurate in general. Introducing a non-uniform profile of mean wind speed makes the predictions slightly better in some cases, but not notably, and the accuracy is still relatively low. When also including the non-uniformity of turbulence in the response calculations, the predicted response is reduced and the accuracy worsened with respect to the measured response. Accounting for the non-uniformity of self-excited forces shows almost no effect on the predictions. It is concluded that non-uniform wind profiles do not improve the accuracy of predicted bridge response, and that other uncertainties in the calculation methods have larger impact on the predictions than whether the non-uniform profiles are included or not.

2.1 Random Vibration Theory

6.2 Influence of Non-Uniform Turbulence Standard Deviations

In this section, the influence of span-wise non-uniform turbulence standard deviations on the dynamic response will be presented. Three wind speed profiles have been analysed with different turbulence std profiles. The wind speed profiles used are the linear profile and the sinus profile shown in Figures 5.5a and 5.5d, in addition to a uniform wind speed profile. The three different turbulence std profiles shown in Figure 6.15 are studied. They all have the same integrated sum along the span to make them comparable. The two non-uniform turbulence std profiles chosen have the opposite shapes of the wind speed profiles used in this section, because this is often seen in the measurement data from the Hardanger Bridge. Both of these turbulence std profiles will be compared to uniform turbulence standard deviations, for all three wind speed profiles. The horizontal turbulence std has a span-wise mean value of 20% of the wind profile’s mean wind speed, and for the vertical component the corresponding value is 10%. The effect of turbulence std on the response is included in the calculations through the wind spectra, which have a quadratic dependency on the turbulence std, as shown in Eq. (2.40). The span-wise variation of wind speed is also included in the formula. Therefore, to study the effect of the turbulence std profiles in isolation, the response using a uniform wind speed profile and different turbulence std profiles has been calculated. In addition come the linear and sinus wind profiles, to study whether the same turbulence std profiles have a different effect on these than on the uniform wind speed profile. The calculated response will only be presented for wind profiles with a mean wind speed of 10 m/s, because the trends, the shape and the differences of the response along the span are nearly the same for all mean wind speeds for the different wind speed profiles.

6.3 Influence of Non-Uniform Self Excited Forces

To study the influence of span-wise non-uniform self-excited forces on the dynamic response, several wind speed profiles have been numerically tested with both uniform and non-uniform self-excited forces. The non-uniform self-excited forces are caused by the non-uniform wind profile. The response is predicted with uniform self-excited forces where the aerodynamic properties depend on the mean wind speed of the wind profile, and with non-uniform self-excited forces where the aerodynamic properties vary along the span with the wind speed. Then the bridge responses in both cases are compared. The wind profiles tested are presented in Figure 5.5. As in section 6.1, the standard deviations of the turbulence components are span-wise uniform, such that the influence of the non-uniform self-excited forces is investigated separately. The horizontal and vertical turbulence standard deviations have been set to 20 and 10%, respectively, of the horizontal mean wind speed.

The influence of the non-uniform turbulence standard deviation is connected to the shape of the wind speeds along the span. As discussed previously, the response shifts towards where the wind speed is largest. The same can be said about the turbulence std. It was seen that the wind is dominant and shifts the response more than the turbulence std, for these particular shapes and this ratio between the mean wind speed and the standard deviation of the turbulence components. The horizontal shift in the response due to the non-uniform turbulence std comes from the cross-spectral densities of the turbulence components, which are high when two points with large turbulence std are considered.

The effect of including the non-uniform self-excited forces on the response increases with the mean wind speed of the wind profile. The difference between the response using non-uniform and uniform self-excited forces is largest for the highest mean wind speeds studied. The lateral response using non-uniform self-excited forces deviates less from the response using uniform self-excited forces than the vertical and torsional responses do. This is due to the aerodynamic derivatives which have been taken as zero. The reason for the large ratios in the vertical and torsional directions is the aerodynamic derivatives that reduce the total damping and stiffness of the structure, as mentioned. For lower mean wind speeds, 10–20 m/s, the difference is below 10% for all response components.

Master’s thesis, NTNU 2017 – permanent link

I think it’s safe to say they haven’t sonified it… yet!

Here are a few more links from my research on the Hardanger Bridge:

Official Website https://www.vegvesen.no/vegprosjekter/Hardangerbrua/InEnglish/the-hardanger-bridge

General Norway Bridges info https://www.vegvesen.no/en/roads/Roads+and+bridges/Bridges

The Neglected Bridges of Norway

https://www.vg.no/spesial/2017/de-forsomte-broene/kart/index-eng.php

Allplan Infrastructure

Data Sonification toolkit coming together! Today I’m learning about twotone and how to resuscitate a dead web audio interface. The wonderful Øystein Fjeldbo comes by to help me navigate this brave data world, and talks me through some of the options I’m exploring to make a proof-of-concept. First up, a tool based on a tutorial in the Brexification post from the MCT (Music, Communication and Technology) Master’s student blog at NTNU, which turns out to be not that adaptable. It’s very handy that it comes embedded in Max for Live (Connection Kit) and I get a sense of how easy it could be to use.

I can change the API to another live data stream, but there is no simple way to swap their synthesised sounds for a sample player. It turns out to be a completely different process, applying parameter changes to a tone generated by the patch, rather than making changes to an audio sample. So we take a look at the patch made by the students, who have adapted it to their own needs – but again, this is too specific and not quite what I want to do. Now I’m a little concerned that I will have to hand the process over to a custom build, but when we look into the code for the mysteriously vanishing twotone app, it turns out this is something Øystein can help me rebuild.
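
The difference is easy to see in miniature: with a generated tone, each data value simply sets a synthesis parameter such as the oscillator frequency. A toy Python version of that idea (my own sketch, not the students’ patch):

```python
# Sketch of the synthesis approach in miniature: each data value sets the
# frequency of a short generated sine tone -- easy when the patch owns its
# oscillator, awkward when you only have a fixed audio sample.

import numpy as np
import soundfile as sf

def data_to_tones(values, note_dur=0.25, fmin=110.0, fmax=880.0,
                  sr=44100, out_path="tones.wav"):
    values = np.asarray(values, dtype=float)
    # Normalise the data to 0..1, then map linearly onto the frequency range.
    norm = (values - values.min()) / (np.ptp(values) or 1.0)
    t = np.linspace(0, note_dur, int(sr * note_dur), endpoint=False)
    tones = [np.sin(2 * np.pi * (fmin + n * (fmax - fmin)) * t) * 0.3 for n in norm]
    sf.write(out_path, np.concatenate(tones), sr)

# data_to_tones([3.1, 5.2, 2.4, 8.8, 4.0])
```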

What is it for?

TwoTone can be used for understanding data through listening. It makes data more accessible.

Data

TwoTone can be used by itself or in tandem with visualization. Just like in the cinema, sounds add another layer to understanding.

Music

TwoTone is a fun and intuitive way to make your own compositions without any prior musical or technical knowledge.

Google NEWS & DATAVIZED TECHNOLOGIES

Or so the twotone web audio app promises – sadly, their domain is no longer active, even though it was launched with great fanfare on 5 March 2019. Only two years on, and it’s already obsolete. “TwoTone is imagined and made by Datavized Technologies with support from Google News Initiative. We hope you like it.” Did data sonification fall out of fashion so fast? Luckily it’s open source, and thanks to GitHub, the code still exists. But I have no idea what to do with it, or where to start.

https://twotone.io/ introduction

Where did it go? Technology graveyard from two years ago. Google labs and all.

The talented Øystein, who teaches Data Sonification, manages to get this zombie code going… from the graveyard of twotone.io https://github.com/datavized/twotone

It’s such a beautiful and simple concept: a web-based data sonification audio tool. You simply upload your dataset and then choose instrumentation, key, duration, etc. Once we have it running in my browser, I start working my way through the tutorials.

TWOTONE TUTORIALS

Introduction: https://twotone.io/tutorials/introduction-to-twotone/

Advanced: https://twotone.io/tutorials/advanced-features-tutorial/

NODEJS

Node.js – another thing we had to install to get twotone working!

https://nodejs.org/en/

https://nodejs.org/en/about/

I really have no idea what any of this means, but I trust that Øystein knows!

Despite not being able to swap the pre-made samples out for my own sounds, it’s satisfying working through the options to make interesting sonification compositions.

I’m pretty happy to be able to add my own csv data sets – a couple of examples from my research. Golden Gate Bridge accelerometers recorded on a mobile phone by Stephen Wylie, via kaggle https://www.kaggle.com/mrcity/golden-gate-accel-20180512 This is one minute of data from the “Linear Accelerometer” of the Physics Toolbox Suite v1.8.6 for Android. The data was collected from a Pixel 2 phone on the east side of the Golden Gate Bridge at the midpoint between the two towers of the bridge at approximately 3:20 PM local time on May 12, 2018. CC0 Public Domain.

and the composition looks like this…

Adding sound later – the export is doing a glitchy thing where it only gives a minute of the sound, not all 30 minutes… But it’s only a problem with this one, not my next attempt using pedestrian data from the Brooklyn Bridge.

NEW YORK CITY OPEN DATASETS

Transportation: Brooklyn Bridge Pedestrian Count

Towards Manhattan / Towards Brooklyn, weather summary, precipitation.

Now it gets fun to start playing with filters: defining the key, speed and instruments.
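
Under the hood, ‘defining the key’ boils down to snapping each scaled data value to a note in a chosen scale. A tiny sketch of that mapping – the scale and range here are arbitrary, just to show the idea:

```python
# Sketch: quantise data values to notes in a key.
# The scale and register are arbitrary choices for illustration.

import numpy as np

C_MAJOR_PENTATONIC = [0, 2, 4, 7, 9]   # semitone offsets within an octave

def data_to_notes(values, base_midi=48, octaves=2, scale=C_MAJOR_PENTATONIC):
    values = np.asarray(values, dtype=float)
    # Scale the data to 0..1, spread it over the available scale degrees,
    # then convert each degree back to a MIDI note number.
    norm = (values - values.min()) / (np.ptp(values) or 1.0)
    degrees = np.round(norm * (len(scale) * octaves - 1)).astype(int)
    return [base_midi + 12 * (d // len(scale)) + scale[d % len(scale)]
            for d in degrees]

# data_to_notes([120, 340, 95, 800, 410])  ->  a list of MIDI note numbers
```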

I’ve figured out a hack to get my own samples in there – just have to name them exactly as the existing sets are named. Going to work on that tomorrow.

There’s a range of durations, from 2 seconds to 14 seconds, and I think the actual notes can be sacrificed for a variety of sounds. This is where it starts to get really creative! And sounds like some wild free jazz. Now I need to do some more study in order to really get a sense of the possibilities of sonification. Here’s the lecture series by Thomas Hermann – one of the people who literally wrote the book on sonification.

SONIFICATION AND SOUND DESIGN, MCT 2019

MCT Data Sonification course taught by Thomas Hermann: techniques beyond parameter mapping for applications in data mining and bio feedback

MCT4046 Sonification and Sound Design Seminar Series – Spring 2019
Speaker: Thomas Hermann

Facilitator: Shreejay Shrestha
Video production: MCT students 2018-2020
Video recording: Shreejay Shrestha, Guy Sion
Audio recording: Jørgen Nygård Varpe
A/V editing & processing: Sepehr Haghighi
Design: Karolina Jawad
Music: Sepehr Haghighi
Technical support: Robin Støckert, Anders Tveit, Alexander Refsum Jensenius
Administration: Maj Vester Larsen
Consultant: Sigurd Saue
Seminar Series Curation: Anna Xambó
Recorded in NTNU Trondheim 2019

ARNE NORDHEIM

Norwegian composer Arne Nordheim – recommended listening from Øystein for generative composition and experimental music techniques.

Hardanger Bridge monitoring

Next step: contact NTNU engineers and ask for access to their data, pretty please?!

NTNU Case study: The Hardanger Bridge

The Hardanger Bridge is a 1380 meters long suspension bridge, located on the western coast of Norway. It is the longest suspension bridge in Norway, and the 10th longest in the world, which makes it an interesting case study.  Our research in the field of suspension bridges requires knowledge from several engineering fields; such as aerodynamics, signal processing, finite element analysis and control theory. 

Experimental surveys

A comprehensive measurement system is operating on the Hardanger Bridge to improve the current understanding of the dynamic behaviour of long-span suspension bridges and their interaction with wind. This includes sensors for measurement of both response and environmental excitation. The system is described in detail under Structural monitoring (Structural monitoring – The Hardanger Bridge).

Source: https://www.ntnu.edu/kt/research/dynamics/monitoring/hardanger

Figure 2: Illustration of the front view of the Hardanger Bridge, showing accelerometer and anemometer positions. Illustration by NTNU/Heidi Kvåle.

The Hardanger Bridge monitoring project

The Hardanger Bridge, opened in 2013, is a 1380 m long suspension bridge crossing the Hardanger fjord in western Norway. The main span is 1310 m long, which makes it the longest suspension bridge in Norway and the 10th longest in the world. The two concrete bridge towers are 200 m high, made using the slip forming technique, and support the steel bridge casing and the cables that are anchored in the mountainside.

The main objective of the monitoring project is to study the dynamic behaviour of the suspension bridge, especially the wind-induced vibrations. All data generated by the extensive measuring system is used directly in research related to the ferry-free coastal highway E39 project, in both PhD and master theses.

The monitoring system consists of the following sensors (illustrated in Figure 2):

  • 20 triaxial accelerometers
  • 9 anemometers

System identification and modal analysis

Based on recordings established by the monitoring system, parameters characterizing the system behaviour – typically natural frequencies, damping ratios, and mode shapes – are estimated (system identification). The results are a highly valuable asset for these applications:

  • Studying the dynamic behaviour of the bridge
  • Updating the numerical model, such that it better describes the real behaviour of the bridge
  • Verification and possible improvement of the current state-of-the-art methods used for numerical modelling
Modal shape of the Hardanger Bridge. Model and animation by NTNU/Øyvind Wiig Petersen.

Load modelling and identification

Modelling of the wind-induced forces on suspension bridges is crucially important for accurate prediction of the dynamic response.

The modelling of the environmental wind loads hinges on the description of the spatial and temporal characteristics of the wind field. The wind data from the field measurements will be used to characterize the wind field. The aim is to test the performance of load models and reveal the uncertainty involved in response prediction.

The models for motion-induced loads most commonly used in bridge aerodynamics are linear engineering approximations. It has been shown in several case studies that the models are working well when the response of the bridge is dominated by one vibration mode in each direction. Taking into account that the principle of superposition does not hold in fluid dynamics, it is unknown if the models will be able to predict reliable results for a more complex motion. Thus, there is a need to test the accuracy of the linear assumption introduced in the modelling of the self-excited forces. This challenging task does not only require development of new experimental setup but also identification techniques able to work with an arbitrary motion.

People:

Postdocs and PhD candidates working with suspension bridges:

SOURCE: https://www.ntnu.edu/kt/research/dynamics/research/long-span/suspension-bridges

Nice Rain

performance and discussion by Gilles Aubry

May 10 2013, 19h
Errant Bodies project space
Kollwitzstrasse 97, 10435, Berlin.

The project Nice Rain takes the city of Berlin as a pretext for exploring the notion of the urban sound document from the perspective of a diversity of recording practices and intentions. Confronted with the difficulty of discovering strong new narratives by simply listening to Berlin’s public space, I’ve decided to explore instead the narratives embedded within already existing audio documents related to the city. As an alternative to a classic city soundscape like some of my previous works, I will present a collection of Berlin audio recordings from the personal archives of various practitioners whom I know personally, some living in Berlin and some elsewhere.

While all of the recordings will be referring to Berlin as their location, each one will represent a specific recording practice corresponding to an intention more or less defined by its author. The live mixing of these files over multiple loudspeakers will thus be an attempt to create something like a soundscape of (documentary) intentions, while at the same time generating an arbitrary sound travel through various public locations of the city. One aim of the project is also to discuss and re-situate the community of field recording practices within a field of intentionality and reflexivity, as possibly opposed to a logic of place.

With announced text and audio contributions by:

Rinus van Alebeek, Mario Asef, Boris Baltschun & Serge Baghdassarians, Alessandro Bosetti, Rob Curgenven, Peter Cusack, Anke Eckardt, Christina Ertl Shirley, Helena Gough, Andy Graydon, Ezgi Kilincaslan, Achim Langerer, Felicity Magan, Israel Martinez, Anders Lauge Meldgaard, Valeria Merlini, Udo Noll, Dave Philips, Stephan Roigk, Jodi Rose, Fritz Schlüter, Tapeman (Helge Neidhardt), Valerio Tricoli, Antje Vowinckel and Kathrin Wildner.

Recording the demolition of the Palast der Republik, Berlin 2008

(picture by Jodi Rose, 2008)

*I am very happy to be invited to participate in this event, enjoying the discussion via email with Gilles around notions of intention & authorship, composition and place. JR

Gilles Aubry is a Swiss sound artist living in Berlin since 2002. Trained initially as a sax player and composer, he graduated in 2010 with a Master’s in Sound Studies from the University of the Arts (UDK) in Berlin. His artistic practice is based on an auditory approach to the real, informed by research on cultural and historical aspects of sound production and reception. Combining ethnography, critical discourse and formal experiments, Aubry creates installations, performances, compositions, audio essays and radio plays. His sonic images (phonographies) of more or less identified situations stand as an attempt to challenge problematic aspects of visual representation.

Nice Rain is part of THIS IS THE END, a research project curated by Marta Ferretti and Gaia Martino about the relationship between public space and narration in the specific context of Berlin. THIS IS THE END is hosted from April 15th to May 12th at Errant Bodies project space, Berlin.

MuseRuole – women in experimental music

ON THE AIR: CALL FOR SOUND PIECES

Deadline 17th May 2013

MuseRuole is a festival dedicated to experimental music that puts contemporary female musicians and performers at its centre. It is a journey of discovery into the wide world of current musical research, looking for specifically female modes of expression, with their own grammar and personal style. This year (2013) we focus on the theme of radio as a space for musical research and a medium of technological innovation, including notions of sending and receiving in equal measure.

Even today radio is an important method of communication: it remains one of the most powerful, cheapest, most accessible and most world-spanning of media. Despite its long history as one of the first mass media of the last century, radio has always been able to renew itself, adapting to changing realities and remaining in step with the future. And so radio is an instrument for disseminating knowledge as well as a research space for new technologies and formats: as a global platform it offers the possibility to develop innovative networks and ways to connect.

MuseRuole – Radio Edition invites female artists, composers and musicians to participate in the ‘virtual jukebox on the air’. To submit, please send us a short composition, field recording or excerpt of a longer work (maximum length: 5 minutes). The selected works will become part of this year’s edition of MuseRuole and will be introduced at Museion – Passage (Museum of modern and contemporary art, Bozen – IT) on 5th June 2013. They will also be made available for download via Soundcloud. In addition, they will be played via Radia, an international network of 24 independent stations, on national and local radio stations. Last but not least, from 7th–30th June 2013 they will be part of a listening station at the Women’s Museum in Meran, Italy.

Submission requirements:

– no application fees
– submission of stereo audio files in .wav or .aif format only
– the maximum length is 5 minutes
– please send the file using ‘WeTransfer’ or another file-sharing service to the email address: MuseRuole@yahoo.it
– as part of the project, your file will be available for download from Soundcloud (Creative Commons license)
Please include in your submission:

– a short description of the piece (max 100 words), duration and year of creation
– a short biography (max 100 words)
– if possible, a web-link

Deadline: May the 17th 2013

Artistic director: Valeria Merlini

The project is organized together with Assessorato alla Cultura e alla Convivenza, Comune Bolzano.

In collaboration with Museia, Transart, Reboot fm, Radia.

Andrey Smirnov, Moscow, 2011

Graphical (drawn) Sound is a technology for synthesizing sound from light that was invented in Soviet Russia in 1929 as a consequence of the newly invented sound-on-film technology, which made it possible to access sound as a trace in a form that could be studied and manipulated. It also opened up the way for a systematic analysis of these traces, such that they could be used to produce any sound at will…

The first practical sound-on-film systems were created almost simultaneously in the USSR, USA and Germany. In Soviet Russia Pavel Tager initiated developments in 1926 in Moscow. Just a few months later in 1927, Alexander Shorin started his research in Leningrad.


Tager’s system, the Tagephon, was based on intensive (variable density) optical recording on film, while in Shorin’s Kinap system the method of transversal (variable area) optical recording on film was realized.

By 1936 there were several main, relatively comparable trends of Graphical Sound in Russia:
– Hand-drawn Ornamental Sound, achieved by means of shooting still images of drawn sound waves on an animation stand, with final soundtracks produced in a transversal form (Arseny Avraamov, early Boris Yankovsky);
– Hand-made Paper Sound with final transversal soundtracks (Nikolai Voinov);
– The Variophone or Automated Paper Sound with soundtracks in both transversal and intensive form (Evgeny Sholpo, Georgy Rimsky-Korsakov);
– The Syntones method, based on the idea of spectral analysis, decomposition and resynthesis, developed in 1932-1935 by a pupil of Arseny Avraamov, the young painter and acoustician Boris Yankovsky.

At exactly the same time very similar efforts were being undertaken in Germany by Rudolf Pfenninger in Munich and, somewhat later, by the animator and filmmaker Oskar Fischinger in Berlin. Among the researchers working with Graphical Sound after World War II were the famous filmmaker Norman McLaren (Canada) and the composer and inventor Daphne Oram (UK).

Ornamental Sound

The ornamental sound technique, developed in 1929-1930 by Arseny Avraamov, was similar to German animator and filmmaker Oscar Fischinger’s sounding ornaments first presented in 1932. In 1930, however, Avraamov was the first to demonstrate experimental sound pieces – based on geometric profiles and ornaments – produced purely through drawing methods. This was achieved by means of shooting still images of drawn sound waves on an animation stand.


In December 1930 Mikhail Tsekhanovsky wrote in his article About the Drawn Sound Film: “with the invention of new drawn sound techniques (developed by Arseny Avraamov in Moscow, Sholpo and [Georgy] Rimsky-Korsakov in Leningrad) we are achieving a real possibility of gaining a new level of perfection: both sound and the visual canvas will be developing completely in parallel from the first to the last frame […] Thus the drawn sound film is a new artistic trend in which for the first time in our history music and art meet each other.” [2]

In autumn 1930 Avraamov founded the Multzvuk group at the Mosfilm Productions Company in Moscow. To produce his first drawn ornamental soundtracks he had on staff a special draughtsman, cameraman Nikolai Zhelynsky, animator Nikolai Voinov and acoustician Boris Yankovsky, who was responsible for the translation of musical scores into Avraamov’s microtonal Welttonsystem as well as Samoilov’s Ober-Unter-Tone Harmony system. The final scores were coded in Yankovsky’s 72-step ultrachromatic scale, with the dynamics and speed variations indicated by the number of frames. Yankovsky was also involved in the production of acoustic experimental studies, developing methods for the synthesis of sounds with glissando, timbre crossfades, timbre variations and polyphony by means of multiple shooting on the same optical soundtrack (an alternative to multi-track recording, which was not yet available).

From 1930-34 more than 2000 meters of sound track were produced by Avraamov’s Multzvuk group, including the experimental films Ornamental Animation, Marusia Otravilas, Chinese Tune, Organ Cords, Untertonikum, Prelude, Piruet, Staccato Studies, Dancing Etude and Flute Study. In autumn 1931 the Multzvuk group moved to NIKFI (Scientific Research Institute for Cinema and Photography) and was renamed Syntonfilm Laboratory. In December 1932 NIKFI stopped supporting Syntonfilm and the group moved to Mezhrabpomfilm where in 1934 it was closed as it was unable to justify itself economically. The whole archive was kept for many years at Avraamov’s apartment, where in 1936-37, during Avraamov’s trip to the Caucasus, it was burned by his own sons, making rockets and smoke screens with the old nitro-film tapes, which were highly flammable.

Because of the cross-disciplinary nature of the new technique, people involved in it had to be skilled not only in music, but in acoustics, mathematics, sound-on-film technology and engineering. As a result even skilled journalists often could not understand the physical meaning of the phenomena under consideration or specific technological ideas. Having no developed terminology, many mistakes and unexpected “puzzles” appeared in their writings. Moreover, there were several known research groups – competitors in Russia and Germany working in parallel. It led to a very specific problem – encryption of the information. For instance, in the well-known photograph Oscar Fischinger holds ‘fake’ rolls made by his Studio for publicity purposes as he did not want his competitors to learn his actual techniques. He never used rolls as large as this – they were fakes. [3] Yankovsky had a very special way of making notes on his ideas. It is impossible to understand the construction of his tools from reading one description without referring to several other manuscripts that offer important keys for understanding it.

Syntones and Audio Computing

In 1931-32 Boris Yankovsky (1904-1973) was on the staff of the Multzvuk group. In 1932, however, disappointed with its Ornamental Sound approach, he left the group. Unlike most of his colleagues he understood that the waveform does not represent the tone colour uniformly and that only the spectrum of sound developed in time with all the nuances of its temporal transitions can give a complete picture. Of all the early graphical sound pioneers, Yankovsky alone pursued the approach of spectral analysis, decomposition and re-synthesis. His concept was based on the belief that it is possible to develop a universal library of sounds similar to Mendeleev’s table of chemical elements. His curves were spectral templates, semiotic entities that could be combined to produce sound hybrids. As an option he developed several sound processing techniques including pitch shifting and time stretching based on the separation of spectral content and formants, resembling recent computer music techniques of cross synthesis and the phase vocoder. To realize these ideas he invented a special instrument, the Vibroexponator – the most paradigm-shifting proposition of the mid-1930s.

In 1935 in one of his manuscripts Yankovsky wrote: ‘It is important now to conquer and increase the smoothness of tone colours, flowing rainbows of spectral colours in sound, instead of monotonous colouring of stationary sounding fixed geometric figures [wave shapes], although the nature of these phenomena is not yet clear. The premises leading to the expansion of these phenomena – life inside the sound spectrum – give us the nature of the musical instruments themselves, but “nature is the best mentor” (Leonardo da Vinci) […] The new technology is moving towards the trends of musical renovation, helping us to define new ways for the Art of Music. This new technology is able to help liberate us from the cacophony of the well-tempered scale and related noises. Its name is Electro-Acoustics and it is the basis for Electro-Music and Graphical Sound’.


Read the full text by Andrey Smirnov here – and more in his book with Jeremy Deller, Sound in Z: Experiments in Sound and Electronic Music in Early 20th-Century Russia, published May 2013:

“Russia, 1917 – inspired by revolutionary ideas, artists and enthusiasts developed innumerable musical inventions, instruments and ideas often long ahead of their time – a culture that was to be cut off in its prime as it collided with the totalitarian state of the 1930s.

Andrey Smirnov’s account of the period offers an engaging introduction to some of the key figures and their work, including Arseny Avraamov’s open-air performance of 1922 featuring the Caspian flotilla, artillery guns, hydroplanes and all the town’s factory sirens, and Alexei Gastev, the polymath who coined the term ‘bio-mechanics’.

Shedding new light on better-known figures such as Leon Theremin (inventor of the world’s first electronic musical instrument, the Theremin), the publication also investigates the work of a number of pioneers of electronic sound tracks using ‘graphical sound’ techniques.

Sound in Z documents an extraordinary and largely forgotten chapter in the history of music and audio technology.” 

Chocolate Vinyl: Music you can eat!

Julia Drouhin at Sonic Protest festival La Générale, Paris. April 12th, 2013

Playing two of my favourite things together – chocolate and music – at the wonderful venue in an old power station, La Générale Nord-Est, Coopérative artistique, politique et sociale. “An invocation in chocolate of dead singers, whose voices are inspired by a recording of Clair de Lune from 1860.”

14, av. Parmentier Paris XIe – M° Voltaire.

Julia et ses disques en chocolat, Paris la Générale, Sonic Protest 14apr2013 from NO MORE RETURN on Vimeo.

DISCO GHOST, Julia Drouhin (created in 2013 for MoMa / MOFO / MONA, Hobart, Tasmania). A chocolate invocation of dead singers, whose voices are inspired by a phonautographic recording of Au Clair de la lune from 1860. Listen to the disc, share the ex-voto and digest the music.

Disco Ghost – Julia Drouhin at Sonic Protest, la Générale, Paris 2013

Sonic Protest Festival

Sonic Protest, 9th edition, 11–21 April 2013!

This year the festival presents, over 10 days, more than 50 concerts in 11 cities across France, Switzerland and Belgium, plus exhibitions and 4 free-entry events. A subjective and non-exhaustive panorama of experimental music and the visual-art practices associated with it.

First concerts in France for: THE DEAD C, TORTURING NURSE, URSULA BOGNER, PALAIS SCHAUMBURG, MECANATION (Pierre BASTIEN & ONE MAN NATION), Marc HURTADO with VOMIR and GRAVETEMPLE!

Also on the programme: a brand-new performance by The RED KRAYOLA as a European exclusive, several projects with Thierry MADIOT, a collaboration between CHEVEU & Xavier KLAINE (Winter Family), and some fifteen other out-of-the-ordinary concerts, including CUT HANDS, LES REINES PROCHAINES, Frédéric LE JUNTER, COMPUTER PIPA (Kink Gong and Li Daiguo), René ZOSSO & Anne OSNOWYCZ, Flo KAUFMANN and MICRO_PENIS.

In the run-up to the festival, Sonic Protest presents a series of free-entry exhibitions and installations: drawings by Nick BLINKO of Rudimentary Peni (as part of “HEY! modern art & pop culture exhibition Part II”) from 11 April to 18 May, and from 12 to 15 April the festival takes over la Générale du Nord-Est with 3 sound and/or video installations: CORPUS (Art of Failure), YOU ARE THE LISTENER (Thierry Madiot) and CCRASH TV (Jérôme Fino & Yann Leguay).

Online ticketing https://fr.yesgolive.com/sonic-protest
Day pass (10 to 15 euros) and festival pass (60 euros: 6 evenings from 16 to 21 April + double-CD compilation).

PARIS

FRIDAY 12 APRIL
LA GÉNÉRALE DU NORD-EST
14, Av. Parmentier, 75011 Paris
Doors open: 18:30
Free entry

Openings
CORPUS
Art of Failure

FRANCE

YOU ARE THE LISTENER
Thierry Madiot

Until 15 April
FRANCE

CCRASH TV
Jérôme Fino, Yann Leguay

FRANCE

Concerts
Andy Guhl
SWITZERLAND

Nicolas Maigret
FRANCE

SATURDAY 13
SUNDAY 14 APRIL
LA GAÎTÉ LYRIQUE
3bis, Rue Papin, 75003 Paris
Doors open: 14:30
Free entry

PHONOSCOPIE
Thierry Madiot, Yanik Miossec

FRANCE
Booking recommended!

MONDAY 15 APRIL
LA GÉNÉRALE DU NORD-EST
14, Av. Parmentier, 75011 Paris
Doors open: 18:30
Free entry

Listening session, talk
and screenings:
KINK GONG
FRANCE

Concert
Li Dai Guo
CHINA

“In 2012, equipped with sound recording devices, speakers, percussion and cameras, co-producer Jonathan Uliel Saldanha, cameraman José Roseira, percussionist Gustavo Costa and Raz Mesinai go deep inside one of the oldest underground mines in all of Europe, dating back thousands of years. After descending hundreds of feet beneath the earth, they begin to encounter paranormal sounds, which they spend weeks recording, finding themselves lost in a labyrinth, surrounded by mysterious sounds and no way out.

Incorporating footage and sound recordings from his underground expeditions, Raz Mesinai blurs the line between reality and fiction, creating a dream like narrative of darkness at its darkest.”

Tunnel Vision watch trailer

Written & directed by Raz Mesinai
Produced by Raz Mesinai & Jonathan Uliel Saldanha
Music by Jonathan Uliel Saldanha

tunnel-vision

Sounding the Depths

“As a composer in the post-dub era, Jerusalem-born and New York-raised Raz Mesinai has spent the past 25 years burrowing under the surface realms of genre and song format to find a reverberant sonic space of his own. With Tunnel Vision (Tzadik), his filmmaking debut, Mesinai takes that burrowing to another level by tying together the praxes of tunneling, sound composition and non-linear narrative.

Tunnel Vision centers around an amorphous group of three to five people finding and exploring one of the many ancient mines under the city of Porto in Portugal. The film is skillfully directed and edited by Mesinai—who many would know for his work in underground electronic music as Sub Dub and Badawi—and scored by fellow experimental-dub artist Jonathan Uliel Saldanha, who among other projects fronts the Porto-based “voodoo-dub” outfit HYY & the Macumbas.

Mesinai establishes the subversive nature of tunneling early in the film. As the director explains his fascination with the underground via voiceover, a character designated Geo-Phone Bob—headphoned, suited in orange and twiddling knobs on a box—performs a cryptic, surveyor-style metering ritual on a street corner at night that draws the scrutiny of a security guard watching him from 20 feet away. Clearly, even preparing for the act of going underground is subject to monitoring on the surface…

Saldanha’s abstract score is performed by Mesinai and a crew of 9 others including the composer himself. Saldanha blends bass, percussion, winds, electronics and some intense wailing by Jessika Kenney and Catarina Miranda to create a ritualized atmosphere to the proceedings that proves both hypnotizing and extremely effective.

Related to this, Tunnel Vision extends a narrative format that Mesinai calls “Dub Fiction” and describes as “a form of storytelling utilizing various mediums of modern technology to create elastic narratives which can be manipulated and, essentially, remixed by others.” In short, storylines and plots can resemble sound signals that can be routed through and effected by various filters. A storyline can decay or become reverberant, and aspects of it can echo or face other effects of repetition.”

read the full article here on Souciant

by Ron Nachmann on Apr 8, 2013

Souciant is a magazine of politics and culture. Or culture and politics. It all depends on your starting point.

Published daily, Souciant was founded in 2010 by a group of longtime friends split between the United States and Europe. The offspring of veterans of such iconographic American independent media sources as Chicago’s Punk Planet magazine and Seattle’s Sub Pop records, Yahoo! and the BBC, Souciant is a platform for criticism and creativity in an era of global community.

For Souciant, all media is equal. Souciant publishes everything. That is, everything that conforms to our editorial mandate.

‘Souciant’ means we care.

Open Call

Association of Multimedia Artists AUROPOLIS and NOFM 2 [ARTSYNC] web radio invite performers from the fields of electro-acoustics, experimental electronics, voice experiment, sound art, improvisation, sound objects and field recording to participate in an international artistic experiment.

Imaginary Orchestra is a metaphor for the outcome of the project’s concept: gathering sound material from all parts of the world through a hyperspace pool – whether the materials are recorded for this purpose or are excerpts from home sessions, previously recorded or released sounds, or live performance archives – with the aim of publishing one or more digital releases per year, composed and produced by a sound artist in residence chosen to research the gathered archive and create unique works.

The core concept of IO is to overcome all the physical barriers and funding obstacles that are disabling the mobility of culture and disheartening collaborations with artists out of reach, whether it is a question of geographical distance or of difference in career phase [established and non-established treated equally within the same project].

The IO project is set to become a permanent self-organized platform for investigating a new paradigm of audible communications and artistic collaborations through the web.

Send your contributions, works or raw materials to culturalgerila@gmail.com [for files under 20MB only!] or request an invitation to the IO SugarSync or DropBox folders. Please indicate in the mail subject – For IO; tag your files the way you want them to be signed, and send a link or basic info about your activities.

For the 1st digital release, to be published in February 2014, we are collecting material until 1st October 2013; all files we receive after this date will be considered for the following releases.

AUROPOLIS and NOFM 2 [ARTSYNC] Belgrade, Serbia.


Dear friends,

Please find enclosed an open call that I hope will interest some of you. It is a collective experiment – an effort to build a platform dedicated to collaborations focused on experimentation with form, web-based sound exchange and performing – a cross-border cooperation stretching limits.

We invite performers, sound artists, electronic musicians, field recording artists and composers to join this spontaneous initiative that might make a difference and boost the visibility of sound-related arts.

At the very beginning of this production, we have set these final outcomes:
– an annual digital release with collections, remixes and multichannel installations
– an exhibition in Belgrade
– web streaming sessions and individual live acts
– full dedication and presentation in dedicated slots through the NOFM2 [ARTSYNC] 24/7 web streaming channel.

Hope you’ll join us as well and spread the word!

NOTE: If you don’t want to join Imaginary Orchestra but would like to be presented on the ArtSync web radio, please feel free to send your releases to this same email address.

Best regards and lots of courage for your art productions,

Manja

auropolis.org
nofmrs
supernovapoetry.net
g12hub.com