Bridge playing trumpet

After two days in the studio I worked through so many of the conceptual questions that have been bugging me for months. And opened up a stack of new ones.

Basically, I managed to hack my way around the TwoTone file structure and get my bridge samples into their system, playing as instruments in the data sonification tool.

[Audio: Brooklyn Bridge plays trumpet (Data Sonification TwoTone, Example 1)]

Trumpets now play the Rama VIII Bridge in Bangkok, and the glockenspiel plays the Golden Gate. The problem is that all of these bridge sounds are already so complex that once you start mapping them to different notes in response to shifts in the data, it’s pure sonic chaos! If I had a system that played a sample and shifted its pitch as the data changes, that would be far more seamless. I am enjoying the ad hoc nature of this process, though, and the way it is forcing me to consider, at a much deeper level, the relationship between the data and the sounds.
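To sketch roughly what I mean (outside TwoTone, just an idea in Python using librosa and soundfile – the file names and data column here are placeholders, not the real bridge files):

```python
# Sketch: play ONE bridge sample and shift its pitch as the data changes,
# instead of triggering a different note per data point.
# Assumes librosa, soundfile and pandas are installed; 'bridge_sample.wav'
# and the 'acceleration' column are placeholders.

import numpy as np
import pandas as pd
import librosa
import soundfile as sf

sample, sr = librosa.load("bridge_sample.wav", sr=None, mono=True)
data = pd.read_csv("bridge_sensor.csv")["acceleration"].to_numpy()

# Thin the data so the render stays manageable (at most ~64 values here).
step = max(1, len(data) // 64)
data = data[::step]

# Map the data range onto a pitch range of +/- 12 semitones.
lo, hi = data.min(), data.max()
semitones = (data - lo) / (hi - lo) * 24 - 12

# Render one pitch-shifted copy of the sample per data value and join them.
voices = [librosa.effects.pitch_shift(sample, sr=sr, n_steps=float(n)) for n in semitones]
sf.write("bridge_pitch_sweep.wav", np.concatenate(voices), sr)
```

Even this brute-force version – one pitch-shifted copy of the sample per data value – is closer to the single evolving voice I’m after than a new note for every shift.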

[Audio: Golden Gate Bridge Accelerometer Data Sonification (TwoTone Bridge Mix, Example 2)]

As imagined, the one-to-one parameter mapping of sound sample to dataset is not actually that interesting in terms of compositional complexity – it gets repetitive very quickly, and extremely dense sonically if I haven’t chosen the initial samples well.

Something one-note and simple, not too much going on, without multiple beats or tones.

I have uploaded composition samples, and in the process I am still navigating how much of this creative experimentation to share and what to keep private for the eventual ‘outcome’. Although, as we discussed in the Publishing as Practice workshop today, having ways to show your artistic process can be both liberating and engaging.

Liberating, because it frees you from the grip of perfectionism + as my dear friend Ernest always says: finished is better than perfect! Engaging, because while it may pierce the bubble of mystery around your work, it can also make you more approachable. Since this is a project that relies heavily on collaboration, it makes sense for me to make the process as transparent as possible. This allows potential creative partners to dive into the various threads of the creative process, and gives a quick overview for anyone interested in working together. It’s also a little alarming, as nothing is ‘finished’ and I don’t feel nearly ready to make it public. Yet here I am, writing for you – whoever you are, dear reader – to lay my artistic soul bare.

There was something else. Ah yes, the constraints of the TwoTone platform mean that I have to take a very ‘zen’ approach to the work. Like the Tibetan monks I saw in New York City back in 1989, drawing sand mandalas: such intricate, beautiful, painstaking work, which they released into the river once it was finished. You can’t stay attached to the outcome if you keep working through the process, over and over again.

Also, there is no ONE definitive work that will come from this. Many variations will emerge, and I am starting to make peace with that as part of the creative process.

I think perhaps I had envisaged – or ensounded? – a massive, global, all-the-bridges-playing-together event. But honestly, that is only possible as a conceptual frame. If you take even the 29 sensors on the ONE bridge and try to make a piece out of them, the resulting sonic chaos is going to be almost unbearable to listen to. So I need to find ways to pin it back to a context, a reason for listening and connecting. That is, the bridges have to relate to each other in some way, and to my own practice and experience; otherwise it becomes totally random.

I am starting to find more interesting questions through this process, and dealing with technical issues that I hadn’t even considered – like the sheer volume of data generated by a bridge sensor, and the compatibility (or otherwise) of the various types of data with each other and with the systems I need to use for creating sound compositions.

As an example, I have figured out that the selected storm data from the Hardanger Bridge structural monitoring sensors is only available in .mat format, but the CSV files I need are massive and broken down by hour throughout the day. So I needed to find out exactly what time the storm hit. Hurricane Nina seems like a good place to start: around 2-8pm on a Saturday, 10th January 2015. I have attempted to open those CSV files, but their compression is not playing nice with my computer. It takes another level of engagement now to connect with the engineers and find out if they are interested in the sonification process, and how possible it is to switch formats.
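If the engineers can’t switch formats, Python can probably read the .mat files directly. A rough sketch, assuming the files are older-style MAT (v7.3 files would need h5py instead) and using guessed variable names (‘acc_data’, ‘timestamps’) that I would have to swap for whatever the Hardanger files actually contain:

```python
# Sketch: pull one storm window out of a .mat sensor file and save it as CSV,
# instead of fighting the huge hourly CSV exports.
# Variable names are guesses; inspect the real file structure first.

from scipy.io import loadmat   # works for MAT v7 files; v7.3 needs h5py
import numpy as np
import pandas as pd

mat = loadmat("hardanger_2015-01-10.mat")
print(mat.keys())                              # see what the engineers actually stored

acc = np.asarray(mat["acc_data"]).squeeze()    # assumed accelerometer channel
t = np.asarray(mat["timestamps"]).squeeze()    # assumed seconds since midnight

# Keep roughly 14:00-20:00 on Saturday 10 January 2015 (storm Nina).
window = (t >= 14 * 3600) & (t <= 20 * 3600)
pd.DataFrame({"time_s": t[window], "acceleration": acc[window]}).to_csv(
    "nina_storm_window.csv", index=False)
```

At least then I would only be carrying around the six-hour storm window instead of a full day of hourly files.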

I am charmed to discover that the accelerometers used are made by Canterbury Seismic Instruments in Christchurch, New Zealand, where my mother and grandmother were born. Which makes complete sense, given the magnitude and frequency of the earthquakes NZ needs to monitor. Cusp-3 Series Strong Motion Accelerographs.

Technical Specifications PDF – I’m curious: is it possible to convert the accelerometer signal to audio?

I have done this with the B&K accelerometers on the Green Bridge permanent installation in Brisbane, and it only took a simple adapter…
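For the digital data, with no adapter in sight, the same idea can be approximated in software: audification, writing the normalised acceleration samples straight out as audio at a faster playback rate. A rough sketch, with placeholder file and column names:

```python
# Sketch: 'accelerometer to audio' in software (audification) -- take the raw
# acceleration samples, normalise them, and write them as a WAV file played
# back fast enough to hear. File and column names are placeholders.

import numpy as np
import pandas as pd
import soundfile as sf

acc = pd.read_csv("nina_storm_window.csv")["acceleration"].to_numpy(dtype=float)

# Remove the DC offset and scale into the -1..1 range audio expects.
acc = acc - acc.mean()
acc = acc / np.max(np.abs(acc))

# A sensor sampled at, say, 200 Hz is far below audio rate; writing the same
# samples at 8000 Hz speeds playback up ~40x and lifts it into audible range.
sf.write("bridge_audified.wav", acc, samplerate=8000)
```

Playing the sensor data back at audio rate compresses hours of bridge movement into minutes – roughly the software equivalent of plugging the accelerometer output into an audio input.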

[Audio: UNDER CONSTRUCTION by Jodi Rose]

That brings us up to date, and to my decision to try selecting more subtle bridge samples as a starting point and find out how they sound with the two datasets I am already working with. Then I need to get my head around the generative composition tools and work on mapping out the structure of the piece for the Church of Our Lady.
