Algorithmic Music

After two days in the studio I worked through so many of the conceptual questions that have been bugging me for months. And opened up a stack of new ones.

Basically, I managed to hack my way around the twotone file structure and get my bridge samples into their system, playing as instruments in the data sonification tool.

Trumpets now play the Rama VIII Bridge in Bangkok, and the glockenspiel plays the Golden Gate. The problem is that all of these bridge sounds are already so complex that once you start mapping them to different notes in response to shifts in the data, it’s pure sonic chaos! If I had a system that played a single sample and shifted its pitch as the data changes, it would be far more seamless. I am enjoying the ad hoc nature of this process though, and the way it is forcing me to consider, at a much deeper level, the relationship between the data and the sounds.
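If I were building that pitch-shifting approach myself, outside twotone, a minimal sketch might look like this (pure numpy, with a synthesised sine standing in for a real bridge recording; the mapping choices are my own assumptions, not anything twotone does):

```python
import numpy as np

def pitch_shift(sample, ratio):
    """Naive resampling pitch shift: ratio > 1 raises the pitch (and shortens the sample)."""
    idx = np.arange(0, len(sample), ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

def sonify(sample, data):
    """Map each data value to a pitch ratio and concatenate the shifted copies."""
    lo, hi = min(data), max(data)
    out = []
    for v in data:
        # normalise the value to a ratio between 0.5 (octave down) and 2.0 (octave up)
        ratio = 0.5 + 1.5 * (v - lo) / (hi - lo) if hi > lo else 1.0
        out.append(pitch_shift(sample, ratio))
    return np.concatenate(out)

sr = 22050
t = np.linspace(0, 0.25, int(sr * 0.25), endpoint=False)
bridge_sample = np.sin(2 * np.pi * 220 * t)   # stand-in for a bridge recording
audio = sonify(bridge_sample, [0.1, 0.5, 0.9, 0.3])
```

A real version would want a length-preserving shifter (a phase vocoder, say) so the sample duration doesn’t change with pitch, but the mapping idea is the same.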

As I imagined, the one-to-one parameter mapping of sound sample to dataset is not actually that interesting. In terms of compositional complexity, it gets repetitive very quickly, and extremely dense sonically if I haven’t chosen the initial samples well.

Something one note, simple, not too much going on, without multiple beats or tones.

Eventually I will upload some of these composition samples, but for now am still navigating how much of this process to share and what to keep private for the eventual ‘outcome’. Although as we discussed in the Publishing as Practice workshop today, having ways to show your artistic process can be both liberating and engaging.

Liberating, because it frees you from the grip of perfectionism + as my dear friend Ernest always says: finished is better than perfect! Engaging because while it may pierce the bubble of mystery around your work, it can also make you more approachable. Since this is a project that relies heavily on collaboration, for me it makes sense to make the process as transparent as possible. This allows potential creative partners to dive into the various threads of creative process, and gives a quick overview for anyone interested in working together. It’s also a little alarming, as nothing is ‘finished’ and I don’t feel nearly ready to make it public. Yet here I am, writing for you – whoever you are, dear reader – to lay my artistic soul bare.

There was something else. Ah yes, the constraints of the twotone platform mean that I have to take a very ‘zen’ approach to the work. Like the Tibetan monks I saw in New York City back in 1989, drawing sand mandalas: intricate and beautiful, painstaking work that they released into the river once it was finished. You can’t stay attached to the outcome if you keep working through the process, over and over again.

Also that there is no ONE definitive work that will come from this. Many variations will emerge. And I am starting to make peace with that as part of the creative process.

I think perhaps I had envisaged – or ensounded? – a massive, global, all-the-bridges-playing-together event. But honestly, that is only possible as a conceptual frame. If you take even the 29 sensors on ONE bridge and try to make a piece out of them, the resulting sonic chaos is going to be almost unbearable to listen to. So I need to find ways to pin it back into a context or reason for listening, and connecting. That is, the bridges have to relate to each other in some way, and to my own practice and experience. Otherwise it becomes totally random. I am starting to find more interesting questions through this process. And dealing with technical issues that I hadn’t even considered – like the sheer volume of data generated by a bridge sensor. And the compatibility, or otherwise, of the various types of data with each other and with the systems I need to use for creating sound compositions.

As an example, I have figured out that the selected storm data from the Hardanger Bridge structural monitoring sensors is only available in mat format, while the csv files I need are massive and broken down by hour throughout the day. So I needed to find out exactly what time this storm hit. Storm Nina seems like a good place to start: around 2–8pm on Saturday, 10th January 2015. I have attempted to open those csv files, but their compression is not playing nice with my computer. It takes another level of engagement now to connect with the engineers and find out if they are interested in the sonification process, and whether it is possible to switch formats.

I am charmed to discover that the accelerometers used are made by Canterbury Seismic Instruments, in Christchurch, New Zealand, where my grandmother was born. Which makes complete sense, given the magnitude and frequency of earthquakes NZ needs to monitor. Cusp-3 Series Strong Motion Accelerographs.

Technical Specifications PDF – curious, is it possible to convert to audio signal?

I have done this with the B&K accelerometers on the Green Bridge permanent installation in Brisbane, and it only took a simple adapter…


That brings us up to date, and my decision now to try selecting more subtle bridge samples as a starting point, and find out how they sound using the two datasets I am already working with. Then I need to get my head around the generative composition tools and work on mapping out the structure of the piece for the Church of Our Lady.

Thanks to the generous structural monitoring engineers at NTNU, I have access to an incredible range of accelerometer data from the Hardanger Bridge. It only took one more specific search term, and is published under a creative commons (cc-by) license.

Now the fun really starts – downloading the csv files: LowWind, HighFreq; MaxLift, LowFreq; MaxPitch, HighFreq (which I misread as MaxPatch and thought OMG, they have sonified it already! Although perhaps they have – I still need to write and make contact); MaxDrag, LowFreq… The monitoring sensors have been in place since 2013, so there are seven years of data. And the storms – Storm Nina, Storm Ole, Storm Roar, Storm Thor!

Image Credit: NTNU Department of Structural Engineering, Trondheim

Wind and Acceleration Data from the Hardanger Bridge

By Aksel Fenerci, Knut Andreas Kvåle, Øyvind Wiig Petersen, Anders Rønnquist, Ole Øiseth. Published 18-08-2020 at Norges teknisk-naturvitenskapelige universitet.

The dataset consists of long-term wind and acceleration data collected from the Hardanger Bridge monitoring system. The data are collected through wind sensors (anemometers) and accelerometers that are installed on the bridge. The dataset includes both the raw data (in “.csv” format) and the organized data (with “.mat” extension based on hdf5 format). Downloadable zipped folders contain monthly data with different frequency resolutions, special events (storms, etc.) and the raw data. Details on the organization of the data can be found in the readme file and the data paper, both of which can be found in the dataset.

Resource type: Dataset

Category: Teknologi, Bygningsfag, Konstruksjonsteknologi (Technology, Building Trades, Structural Engineering)

Process or method: GPS, Wi-Fi, accelerometers, anemometry, signal processing

Geographical coverage: Hardanger, Norway

Fenerci, A., Kvåle, K. A., Petersen, Ø. W., Rönnquist, A., & Øiseth, O. A. (2020). Wind and Acceleration Data from the Hardanger Bridge.

Datasets and Weather

Ok, I’m breaking this down now – the CSV files are by year and month, e.g. Raw 2015 1.

Storms happen in January, Storm Nina: 10th Jan 2015, Storm Thor: 29th Jan 2016.

So to focus on the storms, go for the first month. I can’t use their smaller, already selected and edited mat files in the data sonification tool. Maybe it’s possible to convert mat to csv? (Oh, that question opens up a whole new can of worms!)
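For the record, if the mat files turn out to be the classic (pre-v7.3) MATLAB format, a conversion sketch in Python could be as simple as this. The variable names inside the NTNU files are unknown to me, so the code just dumps every numeric array it finds; v7.3 files are HDF5-based and would need h5py instead of scipy:

```python
from pathlib import Path
import numpy as np
from scipy.io import loadmat  # classic .mat only; HDF5-based v7.3 files need h5py

def mat_to_csv(mat_path, out_dir):
    """Dump every numeric array in a .mat file to its own CSV file."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    data = loadmat(mat_path)
    written = []
    for name, value in data.items():
        if name.startswith("__"):      # skip MATLAB header entries
            continue
        try:
            arr = np.atleast_2d(np.asarray(value, dtype=float))
        except (TypeError, ValueError):
            continue                   # skip strings/structs; numeric arrays only
        out_path = out_dir / f"{name}.csv"
        np.savetxt(out_path, arr, delimiter=",")
        written.append(out_path)
    return written
```

Whether the result is manageable is another question – the raw arrays are exactly what makes those csv files so massive in the first place.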

And I have just discovered that my idea of replacing the audio files in the twotone sampler with my own bridge sounds works… except that I have to go through meticulously and make each NOTE a bridge sound, as they move up and down the scale while playing the data. I think that’s enough for today. Back to the sensors.

For now I’m taking the full month raw csv files and parsing them by date. You gotta start somewhere – Storm Nina go!
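A sketch of that date-parsing step, assuming the raw files have one timestamped row per line. The column name and timestamp format here are guesses; the real NTNU layout is documented in their readme:

```python
import csv
from datetime import datetime

def rows_in_window(csv_path, start, end, time_col="timestamp",
                   fmt="%Y-%m-%d %H:%M:%S"):
    """Stream a month-long raw CSV and keep only rows inside [start, end)."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            t = datetime.strptime(row[time_col], fmt)
            if start <= t < end:
                yield row

# Storm Nina hit Hardanger roughly 2-8pm on Saturday 10 January 2015
nina_start = datetime(2015, 1, 10, 14, 0)
nina_end = datetime(2015, 1, 10, 20, 0)
```

Streaming row by row like this also sidesteps loading a whole month of sensor data into memory at once.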

Poetic Storm Nina Video Homage

Storm Norway – Sandvikjo stormen Nina 10.01.2015 – Halsnøy by monica

Ivar Aasen

Mellom bakkar og berg ut med havet
heve nordmannen fenge sin heim,
der han sjølv heve tuftene grave
og sett sjølv sine hus oppå dei.

Han såg ut på dei steinute strender;
der var ingen som der hadde bygd.
«Lat oss rydja og byggja oss grender,
og så eiga me rudningen trygt»

Han såg ut på det bårute havet,
der var ruskut å leggja utpå,
men der leikade fisk nedi kavet,
og den leiken, den ville han sjå.

Fram på vinteren stundom han tenkte:
«Gjev eg var i eit varmare land!»
Men når vårsol i bakkane blenkte,
fekk han hug til si heimlege strand.

Og når liane grønkar som hagar,
når det lavar av blomar på strå,
og når netter er ljose som dagar,
kan han ingen stad venare sjå.

Sud om havet han stundom laut skrida:
Der var rikdom på benkjer og bord,
men ikring såg han trelldomen kvida
og så vende han atter mot nord.

Lat no andre om storleiken kivast,
lat deim bragla med rikdom og høgd,
mellom kaksar eg inkje kan trivast,
mellom jamningar helst er eg nøgd.

Lyd Mellom bakkar og berg ut med havet


Between hills and mountains out by the sea
the Norwegian has found his home,
where he himself has dug the foundations
and set his own houses upon them.

He looked out on the stony shores;
no one had built there before.
“Let us clear the land and build our villages,
and then own the cleared ground in safety.”

He looked out on the billowing sea;
it was rough to set out upon,
but fish played down in the deep,
and that play he wanted to see.

In the depths of winter he sometimes thought:
“If only I were in a warmer land!”
But when the spring sun shone on the slopes,
he longed for his home shore again.

And when the hillsides green like gardens,
when the flowers blossom on their stalks,
and when nights are as light as days,
he can see no place more beautiful.

South over the sea he sometimes had to travel:
there was wealth on benches and tables,
but all around he saw the misery of serfdom,
and so he turned north again.

Let others now quarrel over greatness,
let them flaunt their wealth and rank;
among the mighty I cannot thrive,
among equals I am most content.

Sound: Between hills and mountains out to sea

Wind-induced response of long-span suspension bridges subjected to span-wise non-uniform winds: a case study

Master thesis – NTNU Norwegian University of Science and Technology. Department of Structural Engineering. [E.M. Forbord & H. Hjellvik]

The response has also been predicted using wind data from the Hardanger Bridge, and the predictions have been compared to the measured response. Uniform profiles of wind speed and turbulence have been given different values based on the measured data, more specifically the mean value of all sensors and the value from the midmost wind sensor. It is seen that the choice of value does not affect the accuracy of response predictions. No matter what values are chosen, the predictions are quite inaccurate in general. Introducing a non-uniform profile of mean wind speed makes the predictions slightly better in some cases, but not notably, and the accuracy is still relatively low. When also including the non-uniformity of turbulence in the response calculations, the predicted response is reduced and the accuracy worsened with respect to the measured response. Accounting for the non-uniformity of self-excited forces shows almost no effect on the predictions. It is concluded that non-uniform wind profiles do not improve the accuracy of predicted bridge response, and that other uncertainties in the calculation methods have a larger impact on the predictions than whether the non-uniform profiles are included or not.

2.1 Random Vibration Theory

6.2 Influence of Non-Uniform Turbulence Standard Deviations

In this section, the influence of span-wise non-uniform turbulence standard deviations on the dynamic response will be presented. Three wind speed profiles have been analysed with different turbulence std profiles. The wind speed profiles used are the linear profile and the sinus profile shown in Figure 5.5a and 5.5d, in addition to a uniform wind speed profile. The three different turbulence std profiles shown in Figure 6.15 are studied. They all have the same integrated sum along the span to make them comparable. The two non-uniform turbulence std profiles chosen have the opposite shapes of the wind speed profiles used in this section, because this is often seen in the measurement data from the Hardanger Bridge. Both of these turbulence std profiles will be compared to uniform turbulence standard deviations, for all three wind speed profiles. The horizontal turbulence std has a span-wise mean value of 20% of the wind profile’s mean wind speed, and for the vertical component the corresponding value is 10%. The effect of turbulence std on the response is included in the calculations through the wind spectra, which have a quadratic dependency on the turbulence std, as shown in Eq. (2.40). The span-wise variation of wind speed is also included in the formula. Therefore, to study the effect of the turbulence std profiles in isolation, the response using a uniform wind speed profile and different turbulence std profiles has been calculated. In addition come the linear and sinus wind profiles, to study whether the same turbulence std profiles have a different effect on these than on the uniform wind speed profile. The calculated response will only be presented for wind profiles with a mean wind speed of 10 m/s, because the trends, the shape and the differences of the response along the span are nearly the same for all mean wind speeds for the different wind speed profiles.

6.3 Influence of Non-Uniform Self Excited Forces

To study the influence of span-wise non-uniform self-excited forces on the dynamic response, several wind speed profiles have been numerically tested with both uniform and non-uniform self-excited forces. The non-uniform self-excited forces are caused by the non-uniform wind profile. The response is predicted with uniform self-excited forces where the aerodynamic properties are dependent on the mean wind speed of the wind profile, and with non-uniform self-excited forces where the aerodynamic properties vary along the span with the wind speed. Then the bridge response in both cases is compared. The wind profiles tested are presented in Figure 5.5. As in section 6.1, the standard deviations of the turbulence components are span-wise uniform, such that the influence of the non-uniform self-excited forces is investigated separately. The horizontal and vertical turbulence standard deviations have been set to 20 and 10%, respectively, of the horizontal mean wind speed.

The influence of the non-uniform turbulence standard deviation is connected to the shape of the wind speeds along the span. As discussed previously, the response shifts to where the wind speed is largest. The same can be said about the turbulence std. It was seen that the wind is dominant and shifts the response more than the turbulence std, for these particular shapes and this ratio between the mean wind speed and the standard deviation of the turbulence components. The horizontal shift in the response due to the non-uniform turbulence std comes from the cross-spectral densities of the turbulence components, which are high when two points with large turbulence std are considered.

The effect of including the non-uniform self-excited forces on the response increases with the mean wind speed of the wind profile. The difference between the response using non-uniform and uniform self-excited forces is largest for the highest mean wind speeds studied. The lateral response using non-uniform self-excited forces deviates less from the response using uniform self-excited forces than the vertical and torsional response. This is due to the aerodynamic derivatives, which have been taken as zero. The reason for the large ratios in the vertical and torsional directions is the aerodynamic derivatives that reduce the total damping and stiffness of the structure, as mentioned. For lower mean wind speeds, 10–20 m/s, the difference is below 10% for all response components.

Masters Thesis NTNU 2017 permanent link

I think it’s safe to say they haven’t sonified it… yet!

Here are a few more links still open from my research on the Hardanger Bridge

Official Website

General Norway Bridges info

The Neglected Bridges of Norway


Literally every tab I have open on my screen in the quest to figure out how to write generative music… Part of ongoing research.



This presentation is about making music
by designing systems that make music.

For his 1965 composition “It’s Gonna Rain”, Steve Reich designed a generative mechanical system.

It was a system of two tape recorders.

Reich had made field recordings of speech on tape.

He made two short identical loops from his recordings. One for each tape recorder.

The tape recorders were set to slightly different speeds.
One would play a little faster than the other.

This started a phasing process. A continuously changing soundscape was generated from the short loops.
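The mechanism is simple enough to simulate digitally; a toy numpy sketch of the two-tape-recorder setup, with a sine loop standing in for Reich’s speech recording:

```python
import numpy as np

def phase_loops(loop, speed_a=1.0, speed_b=1.02, seconds=10, sr=8000):
    """Play the same loop on two 'tape recorders' at slightly different
    speeds and mix them, so the copies drift steadily out of phase."""
    n = seconds * sr
    def play(speed):
        # read through the loop at the given speed, wrapping around (a tape loop)
        idx = (np.arange(n) * speed) % len(loop)
        return loop[idx.astype(int)]
    return 0.5 * (play(speed_a) + play(speed_b))

sr = 8000
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
loop = np.sin(2 * np.pi * 330 * t)   # stand-in for Reich's speech loop
mix = phase_loops(loop)
```

With a 2% speed difference the two copies start in unison and are a full loop apart after fifty repetitions, which is exactly the slow drift the piece is built on.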

Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.

Steve Reich, Music as a Gradual Process, 1968.

On And On And On: A Guide to Generative Electronic Music

By Samuel Tornow · July 19, 2019


Some very useful tutorials (including an Ableton Live project with all the settings – I just need to upload bridge samples) and one slightly concerning automatic music generator, which seems to be based on avoiding paying copyright to artists. Not down with that, but curious about what it does. Also the station of endless compositions, quite soothing. Actually, we’re going to start with some random plugins and synths.


Nodal is generative software for composing music, interactive real-time improvisation, and a musical tool for experimentation and play. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you’ve used before. You can play sounds using Nodal’s built-in synthesiser or any MIDI compatible hardware or software instrument.

Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.
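Not Nodal itself, of course, but the node/edge idea is easy to illustrate; a toy Python sketch of a virtual player walking a three-node network (the notes and edge lengths are arbitrary):

```python
import random

# a tiny Nodal-style network: each node holds a MIDI note,
# each edge a destination and a travel time in beats
network = {
    "A": {"note": 60, "edges": [("B", 1.0), ("C", 0.5)]},
    "B": {"note": 64, "edges": [("A", 1.0)]},
    "C": {"note": 67, "edges": [("A", 0.5), ("B", 2.0)]},
}

def traverse(network, start, steps, seed=None):
    """Walk the network, emitting (time, MIDI note) events; the gap to the
    next event is the length of the chosen edge."""
    rng = random.Random(seed)
    events, t, node = [], 0.0, start
    for _ in range(steps):
        events.append((t, network[node]["note"]))
        nxt, length = rng.choice(network[node]["edges"])
        t += length
        node = nxt
    return events

events = traverse(network, "A", 8, seed=1)
```

A few nodes and edges already yield changing sequences, which is the appeal Nodal’s description is pointing at.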

Nodal is compatible with DAW software such as Ableton Live, Logic Studio, Digital Performer and Garage Band. It can transmit and receive MIDI sync. You can edit a network using the mouse and keyboard, and optionally a MIDI keyboard. Nodal recognises and sends notes, sync, continuous controller and pitch bend information.

Nodal is developed at SensiLab, Monash University.

Melda Production VST Convolution

Modular Synth Sequencer

As used in this tutorial

Generative Modular Music

How to Create Amazing Generative Music using Ableton Live and Max Devices – Tutorial

Fantastic devices and how to use them… to create surprising generative jams with hardware or soft synths, inside Ableton Live. I can just listen to this for hours! And it can be your best solution if you need to create a lot of musical content in a short span of time. The inspiration for this jam/tutorial came when I started using the Modular Sequencer by Sound Manufacture. I find it a smart and useful device, as it easily allows you to do things that would otherwise require tons of other devices, and a lot of time to make them work in Ableton Live. Plus it has its own unique features, a lot of them! And it can also communicate with other Sound Manufacture devices, to considerably expand the range of options. Here you can find the devices shown in this video. Modular Sequencer:…​ Chord-o-mat:…

Here is the Template Project download link, you’ll find the project in the zip file contained in the Album download. Just set price as ZERO and download for free:

Generative Music Techniques With Ableton Live


Another tutorial

Generating Random Music In Ableton Live



Making Generative Music in the Browser

My personal process

Alex Bainter

Mar 26, 2019

Creative Commons License

For anyone and everyone

Music from this service is officially licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Generally, you are welcome to use or modify the music for any purpose, including commercially or publicly, provided you give appropriate credit.

It’s as easy as adding something like this to your work:

Or even just:

See “How to give attribution” from Creative Commons. Please pretend to read the official licensing terms before using music from this service.

You can show your appreciation by making a donation.

Other Licensing Arrangements

For special cases and people with fancy pants

If you prefer not to use the Creative Commons licensing, send an email with your offer to





Okay, let’s get this out of the way – no one should describe this as an “AI DJ.” There is no autonomous machine intelligence acting as a DJ. On the contrary, the mushy digital mash-up textures on offer here are unique, fresh, and sound distinctively like something that came from Moisés. Part analysis, part generative artwork, part creative remix, OÍR is simultaneously the intentional work of an artist and a machine reflection of a wide variety of streamed DJ sets.

Technically speaking, says Moisés, “the system is a compendium of OpenAI’s Jukebox, trained from scratch, StyleGAN2 for visuals.” “The mixing and DJ ‘transitions’ are done with a MIR [Music Information Retrieval] ‘automatic mixing’ Python script,” he says.

But it’s worthwhile also understanding his artistic intention:

OÍR stems from my ongoing research on AI, sound synthesis, and electronic music.

Since starting my adventure into Deep Learning systems for music a couple of years ago, I’ve asked myself whether Deep Learning (AI) is a tool or a medium. Right now I’ve come to the conclusion that it can be both, and this is exactly what I’m trying to explore with this project.

When we talk about Deep Learning as medium, there are three particular processes engaged when working with generative systems: curation of the data, training and monitoring of the algorithm as it ‘learns,’ and generating new synthetic media. Rinse and repeat.

There are a couple of aspects that interest me in this process. Each time you train the AI algorithm, its weights and biases – what it has ‘learned’ – change over time, depending on the data you are having it learn from. The algorithm generates patterns present in these vast amounts of images and music, as is the case with OÍR, and these can change as the ‘learning’ process continues.

So this quality of a constantly changing and morphing generative algorithm is exactly what I want to explore with OÍR, and what better way to do it than through electronic dance music and techno culture.

I chose a channel as the canvas for the first episode, or EPOCH, of OÍR with a selection from the archive from HÖR Berlin, because I feel this channel has done the amazing job of generating a collective culture, specifically within techno and electronic music. I wanted to explore which patterns are emerging from this culture – which patterns can be synthesized, both visual and sonic, from all these sets and different approximations of techno, over 1,400+ hours and counting.

My desire with this art project is not to automate or replace DJs or electronic musicians in any way, but rather to have OÍR be a sort of ‘live generative archive’ – as I did before with my album 𝕺𝖐𝖆𝖈𝖍𝖎𝖍𝖚𝖆𝖑𝖎 in relation to the Mexican 3ball electronic music genre – of certain cultural moments in electronic music which increasingly exist on big tech platforms and the internet. By the way, OÍR means “to listen” in Spanish.


These are some of the resources I will be exploring in the course of my research:


AI Music, Algorithmic Composition, Data Sonification, Machine Creativity

  • Tutorials, guides and how-to courses
  • Examples of Algorithmic Composition
  • Projects using Machine learning + creativity / AI Music



Make music and art with machine intelligence

MIMIC is a web platform for the artistic exploration of musical machine learning and machine listening. We have designed this collaborative platform as an interactive online coding environment, engineered to bring new technologies in AI and signal processing to artists, composers, musicians and performers all over the world.

The MIMIC platform has a built-in audio engine, plus machine learning and machine listening tools that make it easy for creative coders to get started using these techniques in their own artistic projects. The platform also includes various examples of how to integrate external machine learning systems for sound, music and art making. These examples can be forked and further developed by the users of the platform.

puredata tutorials

Pd Tutorials and HOWTOs?

Here is a collection of how-tos and tutorials in many different languages covering a number of different topics. The following topics have been suggested for merging into the list below: basic audio, audio synths, audio filters, video effects, video synths, 3D graphics, interfacing with the physical world (HID, Arduino, etc.), network programming.

Programming electronic music in pd

by Johannes Kreidler

Algorithmic Composition RealTime Environment

PD library collection

github repository pd-acre


SodaLib: a data sonification framework for creative coding environments

by Agoston Nagy


In the ever-growing area of data-driven interfaces (embedded systems, social activities), it becomes more important to have effective methods to analyse complex data sets: observing them from different perspectives, understanding their features and dimensions, and accessing, interpreting and mapping them in meaningful ways. With SodaLib, it is easy to map live data (sensors, web APIs, etc.), large pre-recorded datasets (tables, logs, Excel files), or even unusual sources (images, 3D environments) to recognizable audible patterns through a set of sonification methods, including parameter mapping and event-based sonification.

SodaLib is an open source, cross platform, multipurpose sonification tool for designers, programmers and creative practitioners.


Algorithmic composer: Sonification

Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring (ECG). When creating sonification-based algorithmic compositions, we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.
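A minimal example of that direct parameter mapping, with made-up wind-speed values rescaled onto a one-octave major scale:

```python
def map_to_scale(data, scale=(60, 62, 64, 65, 67, 69, 71, 72)):
    """Parameter-mapping sonification: rescale each data point onto the
    index range of a musical scale and return the matching MIDI notes."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0          # guard against a constant data series
    notes = []
    for v in data:
        i = int((v - lo) / span * (len(scale) - 1))
        notes.append(scale[i])
    return notes

wind_speeds = [3.2, 7.8, 12.5, 20.1, 14.4, 6.0]   # made-up values, not real sensor data
notes = map_to_scale(wind_speeds)
```

Constraining the output to a scale rather than raw pitch is one simple way to keep the mapping musical instead of chaotic.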

Algorithmic Composition // CATARINA

by Mike Moreno

Youtube Video:
Get the recording here:

A 100% synthesized algorithmic composition made using Pure Data Vanilla 0.49-1. Made for the upcoming Muff Wiggler Discord Collective Album.

Instruments and sounds used are part of my library: pd-mkmr


Algorithmic Composition // JOI

by Mike Moreno

puredata patch joi

Youtube Video:
Get the recording here:

An algorithmic composition made using Pure Data Vanilla 0.47-1

The Visuals were made in GEM using primarily the [scopeXYZ~] and [pix_sig2pix~] objects.

I also relied on heavylib (a library of vanilla abstractions):


puredata patch from the algorithmic composer