I’ve started digging through the tutorials saved in earlier posts and found the first two not that useful. The Generative Jam Template Belibat uses relies heavily on existing modular synths, and while it gives some helpful process tips about the kinds of modulations one might choose to make to a note, it’s not that relevant to playing bridge samples and modifying them. Then I checked out the generative.fm tutorial on web audio, which is far more helpful and leads me to the next phase, although I diverged fairly quickly from his process, because again it’s about modulating the pitch and duration of existing notes.
What I’m trying to do is bend and stretch an existing audio sample – shifting its pitch and duration – where that sample can already be a fairly complex sonic landscape, not just a single note.
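Roughly the kind of manipulation I mean, sketched with Tone.js’s GrainPlayer, which granulates a sample so that speed and pitch can be bent independently. A loose sketch only – the file name and button id are placeholders, not part of any tutorial:

```js
// Sketch: independent pitch- and time-bending of a whole field recording.
// Assumes Tone.js; "bridge-sample.wav" and "#play" are placeholders.
import * as Tone from "tone";

const player = new Tone.GrainPlayer({
  url: "bridge-sample.wav", // any complex recording, not just a single note
  loop: true,
  grainSize: 0.2,           // seconds per grain
  overlap: 0.05,            // crossfade between grains
  playbackRate: 0.5,        // stretch: play at half speed...
  detune: -300,             // ...while independently dropping the pitch 3 semitones
}).toDestination();

async function start() {
  await Tone.start();   // browsers need a user gesture before audio
  await Tone.loaded();  // wait for the sample to download
  player.start();
  player.detune = 400;  // pitch can keep moving while the sample plays
}

document.querySelector("#play").addEventListener("click", start);
```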
This systems music guide is by far the most exciting how-to I’ve found yet + horrifying though the prospect is, it looks like I am going to have to go back to that MIMIC FutureLearn course and the other one I signed up for but didn’t get all the way through, and knuckle down to learn JavaScript, so I can make sense of these things. Damn!
Learning Web Audio by Recreating The Works of Steve Reich and Brian Eno
In this guide we’ll explore some of the history of systems music and the possibilities of making musical systems with Web Audio and JavaScript. We’ll pay homage to three seminal systems pieces by examining and attempting to recreate them: “It’s Gonna Rain” by Steve Reich, “Discreet Music” by Brian Eno, and “Ambient 1: Music for Airports”, also by Brian Eno.
After two days in the studio I worked through so many of the conceptual questions that have been bugging me for months. And opened up a stack of new ones.
Basically, I managed to hack my way around the TwoTone file structure and get my bridge samples into their system, playing as instruments in the data sonification tool.
Brooklyn Bridge plays trumpet Data Sonification TwoTone Example 1
Sonification process: using Audacity to resample original bridge recordings
Trumpets now play the Rama VIII Bridge in Bangkok, and the glockenspiel plays the Golden Gate. Problem is, all of these bridge sounds are already so complex that once you start mapping them to different notes in response to the shifts in data, it’s pure sonic chaos! If I had a system that played a sample and shifted the pitch as the data changes, that would be way more seamless. I am enjoying the ad hoc nature of this process though, and the way it is forcing me to consider, at a much deeper level, the relationship between the data and the sounds.
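Something like this is what I have in mind – one looping bridge sample whose pitch bends as the data changes, rather than a new note per data point. A rough Tone.js sketch (the readings and file name are stand-ins, and this is not how TwoTone itself works):

```js
// Sketch: one looping bridge sample whose pitch follows a data series.
// Assumes Tone.js; the readings array and file name are stand-ins.
import * as Tone from "tone";

const readings = [0.02, 0.11, 0.35, 0.18, 0.07]; // pretend accelerometer values

const player = new Tone.GrainPlayer({
  url: "bridge-sample.wav",
  loop: true,
}).toDestination();

// Map a reading onto a pitch offset in cents (here +/- one octave).
function toCents(value, min, max) {
  const norm = (value - min) / (max - min); // 0..1
  return (norm * 2 - 1) * 1200;             // -1200..+1200 cents
}

async function start() {
  await Tone.start();
  await Tone.loaded();
  const min = Math.min(...readings);
  const max = Math.max(...readings);
  player.start();
  // Step through the data every half second, bending the sample's pitch.
  readings.forEach((value, i) => {
    Tone.Transport.scheduleOnce(() => {
      player.detune = toCents(value, min, max);
    }, i * 0.5);
  });
  Tone.Transport.start();
}
```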
Golden Gate Bridge Accelerometer Data Sonification TwoTone Bridge Mix Example 2
TwoTone web audio sonification tool using bridge sounds to play trumpet
As imagined, the one-to-one parameter mapping of sound sample to dataset is not actually that interesting in terms of compositional complexity – it gets repetitive very quickly, and extremely dense sonically if I haven’t chosen the initial samples well.
Golden Gate Bridge samples preparation for sonification
Something single-note and simple, with not too much going on – no multiple beats or tones.
Trumpet and glockenspiel now play bridges in TwoTone
I have uploaded composition samples; in the process I am still navigating how much of this creative experimentation to share and what to keep private for the eventual ‘outcome’. Although, as we discussed in the Publishing as Practice workshop today, having ways to show your artistic process can be both liberating and engaging.
Liberating, because it frees you from the grip of perfectionism + as my dear friend Ernest always says: finished is better than perfect! Engaging because while it may pierce the bubble of mystery around your work, it can also make you more approachable. Since this is a project that relies heavily on collaboration, for me it makes sense to make the process as transparent as possible. This allows potential creative partners to dive into the various threads of creative process, and gives a quick overview for anyone interested in working together. It’s also a little alarming, as nothing is ‘finished’ and I don’t feel nearly ready to make it public. Yet here I am, writing for you – whoever you are, dear reader – to lay my artistic soul bare.
There was something else. Ah yes, the constraints of the TwoTone platform mean that I have to take a very ‘zen’ approach to the work. Like the Tibetan Monks I saw in New York City back in 1989, drawing sand mandalas. So intricate and beautiful, painstaking work that they released into the river once it was finished. You can’t stay attached to the outcome if you keep working through the process, over and over again.
Also that there is no ONE definitive work that will come from this. Many variations will emerge. And I am starting to make peace with that as part of the creative process.
I think perhaps I had envisaged – or ensounded? – a massive, global, all-the-bridges-playing-together event. But honestly, that is only possible as a conceptual frame. If you take even the 29 sensors on the ONE bridge and try to make a piece out of them, the resulting sonic chaos is going to be almost unbearable to listen to. So I need to find ways to pin it back into a context or reason for listening, and connecting. That is, the bridges have to relate to each other in some way, and to my own practice and experience. Otherwise it becomes totally random. I am starting to find more interesting questions through this process. And dealing with technical issues that I hadn’t even considered – like the sheer volume of data generated by a bridge sensor, and the compatibility, or otherwise, of the various types of data with each other and with the systems I need to use for creating sound compositions.
As an example, I have figured out that the selected storm data from the Hardanger Bridge structural monitoring sensors is only available in .mat format, while the raw .csv files I need are massive and broken down by hour throughout the day. So I needed to find out exactly what time this storm hit. Storm Nina seems like a good place to start: around 2–8pm on Saturday 10th January 2015. I have attempted to open those csv files, but their compression is not playing nice with my computer. It takes another level of engagement now to connect with the engineers and find out if they are interested in the sonification process, and how possible it is to switch formats.
I am charmed to discover that the accelerometers used are made by Canterbury Seismic Instruments, in Christchurch New Zealand, where my mother and grandmother were born. Which makes complete sense, given the magnitude and frequency of earthquakes NZ needs to monitor. Cusp-3 Series Strong Motion Accelerographs.
That brings us up to date, and my decision now to try selecting more subtle bridge samples as a starting point, and find out how they sound using the two datasets I am already working with. Then I need to get my head around the generative composition tools and work on mapping out the structure of the piece for the Church of Our Lady.
Thanks to the generous structural monitoring engineers at NTNU, I have access to an incredible range of accelerometer data from the Hardanger Bridge. It only took one more specific search term, and the data is published under a Creative Commons (CC BY) license.
Now the fun really starts – downloading the csv files: LowWind, HighFreq; MaxLift, LowFreq; MaxPitch, HighFreq (which I misread as MaxPatch and thought: OMG, they have sonified it already! Although perhaps they have – I still need to write and make contact); MaxDrag, LowFreq… The monitoring sensors have been in place since 2013, so there are seven years of data. And the storms – Storm Nina, Storm Ole, Storm Roar, Storm Thor!
Image Credit: NTNU Department of Structural Engineering, Trondheim
Wind and Acceleration Data from the Hardanger Bridge
The dataset consists of long-term wind and acceleration data collected from the Hardanger Bridge monitoring system. The data are collected through wind sensors (anemometers) and accelerometers that are installed on the bridge. The dataset includes both the raw data (in “.csv” format) and the organized data (with “.mat” extension based on hdf5 format). Downloadable zipped folders contain monthly data with different frequency resolutions, special events (storms, etc.) and the raw data. Details on the organization of the data can be found in the readme file and the data paper, both of which can be found in the dataset.
Fenerci, A., Kvåle, K. A., Petersen, Ø. W., Rönnquist, A., & Øiseth, O. A. (2020). Wind and Acceleration Data from the Hardanger Bridge. https://doi.org/10.21400/5NG8980S
Datasets and Weather
Ok, I’m breaking this down now – the CSV files are by year and month, e.g. Raw 2015 1.
Storms happen in January, Storm Nina: 10th Jan 2015, Storm Thor: 29th Jan 2016.
So to focus on the storms, go for the first month. I can’t use their smaller, already selected and edited .mat files in the data sonification tool. Maybe it’s possible to convert .mat to .csv? (Oh, that question that opens up a whole new can of worms!)
And I have just discovered that my idea works to replace the audio files in the TwoTone sampler with my own bridge sounds… except that I have to go through meticulously and make each NOTE as a bridge sound, as they move up and down the scale while playing the data. I think that’s enough for today. Back to the sensors.
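Outside TwoTone, the same idea can be sketched with Tone.js’s Sampler, which repitches a single sample to whatever note you ask for – so one bridge recording can cover a whole scale without exporting a separate file per note. A sketch only; the file name and the C4 anchor are my assumptions:

```js
// Sketch: one bridge recording standing in for every note of a scale.
// Tone.Sampler repitches from the nearest sample it has been given.
// "bridge-sample.wav" and the C4 anchor are assumptions.
import * as Tone from "tone";

const bridge = new Tone.Sampler({
  urls: { C4: "bridge-sample.wav" }, // treat the raw recording as "C4"
}).toDestination();

const scale = ["C4", "D4", "E4", "G4", "A4", "C5"]; // a pentatonic-ish ladder

async function play() {
  await Tone.start();
  await Tone.loaded();
  scale.forEach((note, i) => {
    // Each step is the same bridge sound, shifted up the scale.
    bridge.triggerAttackRelease(note, 1, Tone.now() + i * 0.75);
  });
}
```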
For now I’m taking the full month raw csv files and parsing them by date. You gotta start somewhere – Storm Nina go!
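As a first pass, something like this Node.js sketch pulls just the Storm Nina rows out of a month-sized file without loading it all into memory. The file names and the assumption that each row begins with an ISO-style timestamp are mine – the dataset’s readme is the real reference:

```js
// Sketch: stream a month-sized raw CSV and keep only rows from one day.
// Run with Node.js. File names and the timestamp-first column layout are
// assumptions; check the dataset readme for the real structure.
const fs = require("fs");
const readline = require("readline");

async function extractDay(inputPath, outputPath, datePrefix) {
  const out = fs.createWriteStream(outputPath);
  const rl = readline.createInterface({
    input: fs.createReadStream(inputPath),
    crlfDelay: Infinity,
  });

  let isHeader = true;
  for await (const line of rl) {
    if (isHeader) {                 // keep the column names
      out.write(line + "\n");
      isHeader = false;
    } else if (line.startsWith(datePrefix)) {
      out.write(line + "\n");       // keep only the storm day
    }
  }
  out.end();
}

extractDay("Raw_2015_1.csv", "storm-nina_2015-01-10.csv", "2015-01-10");
```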
Mellom bakkar og berg ut med havet heve nordmannen fenge sin heim, der han sjølv heve tuftene grave og sett sjølv sine hus oppå dei.
Han såg ut på dei steinute strender; der var ingen som der hadde bygd. «Lat oss rydja og byggja oss grender, og så eiga me rudningen trygt»
Han såg ut på det bårute havet, der var ruskut å leggja utpå, men der leikade fisk nedi kavet, og den leiken, den ville han sjå.
Fram på vinteren stundom han tenkte: «Gjev eg var i eit varmare land!» Men når vårsol i bakkane blenkte, fekk han hug til si heimlege strand.
Og når liane grønkar som hagar, når det lavar av blomar på strå, og når netter er ljose som dagar, kan han ingen stad venare sjå.
Sud om havet han stundom laut skrida: Der var rikdom på benkjer og bord, men ikring såg han trelldomen kvida og så vende han atter mot nord.
Lat no andre om storleiken kivast, lat deim bragla med rikdom og høgd, mellom kaksar eg inkje kan trivast, mellom jamningar helst er eg nøgd.
Sound: Mellom bakkar og berg ut med havet
BETWEEN HILLS AND MOUNTAINS – Ivar Aasen
Between hills and mountains out by the sea the Norwegian has found his home, where he himself has dug the foundations and set his own houses upon them.
He looked out over the stony shores; no one had built there. “Let us clear the land and build our villages, and then we will own the clearing in peace.”
He looked out over the billowing sea; it was rough to set out upon, but down in the deep the fish were playing, and that play he wanted to see.
In the depths of winter he sometimes thought: “If only I were in a warmer land!” But when the spring sun glinted on the slopes, he longed for his home shore.
And when the hillsides turn green as gardens, when the grass is heavy with flowers, and when the nights are as light as days, he can see no place more beautiful.
South over the sea he sometimes had to travel: there was wealth on benches and tables, but all around he saw bondage and suffering, and so he turned north again.
Let others quarrel over greatness, let them flaunt their wealth and grandeur; among the mighty I cannot thrive, among equals I am most content.
Sound: Between hills and mountains out to sea
Wind-induced response of long-span suspension bridges subjected to span-wise non-uniform winds: a case study
Master’s thesis – NTNU Norwegian University of Science and Technology, Department of Structural Engineering. [E.M. Forbord & H. Hjellvik]
The response has also been predicted using wind data from the Hardanger Bridge, and the predictions have been compared to the measured response. Uniform profiles of wind speed and turbulence have been given different values based on the measured data, more specifically the mean value of all sensors and the value from the midmost wind sensor. It is seen that the choice of value does not affect the accuracy of response predictions. No matter what values are chosen, the predictions are quite inaccurate in general. Introducing a non-uniform profile of mean wind speed makes the predictions slightly better in some cases, but not noteworthy, and the accuracy is still relatively low. When also including the non-uniformity of turbulence in the response calculations, the predicted response is reduced and the accuracy worsened with respect to the measured response. Accounting for the non-uniformity of self-excited forces shows almost no effect on the predictions. It is concluded that non-uniform wind profiles do not improve the accuracy of predicted bridge response, and that other uncertainties in the calculation methods have larger impact on the predictions than whether the non-uniform profiles are included or not.
2.1 Random Vibration Theory
6.2 Influence of Non-Uniform Turbulence Standard Deviations
In this section, the influence of span-wise non-uniform turbulence standard deviations on the dynamic response will be presented. Three wind speed profiles have been analysed with different turbulence std profiles. The wind speed profiles used are the linear profile and the sinus profile shown in Figure 5.5a and 5.5d, in addition to a uniform wind speed profile. The three different turbulence std profiles shown in Figure 6.15 are studied. They all have the same integrated sum along the span to make them comparable. The two non-uniform turbulence std profiles chosen have the opposite shapes of the wind speed profiles used in this section, because this is often seen in the measurement data from the Hardanger Bridge. Both of these turbulence std profiles will be compared to uniform turbulence standard deviations, for all the three wind speed profiles. The horizontal turbulence std has a span-wise mean value of 20% of the wind profile’s mean wind speed, and for the vertical component the corresponding value is 10%. The effect of turbulence std on the response is included in the calculations through the wind spectra, which have a quadratic dependency on the turbulence std, as shown in Eq. (2.40). The span-wise variation of wind speed is also included in the formula. Therefore, to study the effect of the turbulence std profiles isolated, the response using a uniform wind speed profile and different turbulence std profiles has been calculated. In addition comes the linear and sinus wind profiles, to study if the same turbulence std profiles have different effect on these than on the uniform wind speed profile. The calculated response will only be presented for wind profiles with the mean wind speed of 10 m/s, because the trends, the shape and differences of the response along the span are nearly the same for all mean wind speeds for the different wind speed profiles.
6.3 Influence of Non-Uniform Self Excited Forces
To study the influence of span-wise non-uniform self-excited forces on the dynamic response, several wind speed profiles have been numerically tested with both uniform and non-uniform self-excited forces. The non-uniform self-excited forces are caused by the non-uniform wind profile. The response is predicted with uniform self-excited forces where the aerodynamic properties are dependent on the mean wind speed of the wind profile, and with non-uniform self-excited forces where the aerodynamic properties vary along the span with the wind speed. Then the bridge response in both cases is compared. The wind profiles tested are presented in Figure 5.5. As in section 6.1, the standard deviations of turbulence components are span-wise uniform, such that the influence of the non-uniform self-excited forces are investigated separately. The horizontal and vertical turbulence standard deviations have been set to 20 and 10%, respectively, of the horizontal mean wind speed.
The influence of the non-uniform turbulence standard deviation is connected to the shape of the wind speeds along the span. As discussed previously, the response shifts to where the wind speed is largest. The same can be said about the turbulence std. It was seen that the wind is dominating and shifts the response more than the turbulence std, for these particular shapes and this ratio between the mean wind speed and the standard deviation of the turbulence components. The horizontal shift in the response due to the non-uniform turbulence std comes from the cross-spectral densities of the turbulence components, which are high when two points with large turbulence std are considered.
The effect of including the non-uniform self-excited forces on the response increases with the mean wind speed of the wind profile. The difference between the response using non-uniform and uniform self-excited forces is largest for the highest mean wind speeds studied. The lateral response using non-uniform self-excited forces deviates less from the response using uniform self-excited forces compared to the vertical and torsional response. This is due to the aerodynamic derivatives, which have been taken as zero. The reason for the large ratios in the vertical and torsional direction is the aerodynamic derivatives that reduce the total damping and stiffness of the structure, as mentioned. For lower mean wind speeds, 10–20 m/s, the difference is below 10% for all response components.
MuseNet – OpenAI: Try MuseNet. We’re excited to see how musicians and non-musicians alike will use MuseNet to create new compositions! In simple mode (shown by default), you’ll hear random uncurated samples that we’ve pre-generated. Choose a composer or style, an optional start of a famous piece, and start generating. (openai.com)
Magenta Studio – Standalone: Continue. Continue uses the predictive power of recurrent neural networks (RNN) to generate notes that are likely to follow your drum beat or melody. Give it an input file and it can extend it by up to 32 measures. This can be helpful for adding variation to a drum beat or creating new material for a melodic track. (magenta.tensorflow.org)
Making music with magenta.js – Step 1: Making sounds with your browser. Everything in @magenta/music is centered around NoteSequences. This is an abstract representation of a series of notes, each with different pitches, instruments and strike velocities, much like MIDI. For example, this is a NoteSequence that represents “Twinkle Twinkle Little Star”. Try changing the pitches to see how the sound changes! (hello-magenta.glitch.me)
The Musician in the Machine: In this article, we’ll look at how we did it. Along the way we’ll listen to some more samples we really loved. Of course, some samples came out great, while some didn’t work as well as we hoped, but overall the project worked beautifully. (magenta.tensorflow.org)
Getting Started – Magenta: Ready to play with Magenta? This page will help you get started making music and art with machine learning, and give you some resources if you want to explore on your own! (magenta.tensorflow.org)
Music Transformer: Generating Music with Long-Term Structure – Update (9/16/19): Play with Music Transformer in an interactive Colab! Generating long pieces of music is a challenging problem, as music contains structure at multiple timescales, from millisecond timings to motifs to phrases to repetition of entire sections. (magenta.tensorflow.org)
A simple OpenAI Jukebox tutorial for non-engineers – II. Overview and limitations: OpenAI uses a supercomputer to train their models and maybe to generate the songs too, and well, unless you also have a supercomputer or at least a very sweet GPU setup, your creativity will be a bit limited. When I started playing with Jukebox, I wanted to create 3-minute songs from scratch, which turned out to be more than Google Colab (even with the pro … (robertbrucecarter.com)
Show notebooks in Drive – Google Colaboratory: Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. (colab.research.google.com)
Show notebooks in Drive – Google Colaboratory: Please note: this next upsampling step will take several hours. At the free tier, Google Colab lets you run for 12 hours. As the upsampling is completed, samples will appear in the Files tab (you can access this at the left of the Colab), under “samples” (or whatever hps.name is currently). (colab.research.google.com)
MIMIC CREATIVE AI
MIMIC is a web platform for the artistic exploration of musical machine learning and machine listening. We have designed this collaborative platform as an interactive online coding environment, engineered to bring new technologies in AI and signal processing to artists, composers, musicians and performers all over the world.
The MIMIC platform has a built-in audio engine, machine learning and machine listening tools that makes it easy for creative coders to get started using these techniques in their own artistic projects. The platform also includes various examples of how to integrate external machine learning systems for sound, music and art making. These examples can be forked and further developed by the users of the platform.
Over the next three years, we aim to integrate brand new and developing creative systems into this platform so that they can be more easily used by musicians and artists in the creation of entirely new music, sound, and media, enabling people to understand and apply new computational techniques such as Machine Learning in their own creative work.
MIMIC or “Musically Intelligent Machines Interacting Creatively” is a three year AHRC-funded project, run by teams at Goldsmiths College, Durham University and the University of Sussex.
MIMIC Creative AI Platform: MIMIC is a web platform for the artistic exploration of musical machine learning, machine listening and creative AI. (mimicproject.com)
Intelligent Instruments: a funded ERC project – Sonic Writing: The European Research Council has awarded me an ERC Consolidator grant for the project Intelligent Instruments: Understanding 21st-Century AI Through Creative Music Technologies. The five-year, 2 million Euro research project will consist of a team of postdocs, doctoral researchers and an instrument designer from the fields of music, computer science and philosophy. (http://www.sonicwriting.org)
Mubert’s unique algorithm creates and streams electronic generative music in real time, based on the samples from our extensive database. Every day new samples are added to the stream to support endless and seamless flow of one-of-a-kind work music.
About Mubert
Mubert is an AI music solution for any business, platform & use case. Mubert delivers worldwide copyright-protected AI-generated music via API. Infinite customization, cost-efficiency & legal compliance can help businesses fix key music industry pain points. All music is royalty free & cleared for any cases, both for personal & commercial usage. Pricing: 0.01c per minute, or $299 per month (startups) / $1,000 per month (large business). https://mubert.com/blog/ https://mubert.com/products/streaming/
To facilitate their ability to connect with audiences and make a positive global impact, Mubert is launching a new extension that allows users to play unlimited streams of AI-powered music in their shows without any risks of DMCA takedowns and other copyright issues.
Subscribe with Music for Live Streams to fill your background on YouTube, Twitch, Facebook & 30 other popular services with Chill, Ambient, Trance, and other high-quality music curated by Mubert. https://streamers.mubert.com/ – DMCA-safe music stations for live streams. Compatible with YouTube, Facebook, Twitch & other streaming services. Premium: $4.99/month.
AI Composition with Real Instruments | Amper Music: Amper’s music is created from scratch using millions of individual samples and thousands of purpose-built instruments. You won’t find our proprietary samples anywhere else. (www.ampermusic.com)
Amper Music: ‘AI’s not going to replace your job – it’s going to change your job’: Does this mean Amper is going to use part of that new funding round to build more tools for artists to create with AI? Silverstein is careful not to commit to anything in the near future, saying that as a small and growing company, Amper has to be “very judicious” in how it allocates resources for now. (musically.com)
How AI-generated music is changing the way hits are made: Music-making AI software has advanced so far in the past few years that it’s no longer a frightening novelty; it’s a viable tool. For the second episode of The Future of Music, I went to LA to visit the offices of AI platform Amper Music and the home of Taryn Southern, a pop artist who is working with Amper and other AI platforms to co … (www.theverge.com)
How technology can democratize music: From the printing press to the digital camera, innovation has often democratized the creative arts. In this forward-looking talk, music producer Drew Silverstein demos a new software that allows anyone to create professional-grade music without putting human musicians out of work. (www.ted.com)
Track licenses are available for purchase at a few different tiers based on the intended usage. All licenses (regardless of tier) are royalty-free, permit global distribution of content, and are valid in perpetuity.
Personal License — $29 (This tier is meant for your personal or educational project needs. The licensing does not cover ad spend or promotions. For example, a video made as a hobby.)
Enterprise Basic License — $74 (This tier is meant for internal or external professional projects and cannot be supported with an ad spend. For example, an internal training video that will be shared within your company only or public tutorial content for latest feature release.)
Branded Content License — $399 (This tier is meant for professional projects that will be posted on your own social channel or website and can be supported with an ad spend. For example, a YouTube video on your channel.)
Online Ad License — $1,199 (This tier is meant for professional projects that can be both used in ads and supported with an ad spend. For example, a video that will run as a YouTube pre-roll or Instagram ad.)
All Media/Multimedia — Request a quote (This tier includes a combination of the above plus additional licensing needs. Please contact us so that we can evaluate your use-case and provide a quote.)
What does Amper do?
Amper is an AI music company. We develop enterprise products to help people make music using our AI Composer technology. Today we offer two products—our music creation platform Score, and an API that allows companies to integrate our music composition capabilities into their own tools.
What is Score?
Score is a tool for content creators to quickly make music to accompany videos, podcasts, games, and other types of content using our AI Composer. Score is designed to significantly reduce the time it takes to source music and adapt it to fit a particular project.
Who is Score intended for?
Score was built for businesses who create a lot of content and are looking for ways to source high quality music more efficiently. Video editors, podcast producers, and video game designers can all benefit from Score’s capabilities.
How is Score different from stock music sites?
Each track Score outputs is composed by our AI in real-time and is always custom to your project. Collaborating with Score allows you to tailor a broad variety of your track’s musical attributes, including length, structure, genre, mood, instrumentation, and tempo.
Additionally, all the sounds you hear in Score are samples of real instruments recorded at Amper’s Los Angeles studio. Unlike stock music, which is often made using widely available sample “packs”, Score’s sounds are proprietary. This makes Amper’s music truly unique.
The Best Free Live Streaming Software on Windows and Mac | Streamlabs: The most popular streaming platform for Twitch, YouTube and Facebook. Cloud-based and used by 70% of Twitch. Grow with Streamlabs Open Broadcast Software (OBS), alerts, 1000+ overlays, analytics, chatbot, tipping, merch and more. (streamlabs.com)
Music Ally Is A Knowledge Company – NEW! Learn: Music Ally has launched a brand new Learning Hub for the music industry, with more than 30 modules of certified video content at launch, combined with relevant supporting materials from the rest of Music Ally’s information and insight. (musically.com)
Every tab open on my screen in the quest to figure out how to write generative music… Some very useful tutorials (including an Ableton Live project with all the settings – just need to upload bridge samples) and one slightly concerning automatic music generator which seems to be based on avoiding paying copyright to artists. So not down with that, but curious about what it does. Also the generative.fm station of endless compositions, which I find quite soothing, some random plugins and synths.
Starting with a few examples from the beautiful composition by Loscil based on ghost ships, to an excellent visual presentation taking you through the genesis of generative music, a few nice images, and software possibilities to explore.
For his 1965 composition “It’s Gonna Rain”, Steve Reich designed a generative mechanical system.
It was a system of two tape recorders.
Reich had made field recordings of speech on tape.
He made two short identical loops from his recordings. One for each tape recorder.
The tape recorders were set to slightly different speeds. One would play a little faster than the other.
This started a phasing process. A continuously changing soundscape was generated from the short loops.
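The same mechanism translates almost directly to the browser. A rough sketch of the idea (not the guide’s own code), assuming Tone.js and a placeholder loop file:

```js
// Two copies of the same short loop, one running fractionally faster,
// slowly drifting out of phase - the tape-loop idea in the browser.
// "loop.wav" is a placeholder for any short speech or bridge loop.
import * as Tone from "tone";

const loopA = new Tone.Player({ url: "loop.wav", loop: true }).toDestination();
const loopB = new Tone.Player({ url: "loop.wav", loop: true }).toDestination();

async function start() {
  await Tone.start();
  await Tone.loaded();
  loopB.playbackRate = 1.01; // the second "tape recorder" runs 1% fast
  loopA.start();
  loopB.start();
}
```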
Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.
Nodal is generative software for composing music, interactive real-time improvisation, and a musical tool for experimentation and play. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you’ve used before. You can play sounds using Nodal’s built-in synthesiser or any MIDI compatible hardware or software instrument.
Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.
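The underlying idea – a graph of note-events walked by virtual players – is simple enough to sketch outside Nodal. A toy JavaScript version (nothing to do with Nodal’s actual implementation), assuming Tone.js:

```js
// Toy version of the Nodal idea (not Nodal itself): nodes are musical events,
// edges carry travel times, and a virtual "player" walks the network forever.
import * as Tone from "tone";

const synth = new Tone.Synth().toDestination();

// A tiny hand-made network: each node plays a note, then follows one of its edges.
const network = {
  a: { note: "C4", edges: [{ to: "b", wait: 0.5 }, { to: "c", wait: 1.0 }] },
  b: { note: "E4", edges: [{ to: "c", wait: 0.5 }] },
  c: { note: "G3", edges: [{ to: "a", wait: 1.5 }] },
};

function step(id) {
  const node = network[id];
  synth.triggerAttackRelease(node.note, "8n");
  // Choose one outgoing edge at random; its length sets the travel time.
  const edge = node.edges[Math.floor(Math.random() * node.edges.length)];
  setTimeout(() => step(edge.to), edge.wait * 1000); // loose timing, fine for a sketch
}

async function start() {
  await Tone.start(); // requires a user gesture in the browser
  step("a");
}
```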
Nodal is compatible with DAW software such as Ableton Live, Logic Studio, Digital Performer and Garage Band. It can transmit and receive MIDI sync. You can edit a network using the mouse and keyboard, and optionally a MIDI keyboard. Nodal recognises and sends notes, sync, continuous controller and pitch bend information.
Nodal is developed at SensiLab, Monash University.
How to Create Amazing Generative Music using Ableton Live and Max Devices – Tutorial
Fantastic Devices and how to use them… to create surprising generative jams with hardware or soft synths, inside Ableton Live. I can just listen to this for hours! And it can be your best solution if you need to create a lot of musical content in a short span of time. The inspiration for this Jam/Tutorial came when I started using the Modular Sequencer by Sound Manufacture. I find it a smart and useful device, as it easily allows you to do things that would require tons of other devices, and a lot of time, to make them work in Ableton Live. Plus it has its own unique features, a lot of them! And it can also communicate with other Sound Manufacture devices, to considerably expand the range of options. Here you can find the devices shown in this video. Modular Sequencer: https://www.soundmanufacture.net/modu…
Here is the Template Project download link; you’ll find the project in the zip file contained in the Album download. Just set the price to ZERO and download for free.
As shown in the video, I prepared a streamlined version of my jam, with just stock and free devices. In this Ableton Template Project folder you can find:
– An Ableton Live 10 Project containing everything
– Instrument Racks with MIDI effects to generate random notes and chords
– A Drum Rack with MIDI effects to generate random rhythms, plus a kit of custom analog drum samples made by me, using my analog synths
– All the tracks used in my jams, featuring all the modulation I created, but without the third-party devices and effects.
Here is a brief tutorial on making generative music with Live. This tutorial first appeared on the Ableton Live Forum in 2007 – so the examples were made using Live 6.07 and will work with any version of Live later than that.
What is generative music?
Generative music is where you provide logical or programmatic ‘seeds’ and the computer grows you some music based on the parameters you set out; Brian Eno is probably the most famous practitioner.
Why make generative music, or get Live to make IT for you?
Generative music is a different beast from making a track of your own; it is more like planting a garden. In fact a generative piece is like a glorified wind chime, so we could equally ask ourselves “why do people have wind chimes rather than stand in the garden hitting aluminium themselves?” The answer would be the same – the sounds which result may not be “music”, but they can be good background noise, and in that way quite beautiful and surprisingly interesting as ambience. Furthermore, the underlying generation can be tinkered with to deliver a wide range of what appears to be expression. A generative piece will sustain a low level of interest for hours!
Live is quite a good environment for creating generative music and I have two methods for doing so: an audio-based method and a MIDI method.
I will focus on the more MIDI-oriented method here. There are limitations to how far you can go with Live and generative music, but what you can achieve is entertaining.
How it is achieved in Live
To make generative music we need to make Live play or do something whenever a condition is met, and we get flexibility by giving the program some freedom. Instead of saying “EVERY time a bar starts, play a C minor chord”, we want variation. An example might be: “Sometimes play a chord (from a selection of chords) on either the first or third beat, and if you do, then perhaps play one of these related chords after it, or perhaps think about playing this tune instead.”
So now we have a random event which is constrained by a limited set of outcomes; it sounds passably like music.
Cascading The Variations
I set a ‘condition’ with two Velocity plugins, showing how we can set two different outcomes using ‘if – else’. Now imagine dividing the random values up into many zones; this way you can create little thematic areas. You can start to go further down the fractal tree: each conditional zone can have a new random value generated to make new notes for itself. Each ‘conditional zone’ can be a different part of your song – the ‘riff’, the ‘bassline’, the ‘chords’. Each of them can watch a zone and do some more complicated ‘riff’- or ‘chord’-related actions any time that rack is triggered by the main condition.
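The same cascading logic can be sketched outside Live, to make the structure explicit – one random value per bar, divided into zones, each zone drawing from its own constrained pool of material. A rough JavaScript sketch (not part of the original Live tutorial), assuming Tone.js; the chords and notes are arbitrary examples:

```js
// Sketch of the "conditional zones" idea: one random value per bar is split
// into zones, and each zone picks from its own constrained set of material.
import * as Tone from "tone";

const synth = new Tone.PolySynth(Tone.Synth).toDestination();

const zones = [
  { max: 0.4, material: [["C3", "Eb3", "G3"], ["Ab2", "C3", "Eb3"]] }, // chords
  { max: 0.7, material: [["G4"], ["Bb4"], ["C5"]] },                   // riff notes
  { max: 1.0, material: [[]] },                                        // silence
];

// Once per measure: roll a value, find its zone, play something from that zone.
Tone.Transport.scheduleRepeat((time) => {
  const roll = Math.random();                    // the "random velocity"
  const zone = zones.find((z) => roll <= z.max); // which zone did it land in?
  const pick = zone.material[Math.floor(Math.random() * zone.material.length)];
  if (pick.length) synth.triggerAttackRelease(pick, "2n", time);
}, "1m");

async function start() {
  await Tone.start();
  Tone.Transport.start();
}
```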
The video takes inspiration from Brian Eno‘s concept of Generative Music. Eno has been creating systems for generating music since the ’70s.
While he initially applied this approach to ambient music, on albums like Music For Airports, his later work has explored creating systems for generating other types of music, too.
This video looks at exploring this concept, using a variety of hardware and software systems, ranging from iOS apps to desktop DAWs and modular synthesizers.
After making generative music systems on generative.fm for the better part of a year now, I’ve received numerous requests for an explanation of how I create the systems featured on the site. Rather than replying to everyone individually, I thought it would be best to explain my process here for anyone interested.
generative.fm
Web Audio API
The Web Audio API is a relatively new browser API which is well supported. It enables web developers to play, synthesize, control, process, and record audio in the browser. As you can probably guess, this is the linchpin technology I use to create browser-based generative music systems. Boris Smus wrote an excellent, short book on the subject titled Web Audio API: Advanced Sound for Games and Interactive Apps, which I recommend.
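A minimal example of the raw API (my own sketch, not from the book or the guide): fetch a sample, decode it, and play it back at a different rate. Note that a plain buffer source shifts pitch and duration together; the file name is a placeholder:

```js
// Minimal raw Web Audio (no libraries): fetch, decode, and play a sample
// slower and lower. "bridge-sample.wav" is a placeholder URL.
const ctx = new AudioContext();

async function playSample(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await ctx.decodeAudioData(arrayBuffer);

  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.playbackRate.value = 0.8; // shifts pitch and duration together
  source.connect(ctx.destination);
  source.start();
}

// Browsers only start audio after a user gesture.
document.addEventListener("click", () => playSample("bridge-sample.wav"), { once: true });
```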
Tone.js
Tone.js is a JavaScript framework for using the Web Audio API. It offers an abstract layer on top of the Web Audio API which should be familiar to musicians and producers, with a vast array of synthesizers, effects, and filters, as well as related utility functions for things like converting scientifically notated pitches to their frequencies and back. Additionally, it greatly simplifies access to an accurate timing system. While this library is not strictly necessary for making generative music systems in the browser, I’ve never built one without it. It’s very rare that I find myself interacting directly with the Web Audio API rather than using this fantastic library.
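A small taste of those conveniences (my own sketch, not the guide’s code): note names, pitch conversion, and scheduling against the Transport:

```js
// Musical note names, pitch conversion, and accurate repeating scheduling.
import * as Tone from "tone";

const synth = new Tone.Synth().toDestination();

console.log(Tone.Frequency("A4").toFrequency());          // 440
console.log(Tone.Frequency("A4").transpose(3).toNote());  // "C5"

// Play a note on every quarter-note once the Transport starts.
Tone.Transport.scheduleRepeat((time) => {
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n");

async function start() {
  await Tone.start();
  Tone.Transport.start();
}
```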
It’s certainly possible to synthesize sounds with Tone.js and the Web Audio API, but it’s not something I’ve explored much (read: I suck at it). Instead, I prefer to use recorded audio samples which I play and manipulate.
In addition to using sample libraries, sometimes I record my own audio samples for use on the site. I record with a Rode NT1-A microphone or direct from my Line 6 POD HD500X into a Focusrite Scarlett 2i4. This is all relatively cheap gear which I purchased used. Occasionally when I record, I reconstruct my “recording booth” which I designed and made out of PVC pipe and movers’ blankets to dampen sound. Though, I usually can’t be bothered.
Tonal
tonal is another JavaScript library providing utility functions related to music theory. While not every piece requires this library, it’s invaluable for the ones that do. The library contains all sorts of helpful functions which do things like returning all the notes or intervals in a given chord or scale, inverting chords, transposing notes and intervals up or down a given amount of semitones, and so much more.
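A few of those helpers in action (my own sketch; the exact import path varies between tonal versions):

```js
// Music-theory utilities from the tonal package.
import { Chord, Note, Scale } from "@tonaljs/tonal";

console.log(Scale.get("C major").notes);      // ["C", "D", "E", "F", "G", "A", "B"]
console.log(Chord.get("Cmaj7").notes);        // ["C", "E", "G", "B"]
console.log(Note.transpose("C4", "3m"));      // "Eb4" - up a minor third
console.log(Scale.get("D dorian").intervals); // ["1P", "2M", "3m", ...]
```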
Creative Commons License
For anyone and everyone
Music from Generative.fm is officially licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Generally, you are welcome to use or modify the music for any purpose, including commercially or publicly, provided you give appropriate credit.
It’s as easy as adding something like this to your work:
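A generic CC BY 4.0 credit, with the piece and creator filled in, might read:

Music: “[piece title]” by [creator], from generative.fm – licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)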
Okay, let’s get this out of the way – no one should describe this as an “AI DJ.” There is no autonomous machine intelligence acting as a DJ. On the contrary, the mushy digital mash-up textures on offer here are unique, fresh, and distinctively sound like something that came from Moisés. Part analysis, part generative artwork, part creative remix, OÍR is simultaneously the intentional work of an artist and a machine reflection of a wide variety of streamed DJ sets.
Technically speaking, says Moisés, “the system is a compendium of OpenAI’s Jukebox, trained from scratch, StyleGAN2 for visuals.” “The mixing and DJ ‘transitions’ are done with a MIR [Music Information Retrieval] ‘automatic mixing’ Python script,” he says.
But it’s worthwhile also understanding his artistic intention:
OÍR stems from my ongoing research on AI, sound synthesis, and electronic music.
Since starting my adventure into Deep Learning systems for music a couple of years ago, I’ve asked myself whether Deep Learning (AI) is a tool or a medium. Right now I’ve come to the conclusion that it can be both, and this is exactly what I’m trying to explore with this project.
When we talk about Deep Learning as medium, there are three particular processes engaged when working with generative systems: curation of the data, training and monitoring of the algorithm as it ‘learns,’ and generating new synthetic media. Rinse and repeat.
There are a couple of aspects that interest me from this process. Each time you train the AI algorithm, its weights and biases or what it has ‘learned’ change over time — depending on the data you are having it learn from. The algorithm generates patterns present in these vast amounts of images and music, as is the case of OÍR, and these can be changing as the ‘learning’ process continues.
So this quality of a constantly changing and morphing generative algorithm is exactly what I want to explore with OÍR, and what better way to do it than through electronic dance music and techno culture.
I chose a channel as the canvas for the first episode, or EPOCH, of OÍR with a selection from the archive from HÖR Berlin, because I feel this channel has done the amazing job of generating a collective culture, specifically within techno and electronic music. I wanted to explore which patterns are emerging from this culture – which patterns can be synthesized, both visual and sonic, from all these sets and different approximations of techno, over 1,400+ hours and counting.
My desire with this art project is not to automatize or replace DJ’s or electronic musicians in any way, but rather have OÍR be a sort of ‘live generative archive’, as I did before with my album 𝕺𝖐𝖆𝖈𝖍𝖎𝖍𝖚𝖆𝖑𝖎 in relation to the Mexican 3ball electronic music genre, of certain cultural moments in electronic music which are increasingly existing on big tech platforms and the internet. By the way, OÍR means “to listen” in Spanish.
Here is a collection of how-tos and tutorials in many different languages, covering a number of different topics. The following topics have been suggested for merging into the list below: basic audio, audio synths, audio filters, video effects, video synths, 3D graphics, interfacing with the physical world (HID, Arduino, etc.), network programming.
SodaLib: a data sonification framework for creative coding environments – Agoston Nagy
SodaLib
In the ever-growing area of data-driven interfaces (embedded systems, social activities), it becomes more important to have effective methods to analyze complex data sets, observing them from different perspectives, understanding their features and dimensions, and accessing, interpreting and mapping them in meaningful ways. With SodaLib, it is easy to map live data (sensors, web APIs, etc.), large prerecorded datasets (tables, logs, Excel files), or even unusual sources (images, 3D environments) to recognizable audible patterns through a set of sonification methods, including parameter mapping and event-based sonification.
SodaLib is an open source, cross platform, multipurpose sonification tool for designers, programmers and creative practitioners.
Sonification can be used to hear information in a set of data that might be otherwise difficult to perceive; common examples include Geiger counters, sonar and medical monitoring [ECG]. When creating sonification-based algorithmic compositions, we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.
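A minimal parameter-mapping sketch in the browser (not SodaLib itself), where each data point becomes a short tone whose pitch is scaled from the value’s range – assuming Tone.js and a made-up data series:

```js
// Minimal event-based parameter mapping: each data point becomes a short
// tone whose pitch is linearly scaled from the data range.
import * as Tone from "tone";

const data = [3, 7, 2, 9, 4, 6, 1, 8]; // stand-in readings

const synth = new Tone.Synth().toDestination();

// Linear map from the data range onto a frequency range in Hz.
function toFrequency(value, min, max, loHz = 200, hiHz = 1600) {
  return loHz + ((value - min) / (max - min)) * (hiHz - loHz);
}

async function sonify() {
  await Tone.start();
  const min = Math.min(...data);
  const max = Math.max(...data);
  data.forEach((value, i) => {
    // One short tone per data point, four per second.
    synth.triggerAttackRelease(toFrequency(value, min, max), "16n", Tone.now() + i * 0.25);
  });
}
```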