Generative Music Dive

Every tab open on my screen in the quest to figure out how to write generative music… Some very useful tutorials (including an Ableton Live project with all the settings – just need to upload bridge samples), and one slightly concerning automatic music generator which seems to be built on avoiding paying copyright fees to artists. So not down with that, but curious about what it does. Also the generative.fm station of endless compositions, which I find quite soothing, and some random plugins and synths.

Starting with a few examples, from the beautiful composition by Loscil based on ghost ships to an excellent visual presentation taking you through the genesis of generative music, plus a few nice images and software possibilities to explore.

LOSCIL ADRIFT

http://loscil.ca/
http://loscil.ca/adrift/

HOW GENERATIVE MUSIC WORKS: A PERSPECTIVE

This presentation is about making music by designing systems that make music.

https://teropa.info/loop/#/title

For his 1965 composition “It’s Gonna Rain”, Steve Reich designed a generative mechanical system.

It was a system of two tape recorders.

Reich had made field recordings of speech on tape.

He made two short identical loops from his recordings. One for each tape recorder.

The tape recorders were set to slightly different speeds.
One would play a little faster than the other.

This started a phasing process. A continuously changing soundscape was generated from the short loops.

Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.

Steve Reich, Music as a Gradual Process, 1968.
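As a rough sketch of how that phasing idea translates into code: the snippet below plays two copies of the same loop, with one running a touch faster so they slowly drift out of phase. This is not Reich’s setup, just the same principle in the browser, assuming Tone.js (v14-style API), a placeholder sample file speech-loop.wav, and a #play button on the page.

```javascript
// Two "tape recorders" as looping players on the same sample;
// loopB runs 1% faster, so the copies gradually phase against each other.
import * as Tone from "tone";

const loopA = new Tone.Player({ url: "speech-loop.wav", loop: true }).toDestination();
const loopB = new Tone.Player({ url: "speech-loop.wav", loop: true }).toDestination();
loopB.playbackRate = 1.01; // the slightly faster machine

async function start() {
  await Tone.start();  // browsers unlock audio only after a user gesture
  await Tone.loaded(); // wait until both sample buffers are ready
  loopA.start();
  loopB.start();
}

document.querySelector("#play").addEventListener("click", start);
```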

On And On And On: A Guide to Generative Electronic Music

By Samuel Tornow · July 19, 2019

https://daily.bandcamp.com/lists/generative-music-guide

GENERATIVE MUSIC TOOLS

NODAL

https://en.wikipedia.org/wiki/Nodal_(software)

Nodal is generative software for composing music, interactive real-time improvisation, and a musical tool for experimentation and play. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you’ve used before. You can play sounds using Nodal’s built-in synthesiser or any MIDI compatible hardware or software instrument.

Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.

Nodal is compatible with DAW software such as Ableton Live, Logic Studio, Digital Performer and Garage Band. It can transmit and receive MIDI sync. You can edit a network using the mouse and keyboard, and optionally a MIDI keyboard. Nodal recognises and sends notes, sync, continuous controller and pitch bend information.

Nodal is developed at SensiLab, Monash University.
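To make the node-and-edge idea above concrete, here is a toy traversal in plain JavaScript – not Nodal’s code, just an illustration of a virtual player walking a small made-up graph, where each node holds a note and the chosen edge’s length sets the time until the next one.

```javascript
// A tiny node network: each node has a note and outgoing edges,
// and each edge carries a duration in beats.
const nodes = {
  A: { note: "C4", edges: [{ to: "B", beats: 1 }, { to: "C", beats: 2 }] },
  B: { note: "E4", edges: [{ to: "C", beats: 1 }] },
  C: { note: "G4", edges: [{ to: "A", beats: 2 }, { to: "B", beats: 0.5 }] },
};

function traverse(start, steps) {
  const events = [];
  let current = start;
  let time = 0;
  for (let i = 0; i < steps; i++) {
    events.push({ time, note: nodes[current].note });
    const edges = nodes[current].edges;
    const edge = edges[Math.floor(Math.random() * edges.length)]; // pick an outgoing edge at random
    time += edge.beats; // a longer edge means more time before the next node sounds
    current = edge.to;
  }
  return events;
}

console.log(traverse("A", 8)); // e.g. [{ time: 0, note: "C4" }, { time: 2, note: "G4" }, ...]
```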

Melda Production VST Convolution

https://www.meldaproduction.com/MFreeFXBundle

Modular Synth Sequencer

https://www.soundmanufacture.net/modularsequencer/

Generative Modular Music

How to Create Amazing Generative Music using Ableton Live and Max Devices – Tutorial

Fantastic Devices and how to use them… to create surprising generative jams with hardware or soft synths, inside Ableton Live. I can just listen to this for hours! And it can be your best solution if you need to create a lot of musical content in a short span of time. The inspiration for this Jam/Tutorial came when I started using the Modular Sequencer by Sound Manufacture. I find it a smart and useful device, as it easily allows you to do things that would otherwise require tons of other devices and a lot of time to make them work in Ableton Live. Plus it has its own unique features – a lot of them! And it can also communicate with other Sound Manufacture devices to considerably expand the range of options. Here you can find the devices shown in this video. Modular Sequencer: https://www.soundmanufacture.net/modu…

Chord-o-mat: https://www.soundmanufacture.net/chor…

Generative Music Techniques With Ableton Live

Here is the Template Project download link; you’ll find the project in the zip file contained in the Album download. Just set the price to zero and download for free.

As shown in the video, I prepared a streamlined version of my jam, with just stock and free devices. In this Ableton Template Project folder you can find:
– An Ableton Live 10 Project containing everything
– Instrument Racks with MIDI effects to generate random notes and chords
– A Drum Rack with MIDI effects to generate random rhythms, plus a kit of custom analog drum samples I made using my analog synths
– All the tracks used in my jams, featuring all the modulation I created, but without the third-party devices and effects

Wave JUNCTION Synth

https://sonicstate.com/shop/wj/

Generating Random Music In Ableton Live

Steve Angstrom: Generative Music Tutorial

Generative music in Live

Here is a brief tutorial on making generative music with Live. This tutorial first appeared on the Ableton Live Forum in 2007, so the examples were made using Live 6.07 and will work with any version of Live later than that.

What is generative music?

Generative music is where you provide logical or programmatic ‘seeds’ and the computer grows you some music based on the parameters you set out. Brian Eno is probably the most famous practitioner.

Why make generative music, or get Live to make it for you?

Generative music is a different beast from making a track of your own; it is more like planting a garden. In fact a generative piece is like a glorified wind chime, so we could equally ask ourselves “why do people have wind chimes rather than stand in the garden hitting aluminium themselves?” The answer would be the same – the sounds which result may not be “music”, but they can be good background noise, quite beautiful in that way, and surprisingly interesting as ambience. Furthermore, the underlying generation can be tinkered with to deliver a wide range of what appears to be expression. A generative piece will sustain a low level of interest for hours!

Live is quite a good environment for creating generative music, and I have two methods for doing so: an audio-based method and a MIDI method.

I will focus on the more MIDI-oriented method here. There are limitations to how far you can go with Live and generative music, but what you can achieve is entertaining.

How it is achieved in Live

To make generative music we need to make Live play or do something whenever a condition is met, and we get flexibility by giving the program some freedom. Instead of saying “EVERY time a bar starts, play a C minor chord”, we want variation. An example might be: “Sometimes play a chord (from a selection of chords) on either the first or third beat, and if you do, then perhaps play one of these related chords after it, or perhaps think about playing this tune instead.”

So now we have a random event which is constrained by a limited set of outcomes, and it sounds passably like music.
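A rough sketch of that constrained-random idea in plain JavaScript – the tutorial builds it with Live’s MIDI devices, and the chord sets and probabilities below are invented for illustration:

```javascript
// "Sometimes play a chord on beat 1 or 3, and maybe follow it with a related chord."
const chords = [["C3", "Eb3", "G3"], ["Ab2", "C3", "Eb3"], ["Bb2", "D3", "F3"]];
const related = { 0: [1, 2], 1: [0], 2: [0, 1] }; // which chords are allowed to follow which

function generateBar() {
  const events = [];
  if (Math.random() < 0.6) {                    // "sometimes" play a chord...
    const beat = Math.random() < 0.5 ? 0 : 2;   // ...on the first or third beat (zero-indexed)
    const idx = Math.floor(Math.random() * chords.length);
    events.push({ beat, notes: chords[idx] });
    if (Math.random() < 0.4) {                  // perhaps follow it with a related chord
      const followers = related[idx];
      const next = followers[Math.floor(Math.random() * followers.length)];
      events.push({ beat: beat + 1, notes: chords[next] });
    }
  }
  return events; // an empty bar is a perfectly valid outcome too
}

console.log(generateBar());
```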

Cascading The Variations

I set a ‘condition’ with two Velocity plugins, showing how we can set two different outcomes using ‘if – else’. Now imagine dividing the random values up into many zones; this way you can create little thematic areas.
You can start to go further down the fractal tree: each conditional zone can have a new random value generated to make new notes for itself. Each ‘conditional zone’ can be a different part of your song – the ‘riff’, the ‘bassline’, the ‘chords’. Each of them can watch a zone and perform more complicated ‘riff’- or ‘chord’-related actions any time that rack is triggered by the main condition.
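Something like this toy dispatcher captures the cascade – one random value per bar selects a zone, and each zone’s generator rolls further random values of its own. The zones and note pools here are made up; in Live this is done with Velocity devices and racks.

```javascript
// Placeholder generators for each "conditional zone".
function riff()     { return { part: "riff",     notes: pickSome(["C4", "D4", "Eb4", "G4"]) }; }
function bassline() { return { part: "bassline", notes: pickSome(["C2", "G2"]) }; }
function chords()   { return { part: "chords",   notes: [["C3", "Eb3", "G3"]] }; }

function pickSome(pool) {
  // each candidate note gets its own coin flip, so every trigger comes out different
  return pool.filter(() => Math.random() < 0.5);
}

function nextBar() {
  const zone = Math.random();        // the "main condition", like the random velocity in Live
  if (zone < 0.4) return riff();     // zone 1
  if (zone < 0.7) return bassline(); // zone 2
  return chords();                   // zone 3
}

console.log(nextBar());
```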

READ FULL TUTORIAL HERE

SYNTHTOPIA TECHNIQUES FOR GENERATIVE MUSIC

The video takes inspiration from Brian Eno’s concept of Generative Music. Eno has been creating systems for generating music since the 1970s.

While he initially applied this approach to ambient music, on albums like Music For Airports, his later work has explored creating systems for generating other types of music, too.

This video looks at exploring this concept, using a variety of hardware and software systems, ranging from iOS apps to desktop DAWs and modular synthesizers.

By Synthhead

GENERATIVE FM

Play eternally evolving ambient music on generative.fm

How to Make Generative Music in the Browser

My personal process by Alex Bainter on MEDIUM

After making generative music systems on generative.fm for the better part of a year now, I’ve received numerous requests for an explanation of how I create the systems featured on the site. Rather than replying to everyone individually, I thought it would be best to explain my process here for anyone interested.

generative.fm

Web Audio API

The Web Audio API is a relatively new browser API which is well supported. It enables web developers to play, synthesize, control, process, and record audio in the browser. As you can probably guess, this is the linchpin technology I use to create browser-based generative music systems. Boris Smus wrote an excellent, short book on the subject titled Web Audio API: Advanced Sound for Games and Interactive Apps which I recommend.
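For a feel of the raw API, here is a minimal browser-only sketch: an oscillator through a gain envelope, with a pitch chosen at random from a small set of frequencies. The frequencies, timing, and the #play button are arbitrary choices for the example, and browsers only start audio after a user gesture.

```javascript
const context = new AudioContext();

function playRandomTone() {
  const osc = context.createOscillator();
  const gain = context.createGain();

  // an arbitrary pentatonic-ish set of frequencies, in Hz
  const pitches = [220, 247.5, 277.2, 330, 370];
  osc.frequency.value = pitches[Math.floor(Math.random() * pitches.length)];

  // short fade-in and slow fade-out so the tones don't click
  const now = context.currentTime;
  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(0.2, now + 0.1);
  gain.gain.linearRampToValueAtTime(0, now + 2);

  osc.connect(gain).connect(context.destination);
  osc.start(now);
  osc.stop(now + 2);
}

document.querySelector("#play").addEventListener("click", () => {
  context.resume(); // unlock the AudioContext on the user gesture
  setInterval(() => { if (Math.random() < 0.5) playRandomTone(); }, 1000);
});
```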

Tone.js

Tone.js is a JavaScript framework for using the Web Audio API. It offers an abstraction layer on top of the Web Audio API which should be familiar to musicians and producers, with a vast array of synthesizers, effects, and filters, as well as related utility functions for things like converting scientifically notated pitches to their frequencies and back. Additionally, it greatly simplifies access to an accurate timing system. While this library is not strictly necessary for making generative music systems in the browser, I’ve never built one without it. It’s very rare that I find myself interacting directly with the Web Audio API rather than using this fantastic library.

I highly recommend “JavaScript Systems Music” by Tero Parviainen as an introduction to creating music in the browser with Tone.js and the Web Audio API.
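A small generative loop of the kind Tone.js makes easy – not code from generative.fm, just a sketch assuming a v14-style Tone.js API and a #play button on the page: every half note there is a 70% chance of playing a random note from a pentatonic set.

```javascript
import * as Tone from "tone";

const synth = new Tone.PolySynth(Tone.Synth).toDestination();
const notes = ["C4", "D4", "E4", "G4", "A4"]; // C major pentatonic

// every half note, maybe play a random note from the set
Tone.Transport.scheduleRepeat((time) => {
  if (Math.random() < 0.7) {
    const note = notes[Math.floor(Math.random() * notes.length)];
    synth.triggerAttackRelease(note, "2n", time);
  }
}, "2n");

async function start() {
  await Tone.start(); // audio must be unlocked by a user gesture
  Tone.Transport.start();
}
document.querySelector("#play").addEventListener("click", start);
```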

Samples

It’s certainly possible to synthesize sounds with Tone.js and the Web Audio API, but it’s not something I’ve explored much (read: I suck at it). Instead, I prefer to use recorded audio samples which I play and manipulate.

There are plenty of libraries full of free or cheap audio samples out there, but the most significant ones I’ve used at the time of writing are the Community Edition of Versilian Studios Chamber Orchestra 2, the Versilian Community Sample Library, and the Sonatina Symphonic Orchestra. The generosity of the providers of these and other free libraries inspires me to release my work for free as well.

In addition to using sample libraries, sometimes I record my own audio samples for use on the site. I record with a Rode NT1-A microphone or direct from my Line 6 POD HD500X into a Focusrite Scarlett 2i4. This is all relatively cheap gear which I purchased used. Occasionally when I record, I reconstruct my “recording booth”, which I designed and made out of PVC pipe and movers’ blankets to dampen sound. Though I usually can’t be bothered.
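A hedged sketch of that sample-based approach – not generative.fm’s code – in which a single placeholder sample, piano-note.wav, is re-pitched by Tone.Sampler and washed through a long reverb at loosely random times (again assuming a v14-style Tone.js API and a #play button):

```javascript
import * as Tone from "tone";

const reverb = new Tone.Reverb({ decay: 12, wet: 0.6 }).toDestination();
const sampler = new Tone.Sampler({ urls: { C4: "piano-note.wav" } }).connect(reverb);

const notes = ["C4", "E4", "G4", "B4", "D5"];

// every two measures, usually re-pitch and replay the sample
Tone.Transport.scheduleRepeat((time) => {
  if (Math.random() < 0.8) {
    const note = notes[Math.floor(Math.random() * notes.length)];
    sampler.triggerAttackRelease(note, "1m", time);
  }
}, "2m");

async function start() {
  await Tone.start();  // unlock audio on the user gesture
  await Tone.loaded(); // wait for the sample to load
  Tone.Transport.start();
}
document.querySelector("#play").addEventListener("click", start);
```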

Tonal

tonal is another JavaScript library providing utility functions related to music theory. While not every piece requires this library, it’s invaluable for the ones that do. The library contains all sorts of helpful functions which do things like returning all the notes or intervals in a given chord or scale, inverting chords, transposing notes and intervals up or down a given number of semitones, and so much more.
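A few of those helpers in action, assuming a recent version of the tonal package:

```javascript
import { Scale, Chord, Note } from "tonal";

console.log(Scale.get("C major").notes); // ["C", "D", "E", "F", "G", "A", "B"]
console.log(Chord.get("Cmaj7").notes);   // ["C", "E", "G", "B"]
console.log(Note.transpose("C4", "5P")); // "G4" (up a perfect fifth)

// e.g. pick a random chord tone in a chosen octave for a generative part
const tones = Chord.get("Am7").notes.map((n) => `${n}3`); // ["A3", "C3", "E3", "G3"]
const pick = tones[Math.floor(Math.random() * tones.length)];
console.log(pick);
```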

Creative Commons License

For anyone and everyone

Music from Generative.fm is officially licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Generally, you are welcome to use or modify the music for any purpose, including commercially or publicly, provided you give appropriate credit.

It’s as easy as adding a short attribution line to your work (see the examples on generative.fm).

See “How to give attribution” from Creative Commons. Please pretend to read the official licensing terms before using music from this service.

You can show your appreciation by making a donation.

Other Licensing Arrangements

For special cases and people with fancy pants

If you prefer not to use the Creative Commons licensing, send an email with your offer to alex@alexbainter.com.

SOUND DESIGN FOR AMBIENT MUSIC

https://www.soundonsound.com/techniques/sound-design-ambient-music

GENERATIVE MUSIC WIKIPEDIA

https://en.wikipedia.org/wiki/Generative_music

THIS IS NOT AN AI DJ

Okay, let’s get this out of the way – no one should describe this as an “AI DJ.” There is no autonomous machine intelligence acting as a DJ. On the contrary, the mushy digital mash-up textures on offer here are unique, fresh, and distinctively sound like something that came from Moisés. Part analysis, part generative artwork, part creative remix, OÍR is simultaneously the intentional work of an artist and a machine reflection of a wide variety of streamed DJ sets.

Technically speaking, says Moisés, “the system is a compendium of OpenAI’s Jukebox, trained from scratch, StyleGAN2 for visuals.” “The mixing and DJ ‘transitions’ are done with a MIR [Music Information Retrieval] ‘automatic mixing’ Python script,” he says.

But it’s worthwhile also understanding his artistic intention:

OÍR stems from my ongoing research on AI, sound synthesis, and electronic music.

Since starting my adventure into Deep Learning systems for music a couple of years ago, I’ve asked myself: is Deep Learning (AI) a tool or a medium? Right now I’ve come to the conclusion that it can be both, and this is exactly what I’m trying to explore with this project.

When we talk about Deep Learning as a medium, there are three particular processes engaged when working with generative systems: curation of the data, training and monitoring of the algorithm as it ‘learns,’ and generating new synthetic media. Rinse and repeat.

There are a couple of aspects of this process that interest me. Each time you train the AI algorithm, its weights and biases – what it has ‘learned’ – change over time, depending on the data you are having it learn from. The algorithm generates patterns present in these vast amounts of images and music, as is the case with OÍR, and these can keep changing as the ‘learning’ process continues.

So this quality of a constantly changing and morphing generative algorithm is exactly what I want to explore with OÍR, and what better way to do it than through electronic dance music and techno culture.

I chose a channel as the canvas for the first episode, or EPOCH, of OÍR – a selection from the archive of HÖR Berlin – because I feel this channel has done an amazing job of generating a collective culture, specifically within techno and electronic music. I wanted to explore which patterns are emerging from this culture – which patterns can be synthesized, both visual and sonic, from all these sets and different approximations of techno, over 1,400+ hours and counting.

My desire with this art project is not to automate or replace DJs or electronic musicians in any way, but rather to have OÍR be a sort of ‘live generative archive’, as I did before with my album 𝕺𝖐𝖆𝖈𝖍𝖎𝖍𝖚𝖆𝖑𝖎 in relation to the Mexican 3ball electronic music genre, of certain cultural moments in electronic music which increasingly exist on big tech platforms and the internet. By the way, OÍR means “to listen” in Spanish.

MUBERT AI MUSIC

INTRODUCTION TO MUBERT AI GENERATIVE MUSIC
