Machine Creativity

Literally every tab I have open on my screen in the quest to figure out how to write generative music… Part of ongoing research.

LOSCIL ADRIFT

http://loscil.ca/
http://loscil.ca/adrift/

HOW GENERATIVE MUSIC WORKS: A PERSPECTIVE

This presentation is about making music
by designing systems that make music.

https://teropa.info/loop/#/title

For his 1965 composition “It’s Gonna Rain”, Steve Reich designed a generative mechanical system.

It was a system of two tape recorders.

Reich had made field recordings of speech on tape.

He made two short identical loops from his recordings. One for each tape recorder.

The tape recorders were set to slightly different speeds.
One would play a little faster than the other.

This started a phasing process. A continuously changing soundscape was generated from the short loops.
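The phasing mechanism is easy to sketch in code. Below is a purely illustrative Python toy (Reich's system was mechanical tape; the loop contents, speeds, and function names here are all invented for this sketch), showing how two copies of the same short loop, played at slightly different rates, drift into ever-changing combinations:

```python
# Toy model of Reich-style tape phasing: two "decks" play the same short
# loop, but deck B runs about 2% faster, so the loops slowly slide apart.
# Illustrative only; the names and numbers are invented for this sketch.

LOOP = ["it's", "gonna", "rain"]   # one short speech loop
SPEED_A = 1.00                     # playback rate of deck A
SPEED_B = 1.02                     # deck B runs slightly faster

def deck_position(speed, t, loop_len):
    """Index into the loop for a deck running at `speed` after `t` ticks."""
    return int(speed * t) % loop_len

def phase_pattern(ticks):
    """The pair of syllables heard from both decks at each tick."""
    pairs = []
    for t in range(ticks):
        a = LOOP[deck_position(SPEED_A, t, len(LOOP))]
        b = LOOP[deck_position(SPEED_B, t, len(LOOP))]
        pairs.append((a, b))
    return pairs

pattern = phase_pattern(100)
# At t = 0 both decks coincide; as t grows, deck B pulls ahead and the
# two loops line up in continuously shifting combinations.
```

Even with only three syllables, the pairs cycle through every possible offset before realigning, which is the continuously changing soundscape described above.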

Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.

Steve Reich, Music as a Gradual Process, 1968.

On And On And On: A Guide to Generative Electronic Music

By Samuel Tornow · July 19, 2019

https://daily.bandcamp.com/lists/generative-music-guide

NICE IMAGES FOR GENERATIVE MUSIC

Some very useful tutorials (including an Ableton Live project with all the settings – just need to upload bridge samples) and one slightly concerning automatic music generator which seems to be built around avoiding paying copyright to artists. So not down with that, but curious about what it does. Also the generative.fm station of endless compositions, which is quite soothing. Actually, we’re going to start with some random plugins and synths.

NODAL

https://en.wikipedia.org/wiki/Nodal_(software)

Nodal is generative software for composing music and interactive real-time improvisation, and a musical tool for experimentation and play. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you’ve used before. You can play sounds using Nodal’s built-in synthesiser or any MIDI-compatible hardware or software instrument.

Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.
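The node-and-edge idea maps naturally onto a weighted graph walk. Here is a minimal Python sketch of one reading of that description (this is not Nodal's actual code; the network, note assignments, and function names are invented for illustration):

```python
# Toy version of Nodal's core idea: nodes hold notes, edges hold travel
# times, and a virtual player walks the graph emitting (time, note) events.
# Illustrative only; network, notes, and names are made up for this sketch.

import random

# node -> list of (next_node, travel_time_in_beats)
NETWORK = {
    "A": [("B", 1.0), ("C", 0.5)],
    "B": [("C", 1.0)],
    "C": [("A", 2.0)],
}
NOTES = {"A": 60, "B": 64, "C": 67}  # one MIDI note per node

def play(start, steps, seed=0):
    """Traverse the network, returning (onset_time, midi_note) events."""
    rng = random.Random(seed)
    events, node, clock = [], start, 0.0
    for _ in range(steps):
        events.append((clock, NOTES[node]))
        node, travel = rng.choice(NETWORK[node])
        clock += travel   # the edge length sets the gap to the next note
    return events

events = play("A", 8)
```

A virtual player is just a cursor walking the graph; running several `play()` calls from different start nodes would give the multiple simultaneous players the description mentions.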

Nodal is compatible with DAW software such as Ableton Live, Logic Studio, Digital Performer and Garage Band. It can transmit and receive MIDI sync. You can edit a network using the mouse and keyboard, and optionally a MIDI keyboard. Nodal recognises and sends notes, sync, continuous controller and pitch bend information.

Nodal is developed at SensiLab, Monash University.

Melda Production VST Convolution

https://www.meldaproduction.com/MFreeFXBundle

Modular Synth Sequencer

https://www.soundmanufacture.net/modularsequencer/

As used in this tutorial

Generative Modular Music

How to Create Amazing Generative Music using Ableton Live and Max Devices – Tutorial

Fantastic devices and how to use them to create surprising generative jams with hardware or soft synths, inside Ableton Live. I can just listen to this for hours! And it can be your best solution if you need to create a lot of musical content in a short span of time. The inspiration for this jam/tutorial came when I started using the Modular Sequencer, by Sound Manufacture. I find it a smart and useful device, as it easily allows you to do things that would otherwise require tons of other devices, and a lot of time to make them work in Ableton Live. Plus it has its own unique features, a lot of them! And it can also communicate with other Sound Manufacture devices, to considerably expand the range of options. Here you can find the devices shown in this video. Modular Sequencer: https://www.soundmanufacture.net/modu… Chord-o-mat: https://www.soundmanufacture.net/chor…

Here is the Template Project download link, you’ll find the project in the zip file contained in the Album download. Just set price as ZERO and download for free:

https://belibat.bandcamp.com/album/generative-jam-tutorial-ableton-live-template-project

Generative Music Techniques With Ableton Live

Wave JUNCTION Synth

https://sonicstate.com/shop/wj/

Another tutorial

http://angstromnoises.com/tutorial-generative-music-ableton-live/

Generating Random Music In Ableton Live

SYNTHOPIA TECHNIQUES FOR GENERATIVE MUSIC

GENERATIVE FM

https://play.generative.fm/browse

Making Generative Music in the Browser

My personal process

Alex Bainter · Mar 26, 2019 · 9 min read

Creative Commons License

For anyone and everyone

Music from Generative.fm is officially licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Generally, you are welcome to use or modify the music for any purpose, including commercially or publicly, provided you give appropriate credit.

It’s as easy as adding something like this to your work:

Or even just:

See “How to give attribution” from Creative Commons. Please pretend to read the official licensing terms before using music from this service.

You can show your appreciation by making a donation.

Other Licensing Arrangements

For special cases and people with fancy pants

If you prefer not to use the Creative Commons licensing, send an email with your offer to alex@alexbainter.com.

INTRODUCTION TO GENERATIVE MUSIC

SOUND DESIGN FOR AMBIENT MUSIC

https://www.soundonsound.com/techniques/sound-design-ambient-music

GENERATIVE MUSIC WIKIPEDIA

https://en.wikipedia.org/wiki/Generative_music

THIS IS NOT AN AI DJ

Okay, let’s get this out of the way – no one should describe this as an “AI DJ.” There is no autonomous machine intelligence acting as a DJ. On the contrary, the mushy digital mash-up textures on offer here are unique, fresh, and distinctively sound like something that came from Moisés. Part analysis, part generative artwork, part creative remix, OÍR is simultaneously the intentional work of an artist and a machine reflection of a wide variety of streamed DJ sets.

Technically speaking, says Moisés, “the system is a compendium of OpenAI’s Jukebox, trained from scratch, StyleGAN2 for visuals.” “The mixing and DJ ‘transitions’ are done with a MIR [Music Information Retrieval] ‘automatic mixing’ Python script,” he says.

But it’s worthwhile also understanding his artistic intention:

OÍR stems from my ongoing research on AI, sound synthesis, and electronic music.

Since starting my adventure into Deep Learning systems for music a couple of years ago, I’ve asked myself: is Deep Learning (AI) a tool or a medium? Right now I’ve come to the conclusion that it can be both, and this is exactly what I’m trying to explore with this project.

When we talk about Deep Learning as a medium, there are three particular processes engaged when working with generative systems: curation of the data, training and monitoring of the algorithm as it ‘learns,’ and generating new synthetic media. Rinse and repeat.

There are a couple of aspects of this process that interest me. Each time you train the AI algorithm, its weights and biases, or what it has ‘learned,’ change over time, depending on the data you have it learn from. The algorithm generates patterns present in these vast amounts of images and music, as is the case with OÍR, and these can keep changing as the ‘learning’ process continues.

So this quality of a constantly changing and morphing generative algorithm is exactly what I want to explore with OÍR, and what better way to do it than through electronic dance music and techno culture.

I chose a channel as the canvas for the first episode, or EPOCH, of OÍR, with a selection from the archive of HÖR Berlin, because I feel this channel has done an amazing job of generating a collective culture, specifically within techno and electronic music. I wanted to explore which patterns are emerging from this culture – which patterns can be synthesized, both visual and sonic, from all these sets and different approximations of techno, over 1,400+ hours and counting.

My desire with this art project is not to automate or replace DJs or electronic musicians in any way, but rather to have OÍR be a sort of ‘live generative archive’ of certain cultural moments in electronic music which increasingly exist on big tech platforms and the internet, as I did before with my album 𝕺𝖐𝖆𝖈𝖍𝖎𝖍𝖚𝖆𝖑𝖎 in relation to the Mexican 3ball electronic music genre. By the way, OÍR means “to listen” in Spanish.

MUBERT AI MUSIC

These are some of the resources I will be exploring in the course of my research:

AI MUSIC, DATA SONIFICATION, ALGORITHMIC COMPOSITION +

AI Music, Algorithmic Composition, Data Sonification, Machine Creativity

  • Tutorials, guides and how-to courses
  • Examples of Algorithmic Composition
  • Projects using Machine learning + creativity / AI Music

FUTURE LEARN: CREATIVE MACHINE LEARNING

https://www.futurelearn.com/courses/apply-creative-machine-learning

MIMIC

Make music and art with machine intelligence

MIMIC is a web platform for the artistic exploration of musical machine learning and machine listening. We have designed this collaborative platform as an interactive online coding environment, engineered to bring new technologies in AI and signal processing to artists, composers, musicians and performers all over the world.

The MIMIC platform has a built-in audio engine, machine learning and machine listening tools that makes it easy for creative coders to get started using these techniques in their own artistic projects. The platform also includes various examples of how to integrate external machine learning systems for sound, music and art making. These examples can be forked and further developed by the users of the platform.

https://mimicproject.com/about

puredata tutorials

https://puredata.info/docs/tutorials

Pd Tutorials and HOWTOs?

Here is a collection of howtos and tutorials in many different languages covering a number of different topics. The following topics have been suggested to be merged into the list below: basic audio, audio synths, audio filters, video effects, video synths, 3D graphics, interfacing with the physical world (HID, Arduino, etc.), network programming.

Programming electronic music in pd

by Johannes Kreidler http://www.pd-tutorial.com/

Algorithmic Composition RealTime Environment

PD library collection

github repository pd-acre https://github.com/iem-projects/pd-acre

DATA SONIFICATION TOOLS: SODALIB

SodaLib: a data sonification framework for creative coding environments Agoston Nagy

SodaLib

In the ever-growing area of data-driven interfaces (embedded systems, social activities), it becomes more important to have effective methods to analyze complex data sets: observing them from different perspectives, understanding their features and dimensions, and accessing, interpreting and mapping them in meaningful ways. With SodaLib, it is easy to map live data (sensors, web APIs, etc.), large prerecorded datasets (tables, logs, Excel files), or even unusual sources (images, 3D environments) to recognizable audible patterns through a set of sonification methods, including parameter mapping and event-based sonification.
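Of the method families named above, event-based sonification is the easiest to show in miniature. Here is a hypothetical Python sketch (not SodaLib's actual API; the function, threshold, and data are invented for illustration), in the spirit of a Geiger counter: emit a discrete note event only when the data crosses a threshold.

```python
# Event-based sonification sketch: instead of turning every sample into
# sound, trigger a note event only on an upward threshold crossing,
# like a Geiger counter clicking. Illustrative only, not SodaLib code.

def threshold_events(data, threshold):
    """Emit (index, value) events wherever data rises above threshold."""
    events = []
    for i in range(1, len(data)):
        if data[i - 1] <= threshold < data[i]:
            events.append((i, data[i]))
    return events

readings = [0.2, 0.9, 0.4, 1.3, 1.1, 0.3, 1.6]
events = threshold_events(readings, 0.8)
# Three upward crossings -> three note triggers.
```

Each event would then be handed to a synth voice; the data's burstiness becomes the music's rhythm.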

SodaLib is an open source, cross platform, multipurpose sonification tool for designers, programmers and creative practitioners.

Source: github.com/stc/ofxSodaLib

http://www.binaura.net/stc/wrx/text/Sodalib-Paper-PdCon16.pdf

Algorithmic composer: Sonification

http://www.algorithmiccomposer.com/2016/01/sonification-algorithmic-composition_12.html

Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring (ECG). When creating sonification-based algorithmic compositions, we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.
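The "mapping non-musical data directly to musical parameters" part can be made concrete with a tiny parameter-mapping sketch in Python (illustrative only; the data, pitch range, and function name are invented, not taken from the linked article):

```python
# Parameter-mapping sonification: linearly rescale each data value into a
# MIDI pitch range, so rises and falls in the data become melodic contour.
# Illustrative sketch; names and numbers are invented for this example.

def map_to_pitch(data, low=48, high=84):
    """Linearly map data values onto MIDI pitches low..high."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1          # avoid division by zero on flat data
    return [round(low + (v - lo) / span * (high - low)) for v in data]

temps = [12.1, 13.4, 15.0, 14.2, 11.8]   # e.g. daily temperature readings
pitches = map_to_pitch(temps)
# The lowest reading lands on MIDI 48, the highest on MIDI 84.
```

The same rescaling works for any musical parameter – swap pitch for filter cutoff, note duration, or pan, and the data contour is heard in a different dimension.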

Algorithmic Composition // CATARINA

by Mike Moreno

Youtube Video: https://youtu.be/4ypHU6HKekM
Get the recording here:

A 100% synthesized algorithmic composition made using Pure Data Vanilla 0.49-1. Made for the upcoming Muff Wiggler Discord Collective Album.

Instruments and sounds used are part of my library: pd-mkmr
https://github.com/MikeMorenoAudio/pd-mkmr

Facebook: https://facebook.com/MikeMorenoAudio/
Website: https://mikemorenoaudio.github.io/
GitHub: https://github.com/MikeMorenoAudio
PatchStorage: https://patchstorage.com/author/mianmogra/

Algorithmic Composition // JOI

by Mike Moreno

puredata patch joi

Youtube Video: https://youtu.be/rhuRx7uQCbc
Get the recording here: https://mikemoreno.bandcamp.com/track/just-intonation-algorithmic-composition-in-pure-data-joi

An algorithmic composition made using Pure Data Vanilla 0.47-1

The Visuals were made in GEM using primarily the [scopeXYZ~] and [pix_sig2pix~] objects.

I also relied on heavylib (a library of vanilla abstractions):
https://github.com/enzienaudio/heavylib

Facebook: https://fb.com/MikeMorenoAudio/
Website: https://mikemorenoaudio.github.io/
GitHub: https://github.com/MikeMorenoAudio
PatchStorage: https://patchstorage.com/author/mianmogra/

puredata patch from the algorithmic composer http://www.algorithmiccomposer.com/2016/01/sonification-algorithmic-composition_12.html