Machine Creativity

Graphic Score Examples

Bridge Music Notation

Anthem Score music notation transcribed from bridge sound recordings

AI Generated Bridge Lyrics

Created by

A vision of fate has led astray

Burning the bridge at dusk


An ivory bridge

For love, for art, for peace, for words

For dying, for starting

A concrete bridge over streams

And the bridge is down
A golden bridge
Every bridge that was burned and every bridge that was smashed
Every train that came to town and every plane that hit the ground
On the bridge, you pay our toll
The streets are full of memories

Oh we danced on the edge of a bridge

Oh you left me this way

I dance on the opposite shore

I will be a bridge, be a bridge

So build a bridge to the sun, nobody else to know
Plant the wind beneath the earth, wait to destroy the flood
Wash the sin from our skin, build a bridge high and high
They tell me about the bridge
Cause if you build a bridge between you and I
Just imagine where it will end up

I need to feel a spark inside my soul

I need to rebuild a bridge

You and me we can bridge the vision

bridge the gap

Maybe we can bridge the vision

bridge the vision gap

My fallen angel, my burning bridge

Go on go far away from here

Like the sacred

On our sacred bridge

I had to break my bike in half

I had to crash my car into a bridge

Yeah, you got me walking on the edge of a bridge

Your gonna smile on me, sweet love

We hung fifteen balls on the bridge of the Brooklyn Bridge
Where are you, girl, where are you?
I was on the bridge, on the bridge
Yeah, I was on the bridge when they came and took the bridge
We break our bodies
On this wooden bridge
Build me a bridge or maybe two
If I do you can mend my heart
Danger, darling you make me violent, crazy, I jump off the bridge to tell you
Kinda like the moment you took everything I had
I was on the bridge, on the bridge, on the bridge
Yeah, I was on the bridge when they came and took the bridge
We break our bodies
On this wooden bridge

Generated using


MuseNet – OpenAI Try MuseNet. We’re excited to see how musicians and non-musicians alike will use MuseNet to create new compositions! In simple mode (shown by default), you’ll hear random uncurated samples that we’ve pre-generated. Choose a composer or style, an optional start of a famous piece, and start…


magenta studio

Magenta Studio – Standalone Continue. Continue uses the predictive power of recurrent neural networks (RNN) to generate notes that are likely to follow your drum beat or melody. Give it an input file and it can extend it by up to 32 measures. This can be helpful for adding variation to a drum beat or creating new material for a melodic…

magenta neural synth

Making a Neural Synthesizer

magenta tutorial

Making music with magenta.js Step 1: Making sounds with your browser. Everything in @magenta/music is centered around NoteSequences. This is an abstract representation of a series of notes, each with different pitches, instruments and strike velocities, much like MIDI. For example, this is a NoteSequence that represents “Twinkle Twinkle Little Star”. Try changing the pitches to see how the sound changes!
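Following the shape described in the magenta.js tutorial, a NoteSequence is plain structured data. Here is a minimal sketch of one (pitches are MIDI numbers; the timings are my own illustrative choices):

```javascript
// A NoteSequence is just data: an array of notes plus overall timing info.
// The pitches sketch the opening of "Twinkle Twinkle Little Star"
// (C C G G A A G), with 60 = middle C.
const TWINKLE = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 60, startTime: 0.5, endTime: 1.0 },
    { pitch: 67, startTime: 1.0, endTime: 1.5 },
    { pitch: 67, startTime: 1.5, endTime: 2.0 },
    { pitch: 69, startTime: 2.0, endTime: 2.5 },
    { pitch: 69, startTime: 2.5, endTime: 3.0 },
    { pitch: 67, startTime: 3.0, endTime: 4.0 },
  ],
  totalTime: 4.0,
};
```

In the browser, an object of this shape can be handed to one of @magenta/music’s Player classes for playback, or to a model for continuation.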

Dan Jeffries The musician in the machine

The Musician in the Machine In this article, we’ll look at how we did it. Along the way we’ll listen to some more samples we really loved. Of course, some samples came out great, while some didn’t work as well as we hoped, but overall the project worked

getting started with magenta

Getting Started – Magenta Getting started. Ready to play with Magenta? This page will help you get started making music and art with machine learning, and give you some resources if you want to explore on your own!

magenta long form generative compositions with transformer

Music Transformer: Generating Music with Long-Term Structure Update (9/16/19): Play with Music Transformer in an interactive colab! Generating long pieces of music is a challenging problem, as music contains structure at multiple timescales, from millisecond timings to motifs to phrases to repetition of entire…


openai Jukebox tutorials

for non-engineers

A simple OpenAI Jukebox tutorial for non-engineers II. Overview and limitations. OpenAI uses a supercomputer to train their models and maybe to generate the songs too, and well, unless you also have a supercomputer or at least a very sweet GPU setup, your creativity will be a bit limited. When I started playing with Jukebox, I wanted to create 3-minute songs from scratch, which turned out to be more than Google Colab (even with the pro …


Show notebooks in Drive – Google Colaboratory Colab notebooks allow you to combine executable code and rich text in a single document, along with images, HTML, LaTeX and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them.

colab interacting with jukebox

Show notebooks in Drive – Google Colaboratory Please note: this next upsampling step will take several hours. At the free tier, Google Colab lets you run for 12 hours. As the upsampling is completed, samples will appear in the Files tab (you can access this at the left of the Colab), under “samples” (or whatever is currently…)


MIMIC is a web platform for the artistic exploration of musical machine learning and machine listening. We have designed this collaborative platform as an interactive online coding environment, engineered to bring new technologies in AI and signal processing to artists, composers, musicians and performers all over the world.

The MIMIC platform has a built-in audio engine, machine learning and machine listening tools that make it easy for creative coders to get started using these techniques in their own artistic projects. The platform also includes various examples of how to integrate external machine learning systems for sound, music and art making. These examples can be forked and further developed by the users of the platform.

Over the next three years, we aim to integrate brand new and developing creative systems into this platform so that they can be more easily used by musicians and artists in the creation of entirely new music, sound, and media, enabling people to understand and apply new computational techniques such as Machine Learning in their own creative work.

MIMIC or “Musically Intelligent Machines Interacting Creatively” is a three year AHRC-funded project, run by teams at Goldsmiths College, Durham University and the University of Sussex.

MIMIC Creative AI Platform MIMIC is a web platform for the artistic exploration of musical machine learning, machine listening and creative…


Intelligent Instruments: a funded ERC project – Sonic Writing The European Research Council has awarded me an ERC Consolidator grant for the project Intelligent Instruments: Understanding 21st-Century AI Through Creative Music Technologies. The five-year, 2 million Euro research project will consist of a team of postdocs, doctoral researchers and an instrument designer from the fields of music, computer science and philosophy.


MUBERT© 2016 – 2020, Mubert Inc.
All music broadcast on * domains is generated (created, composed, recorded) by Artificial Intelligence (algorithm, software, program) owned by Mubert® Inc and licensed by Mubert® Inc only for personal use. All rights are reserved. Public reproduction, recording, distribution of this music is prohibited.

How does it work?

Mubert’s unique algorithm creates and streams electronic generative music in real time, based on the samples from our extensive database. Every day new samples are added to the stream to support an endless and seamless flow of one-of-a-kind music.

About Mubert

Mubert is an AI music solution for any business, platform & use case. Mubert delivers worldwide copyright-protected AI-generated music via API. Infinite customization, cost-efficiency & legal compliance can help businesses fix key music industry pain points.
All music is royalty free & cleared for any cases, both for personal & commercial usage. Pricing: $0.01 per minute, or $299 per month for startups / $1,000 per month for large business.

To facilitate their ability to connect with audiences and make a positive global impact, Mubert is launching a new extension that allows users to play unlimited streams of AI-powered music in their shows without any risks of DMCA takedowns and other copyright issues.  

Subscribe with Music for Live Streams to fill your background on YouTube, Twitch, Facebook & 30 other popular services with Chill, Ambient, Trance, and other high-quality music curated by Mubert. Music stations for live streams, compatible with YouTube, Facebook, Twitch & other streaming services.
Premium $4.99 month

License Agreement


Future learn machine learning
amper music compose

AI Composition with Real Instruments | Amper Music Amper’s music is created from scratch using millions of individual samples and thousands of purpose-built instruments. You won’t find our proprietary samples anywhere else.

amper and ai music

Amper Music: ‘AI’s not going to replace your job – it’s going to change your job’ Does this mean Amper is going to use part of that new funding round to build more tools for artists to create with AI? Silverstein is careful not to commit to anything in the near future, saying that as a small and growing company, Amper has to be “very judicious” in how it allocates resources for…

how ai generated music is changing the way hits are made

How AI-generated music is changing the way hits are made Music-making AI software has advanced so far in the past few years that it’s no longer a frightening novelty; it’s a viable tool. For the second episode of The Future of Music, I went to LA to visit the offices of AI platform Amper Music and the home of Taryn Southern, a pop artist who is working with Amper and other AI platforms to co …

amper founder

How technology can democratize music From the printing press to the digital camera, innovation has often democratized the creative arts. In this forward-looking talk, music producer Drew Silverstein demos a new software that allows anyone to create professional-grade music without putting human musicians out of work.

amper faq
enterprise Ai

A.I. Songwriting Has Arrived. Don’t Panic Welcome to the next great debate about the legitimacy of…

Ampios Future

A computer-generated soundtrack for your day

Amper about


How does licensing work for Score’s music?

Track licenses are available for purchase at a few different tiers based on the intended usage. All licenses (regardless of tier) are royalty-free, permit global distribution of content, and are valid in perpetuity.

  • Personal License — $29 (This tier is meant for your personal or educational project needs. The licensing does not cover ad spend or promotions. For example, a video made as a hobby.)
  • Enterprise Basic License — $74 (This tier is meant for internal or external professional projects and cannot be supported with an ad spend. For example, an internal training video that will be shared within your company only, or public tutorial content for the latest feature release.)
  • Branded Content License — $399 (This tier is meant for professional projects that will be posted on your own social channel or website and can be supported with an ad spend. For example, a YouTube video on your channel.)
  • Online Ad License — $1,199 (This tier is meant for professional projects that can be both used in ads and supported with an ad spend. For example, a video that will run as a YouTube pre-roll or Instagram ad.)
  • All Media/Multimedia — Request a quote (This tier includes a combination of the above plus additional licensing needs. Please contact us so that we can evaluate your use-case and provide a quote.)

What does Amper do?

Amper is an AI music company. We develop enterprise products to help people make music using our AI Composer technology. Today we offer two products—our music creation platform Score, and an API that allows companies to integrate our music composition capabilities into their own tools.
What is Score?

Score is a tool for content creators to quickly make music to accompany videos, podcasts, games, and other types of content using our AI Composer. Score is designed to significantly reduce the time it takes to source music and adapt it to fit a particular project.
Who is Score intended for?

Score was built for businesses who create a lot of content and are looking for ways to source high quality music more efficiently. Video editors, podcast producers, and video game designers can all benefit from Score’s capabilities.
How is Score different from stock music sites?

Each track Score outputs is composed by our AI in real-time and is always custom to your project. Collaborating with Score allows you to tailor a broad variety of your track’s musical attributes, including length, structure, genre, mood, instrumentation, and tempo.

Additionally, all the sounds you hear in Score are samples of real instruments recorded at Amper’s Los Angeles studio. Unlike stock music, which is often made using widely available sample “packs”, Score’s sounds are proprietary. This makes Amper’s music truly unique.


The Best Free Live Streaming Software on Windows and Mac | Streamlabs The most popular streaming platform for Twitch, YouTube and Facebook. Cloud-based and used by 70% of Twitch. Grow with Streamlabs Open Broadcast Software (OBS), alerts, 1000+ overlays, analytics, chatbot, tipping, merch and…

Music Ally is a knowledge company for the global music business

Music Ally Is A Knowledge Company NEW! Learn. Music Ally has launched a brand new Learning Hub for the music industry, with more than 30 modules of certified video content at launch, combined with relevant supporting materials from the rest of Music Ally’s information and…

Every tab open on my screen in the quest to figure out how to write generative music… Some very useful tutorials (including an Ableton Live project with all the settings – just need to upload bridge samples) and one slightly concerning automatic music generator which seems to be based on avoiding paying copyright to artists. So not down with that, but curious about what it does. Also the station of endless compositions, which I find quite soothing, some random plugins and synths.

Starting with a few examples from the beautiful composition by Loscil based on ghost ships, to an excellent visual presentation taking you through the genesis of generative music, a few nice images, guides and software possibilities to explore.



This presentation is about making music
by designing systems that make music.

For his 1965 composition “It’s Gonna Rain”, Steve Reich designed a generative mechanical system.

It was a system of two tape recorders.

Reich had made field recordings of speech on tape.

He made two short identical loops from his recordings. One for each tape recorder.

The tape recorders were set to slightly different speeds.
One would play a little faster than the other.

This started a phasing process. A continuously changing soundscape was generated from the short loops.

Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.

Steve Reich, Music as a Gradual Process, 1968.

On And On And On: A Guide to Generative Electronic Music

By Samuel Tornow · July 19, 2019



Nodal is generative software for composing music, interactive real-time improvisation, and a musical tool for experimentation and play. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you’ve used before. You can play sounds using Nodal’s built-in synthesiser or any MIDI compatible hardware or software instrument.

Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.
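Nodal’s engine itself isn’t shown here, but the node/edge idea can be sketched in a few lines: a virtual player walks a user-defined network, playing each node’s pitch, with edge length setting the travel time to the next event. All names and the tiny example network below are my own illustration, not Nodal’s format:

```javascript
// Sketch of the node/edge idea (not Nodal's actual engine): a player walks a
// network, emitting a (time, pitch) event at each node it visits.
function traverse(network, startNode, steps) {
  const events = [];
  let node = startNode;
  let time = 0;
  for (let i = 0; i < steps; i++) {
    events.push({ time, pitch: network[node].pitch });
    const edges = network[node].edges;      // outgoing connections
    if (edges.length === 0) break;          // dead end: player stops
    const edge = edges[i % edges.length];   // deterministic pick, for clarity
    time += edge.length;                    // a longer edge means a longer gap
    node = edge.to;
  }
  return events;
}

// A tiny three-node cycle: C -> E -> G -> back to C, with varying edge lengths.
const net = {
  C: { pitch: 60, edges: [{ to: "E", length: 0.5 }] },
  E: { pitch: 64, edges: [{ to: "G", length: 0.25 }] },
  G: { pitch: 67, edges: [{ to: "C", length: 1.0 }] },
};
```

Running `traverse(net, "C", 4)` yields four timed note events whose rhythm comes entirely from the edge lengths, which is the core of Nodal’s visual approach.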

Nodal is compatible with DAW software such as Ableton Live, Logic Studio, Digital Performer and Garage Band. It can transmit and receive MIDI sync. You can edit a network using the mouse and keyboard, and optionally a MIDI keyboard. Nodal recognises and sends notes, sync, continuous controller and pitch bend information.

Nodal is developed at SensiLab, Monash University.

Melda Production VST Convolution

Modular Synth Sequencer

Generative Modular Music

How to Create Amazing Generative Music using Ableton Live and Max Devices – Tutorial

Fantastic devices and how to use them… to create surprising generative jams with hardware or soft synths, inside Ableton Live. I can just listen to this for hours! And it can be your best solution if you need to create a lot of musical content in a short span of time. The inspiration for this jam/tutorial came when I started using the Modular Sequencer, by Sound Manufacture. I find it a smart and useful device, as it easily allows you to do things that would otherwise require tons of other devices, and a lot of time, to make work in Ableton Live. Plus it has its own unique features, a lot of them! And it can also communicate with other Sound Manufacture devices, to considerably expand the range of options. Here you can find the devices shown in this video. Modular Sequencer:…


Generative Music Techniques With Ableton Live

Here is the Template Project download link; you’ll find the project in the zip file contained in the album download. Just set the price as ZERO and download for free.

As shown in the video, I prepared a streamlined version of my jam, with just stock and free devices. In this Ableton Template Project folder you can find:

  • An Ableton Live 10 Project containing everything
  • Instrument Racks with MIDI effects to generate random notes and chords
  • A Drum Rack with MIDI effects to generate random rhythms, plus a kit of custom analog drum samples made by me, using my analog synths
  • All the tracks used in my jams, featuring all the modulation I created, but without the third-party devices and effects


Generating Random Music In Ableton Live

Steve Angstrom: Generative Music Tutorial

Generative music in Live

Here is a brief tutorial on making generative music with Live. This tutorial first appeared on the Ableton Live Forum in 2007, so the examples were made using Live 6.07 and will work with any version of Live later than that.

What is generative music?

Generative music is where you provide logical or programmatic ‘seeds’ and the computer grows you some music based on the parameters you set out. Brian Eno is probably the most famous practitioner.

Why make generative music, or get Live to make IT for you?

Generative music is a different beast from making a track of your own; it is more like planting a garden. In fact, a generative piece is like a glorified wind chime, so we could equally ask ourselves “why do people have wind chimes rather than stand in the garden hitting aluminium themselves?” The answer would be the same: the sounds which result may not be “music”, but they can be good background noise, and in that way quite beautiful and surprisingly interesting as ambience. Furthermore, the underlying generation can be tinkered with to deliver a wide range of what appears to be expression. A generative piece will sustain a low level of interest for hours!

Live is quite a good environment for creating generative music, and I have two methods for doing so: an audio-based method and a MIDI method.

I will focus on the more MIDI-oriented method here.
There are limitations to how far you can go with Live and generative music, but what you can achieve is entertaining.

How it is achieved in Live

To make generative music we need to make Live play or do something whenever a condition is met; we get flexibility by giving the program some freedom. Instead of saying “EVERY time a bar starts, play a C minor chord”, we want variation. An example might be “Sometimes play a chord (from a selection of chords) on either the first or third beat, and if you do, then perhaps play one of these related chords after it, or perhaps think about playing this tune instead”.

So now we have a random event which is constrained by a limited set of outcomes, and it sounds passably like music.
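The rule above can be sketched as a tiny constrained-random generator. The chord pool and the probabilities here are illustrative assumptions of my own, not values from the tutorial:

```javascript
// "Sometimes play a chord (from a selection) on either the first or third beat":
// a random value constrained to a limited set of outcomes.
const CHORDS = [
  [60, 63, 67], // C minor (MIDI pitches; pool is illustrative)
  [58, 62, 65], // Bb major
  [55, 60, 63], // Cm, first inversion
];

function nextBar(random = Math.random) {
  const r = random();
  if (r < 0.5) return null;                      // half the time: play nothing
  const chord = CHORDS[Math.floor(random() * CHORDS.length)];
  const beat = random() < 0.5 ? 1 : 3;           // first or third beat
  return { beat, chord };
}
```

Each call decides whether a bar gets a chord at all, which chord from the pool, and on which beat, so the output varies while always staying inside a musically sensible set of outcomes.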

Cascading The Variations

I set a ‘condition’ with two Velocity plugins, which shows how we can set two different outcomes using ‘if–else’. Now imagine dividing the random values up into many zones; this way you can create little thematic areas.
You can start to go further down the fractal tree: each conditional zone can have a new random value generated to make new notes for itself. Each ‘conditional zone’ can be a different part of your song: the ‘riff’, the ‘bassline’, the ‘chords’. Each of them can watch a zone and do some more complicated ‘riff’- or ‘chord’-related actions any time that rack is triggered by the main condition.



The video takes inspiration from Brian Eno’s concept of Generative Music. Eno has been creating systems for generating music since the ’70s.

While he initially applied this approach to ambient music, on albums like Music For Airports, his later work has explored creating systems for generating other types of music, too.

This video looks at exploring this concept, using a variety of hardware and software systems, ranging from iOS apps to desktop DAWs and modular synthesizers.

By Synthhead


Play eternally evolving ambient music on

How to Make Generative Music in the Browser

My personal process by Alex Bainter on MEDIUM

After making generative music systems on for the better part of a year now, I’ve received numerous requests for an explanation of how I create the systems featured on the site. Rather than replying to everyone individually, I thought it would be best to explain my process here for anyone interested.

Web Audio API

The Web Audio API is a relatively new browser API which is well supported. It enables web developers to play, synthesize, control, process, and record audio in the browser. As you can probably guess, this is the linchpin technology I use to create browser-based generative music systems. Boris Smus wrote an excellent, short book on the subject titled Web Audio API: Advanced Sound for Games and Interactive Apps which I recommend.


Tone.js is a JavaScript framework for using the Web Audio API. It offers an abstraction layer on top of the Web Audio API which should be familiar to musicians and producers, with a vast array of synthesizers, effects, and filters, as well as related utility functions for things like converting scientifically notated pitches to their frequencies and back. Additionally, it greatly simplifies access to an accurate timing system. While this library is not strictly necessary for making generative music systems in the browser, I’ve never built one without it. It’s very rare that I find myself interacting directly with the Web Audio API rather than using this fantastic library.

I highly recommend “JavaScript Systems Music” by Tero Parviainen as an introduction to creating music in the browser with Tone.js and the Web Audio API.


It’s certainly possible to synthesize sounds with Tone.js and the Web Audio API, but it’s not something I’ve explored much (read: I suck at it). Instead, I prefer to use recorded audio samples which I play and manipulate.

There are plenty of libraries full of free or cheap audio samples out there, but the most significant ones I’ve used at the time of writing are the Community Edition of Versilian Studios Chamber Orchestra 2, the Versilian Community Sample Library, and the Sonatina Symphonic Orchestra. The generosity of the providers of these and other free libraries inspires me to release my work for free as well.

In addition to using sample libraries, sometimes I record my own audio samples for use on the site. I record with a Rode NT1-A microphone or direct from my Line 6 POD HD500X into a Focusrite Scarlett 2i4. This is all relatively cheap gear which I purchased used. Occasionally when I record, I reconstruct my “recording booth”, which I designed and made out of PVC pipe and movers’ blankets to dampen sound. Though, I usually can’t be bothered.


tonal is another JavaScript library providing utility functions related to music theory. While not every piece requires this library, it’s invaluable for the ones that do. The library contains all sorts of helpful functions which do things like returning all the notes or intervals in a given chord or scale, inverting chords, transposing notes and intervals up or down a given amount of semitones, and so much more.
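To give a feel for the kind of utility such a library provides, here is a self-contained sketch of semitone transposition over scientific pitch names. To be clear, this is NOT tonal’s actual API (tonal works with interval names and handles enharmonics properly); it is a simplified, sharps-only illustration:

```javascript
// Simplified pitch-name transposition (sharps only, no enharmonic spelling).
const NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

// "C4" -> 60, "A4" -> 69 (MIDI convention: middle C = C4 = 60).
function toMidi(name) {
  const [, letter, octave] = name.match(/^([A-G]#?)(-?\d+)$/);
  return (Number(octave) + 1) * 12 + NAMES.indexOf(letter);
}

// Transpose a pitch name up (or down) by a number of semitones.
function transpose(name, semitones) {
  const midi = toMidi(name) + semitones;
  return NAMES[midi % 12] + (Math.floor(midi / 12) - 1);
}
```

For real projects the library is worth using directly: it also covers chords, scales, inversions and proper enharmonic spelling, which this sketch deliberately ignores.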

Creative Commons License

For anyone and everyone

Music from is officially licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Generally, you are welcome to use or modify the music for any purpose, including commercially or publicly, provided you give appropriate credit.

It’s as easy as adding something like this to your work:

Or even just:

See “How to give attribution” from Creative Commons. Please pretend to read the official licensing terms before using music from this service.

You can show your appreciation by making a donation.

Other Licensing Arrangements

For special cases and people with fancy pants

If you prefer not to use the Creative Commons licensing, send an email with your offer to




Okay, let’s get this out of the way – no one should describe this as an “AI DJ.” There is no autonomous machine intelligence acting as a DJ. On the contrary, the mushy digital mash-up textures on offer here are unique, fresh, and distinctively sound like something that came from Moisés. Part analysis, part generative artwork, part creative remix, OÍR is simultaneously the intentional work of an artist and a machine reflection of a wide variety of streamed DJ sets.

Technically speaking, says Moisés, “the system is a compendium of OpenAI’s Jukebox, trained from scratch, StyleGAN2 for visuals.” “The mixing and DJ ‘transitions’ are done with a MIR [Music Information Retrieval] ‘automatic mixing’ Python script,” he says.

But it’s worthwhile also understanding his artistic intention:

OÍR stems from my ongoing research on AI, sound synthesis, and electronic music.

Since starting my adventure into Deep Learning systems for music a couple of years ago, I’ve asked myself: is Deep Learning (AI) a tool or a medium?
Right now I’ve come to the conclusion that it can be both, and this is exactly what I’m trying to explore with this project.

When we talk about Deep Learning as medium, there are three particular processes engaged when working with generative systems: curation of the data, training and monitoring of the algorithm as it ‘learns,’ and generating new synthetic media. Rinse and repeat.

There are a couple of aspects of this process that interest me. Each time you train the AI algorithm, its weights and biases (what it has ‘learned’) change over time, depending on the data you have it learn from. The algorithm generates patterns present in these vast amounts of images and music, as is the case with OÍR, and these can keep changing as the ‘learning’ process continues.

So this quality of a constantly changing and morphing generative algorithm is exactly what I want to explore with OÍR, and what better way to do it than through electronic dance music and techno culture.

I chose a channel as the canvas for the first episode, or EPOCH, of OÍR, with a selection from the archive of HÖR Berlin, because I feel this channel has done an amazing job of generating a collective culture, specifically within techno and electronic music. I wanted to explore which patterns are emerging from this culture – which patterns can be synthesized, both visual and sonic, from all these sets and different approximations of techno, over 1,400+ hours and counting.

My desire with this art project is not to automate or replace DJs or electronic musicians in any way, but rather to have OÍR be a sort of ‘live generative archive’ (as I did before with my album 𝕺𝖐𝖆𝖈𝖍𝖎𝖍𝖚𝖆𝖑𝖎, in relation to the Mexican 3ball electronic music genre) of certain cultural moments in electronic music which increasingly exist on big tech platforms and the internet. By the way, OÍR means “to listen” in Spanish.



These are some of the resources I will be exploring in the course of my research:


AI Music, Algorithmic Composition, Data Sonification, Machine Creativity

  • Tutorials, guides and how-to courses
  • Examples of Algorithmic Composition
  • Projects using Machine learning + creativity / AI Music



Make music and art with machine intelligence


puredata tutorials

Pd Tutorials and HOWTOs?

Here is a collection of howtos and tutorials in many different languages, covering a number of different topics. The following topics have been suggested for merging into the list below: basic audio, audio synths, audio filters, video effects, video synths, 3D graphics, interfacing with the physical world (HID, Arduino, etc.), network programming.

Programming electronic music in pd

by Johannes Kreidler

Algorithmic Composition RealTime Environment

PD library collection

github repository pd-acre


SodaLib: a data sonification framework for creative coding environments Agoston Nagy


In the ever-growing area of data-driven interfaces (embedded systems, social activities), it becomes more important to have effective methods to analyze complex data sets, observing them from different perspectives, understanding their features and dimensions, and accessing, interpreting and mapping them in meaningful ways. With SodaLib, it is easy to map live data (sensors, web APIs, etc.), large prerecorded datasets (tables, logs, Excel files), or even unusual sources (images, 3D environments) to recognizable audible patterns through a set of sonification methods, including parameter mapping and event-based sonification.

SodaLib is an open source, cross platform, multipurpose sonification tool for designers, programmers and creative practitioners.


Algorithmic composer: Sonification

Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring [ECG]. When creating sonification-based algorithmic compositions, we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.
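Parameter mapping, the simplest of these techniques, can be sketched in a few lines: scale each data point linearly onto a pitch range, so the shape of the data becomes a melodic contour. The pitch range and rounding below are my own illustrative choices:

```javascript
// Parameter-mapping sonification sketch: linearly map each data value onto a
// MIDI pitch range, so rising data produces a rising melody.
function sonify(data, lowPitch = 48, highPitch = 84) {
  const min = Math.min(...data);
  const max = Math.max(...data);
  const span = max - min || 1; // avoid division by zero for constant data
  return data.map((v) =>
    Math.round(lowPitch + ((v - min) / span) * (highPitch - lowPitch))
  );
}
```

For example, `sonify([0, 5, 10])` maps the lowest value to MIDI 48 and the highest to 84; the resulting pitch list can then be played back by any synth or MIDI instrument.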

Algorithmic Composition // CATARINA

by Mike Moreno

Youtube Video:
Get the recording here:

A 100% synthesized algorithmic composition made using Pure Data Vanilla 0.49-1. Made for the upcoming Muff Wiggler Discord Collective Album.

Instruments and sounds used are part of my library: pd-mkmr


Algorithmic Composition // JOI

by Mike Moreno

puredata patch joi

Youtube Video:
Get the recording here:

An algorithmic composition made using Pure Data Vanilla 0.47-1

The Visuals were made in GEM using primarily the [scopeXYZ~] and [pix_sig2pix~] objects.

I also relied on heavylib (a library of vanilla abstractions):


puredata patch from the algorithmic composer