Evolvable Sound

I posted previously that I’m working on “evolving” sound using a genetic algorithm and Sonic Pi. I now have something to share. Though I still consider the project a work in progress, I was able to put something together for a show that happened a couple of weeks ago in St. Louis called O.N.G.2 — basically a night of experimental art, tech, and music.

Participants at the show evolved 32 generations of sounds by listening to over 150 individual sounds. The top sounds of each generation were combined to form this composition:
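The evolution loop can be sketched roughly like this — a hypothetical, simplified version of the process described above, where each “sound” is a set of synth parameters, listeners rate what they hear, and the top-rated sounds of each generation breed the next one. All parameter names and constants here are illustrative, not taken from the actual project:

```ruby
# Illustrative parameter space for a sound (names are made up).
PARAM_RANGES = {
  cutoff:  (40.0..130.0),
  attack:  (0.0..1.0),
  release: (0.1..3.0)
}

# A random individual: one value per parameter, drawn from its range.
def random_sound
  PARAM_RANGES.map { |k, r| [k, rand(r)] }.to_h
end

# Crossover: take each parameter from one parent at random.
def crossover(a, b)
  a.map { |k, v| [k, rand < 0.5 ? v : b[k]] }.to_h
end

# Mutation: occasionally re-roll a parameter within its range.
def mutate(sound, rate: 0.1)
  sound.map do |k, v|
    [k, rand < rate ? rand(PARAM_RANGES[k]) : v]
  end.to_h
end

# Keep the top-rated sounds and fill the rest of the generation
# with mutated offspring of those elites.
def next_generation(population, ratings, size: 8, keep: 3)
  elite = population.sort_by { |s| -ratings[s] }.first(keep)
  offspring = (size - keep).times.map do
    mutate(crossover(elite.sample, elite.sample))
  end
  elite + offspring
end
```

Running this for 32 generations and playing each individual through Sonic Pi (with listeners supplying the ratings) is the general shape of what happened at the show.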

Here’s what the project looked like: https://artifaq.io/artwork/evolvable-sound

I had it running on a Raspberry Pi. Here’s the code.


Here’s the Sonic Pi code that was generated to form the composition. It’s probably not the most elegant code you’ve seen… Today I’m going to work on passing more optional args through to the samples.


This is pretty cool, and I was also thinking about it from a Bayesian perspective: usually we want some variability, but it’s not always easy to specify. What if we could provide a “prior” on the variability and then give feedback to “condition” it? I think it could be implemented as a sort of feedback loop through OSC, so the iterations could happen fairly quickly, even live.

I was thinking of starting with just parameters of a synth but later chords or percussion patterns.
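One minimal way to sketch that idea: keep a Gaussian “belief” (mean, standard deviation) per synth parameter, sample values from it to play over OSC, and nudge the belief toward values the listener scores well while tightening the spread. This is a heuristic update, not a full Bayesian treatment, and every name and constant here is an assumption for illustration:

```ruby
# A Gaussian belief over one synth parameter (e.g. a filter cutoff).
# sample() draws a candidate value; update() conditions the belief
# on listener feedback. The update rule is a simple heuristic.
class ParamBelief
  attr_reader :mean, :sd

  def initialize(mean, sd)
    @mean, @sd = mean, sd
  end

  # Box-Muller draw from the current belief.
  # (1.0 - rand gives a value in (0, 1], avoiding log(0).)
  def sample
    u1, u2 = 1.0 - rand, rand
    @mean + @sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math::PI * u2)
  end

  # Move the mean toward samples that scored well (score in 0..1)
  # and shrink the spread as positive evidence accumulates.
  def update(value, score, lr: 0.2)
    @mean += lr * score * (value - @mean)
    @sd *= (1.0 - lr * score * 0.5)
  end
end
```

In a live setting, each `sample` could be sent to Sonic Pi over OSC, and each listener reaction fed back as the `score` — so the “prior” narrows toward the sounds people respond to. Starting with a single synth parameter like this, then moving to chords or percussion patterns, seems like a natural progression.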
