I’m digging through the archives as we speak, but there’s not a ton and I wonder if anyone knows of solid tools for my particular use case.
We’re going to be grabbing some time-series data, and I’m not yet sure what the range will be, but here’s my basic plan:
- Scale the range of data to a pleasant range of frequencies.
- Create some way of defining chords that corresponds to notes over that entire scale (could be done programmatically, using a root note and frequency ratios). Maybe an array?
- Quantize each data point to the nearest frequency in the list for the current chord.
- Change chords over time so that it’s musical.
I’m hoping to end up with basically an arpeggiator controlled by these data points. It would be approximate – the same data point could yield different notes, depending on the current chord – but you’d get the general sense of the numbers as they fluctuate over time.
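To make the plan concrete, here’s a minimal plain-Ruby sketch of the scaling and quantizing steps (the helper names, the 0..1 data range, and the chord voicing are all made up for illustration):

```ruby
# Hypothetical sketch: scale a data range onto a note range, then
# quantize each point to the nearest note of the current chord.

# Linearly map x from [in_min, in_max] onto [out_min, out_max]
def scale_range(x, in_min, in_max, out_min, out_max)
  out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min).to_f
end

# Snap a value to the nearest allowed note
def quantize(value, allowed_notes)
  allowed_notes.min_by { |n| (n - value).abs }
end

chord_notes = [52, 55, 59, 62, 64] # e.g. MIDI notes of an E minor voicing
data        = [0.1, 0.5, 0.9]      # made-up data points in 0..1

notes = data.map do |d|
  quantize(scale_range(d, 0.0, 1.0, 52, 64), chord_notes)
end
# notes => [52, 59, 62]
```

Swapping `chord_notes` out over time would give the chord-change behaviour, while the quantize step is what makes the mapping “approximate” as described.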
Any hints? Libraries, code examples? I’m pretty new to Sonic Pi, so whatever you can give me, I’ll gobble up! Thank you!
Sounds like a fun project!
First I should say that Sonic Pi wasn’t specifically created for this kind of use case. However, it should definitely be possible. The reason why I’m saying this now is that for the use cases that Sonic Pi was created for (teaching intro CS + Music, live performance, composition) I’m confident that the majority of things are both well documented and sufficiently simple for an average 10-year-old to pick up.
Neither of these things, unfortunately, will be completely the case with a project like this. As you have found, there isn’t a lot of documentation and, in fact, you’ll see that we’ll actually have to veer off the “happy path” and use undocumented (read: unsupported) functionality, which will require more CS than is typically needed by the standard Sonic Pi user.
This might all sound a bit scary, but really everything I’m saying would be exactly the same for any standard programming language - it’s just that Sonic Pi tries to be more of an instrument than a programming language.
In essence you’ll have to lean heavily on Ruby which is the language Sonic Pi is implemented on top of.
First up, a quick question: what format is your data in? Is it in a file such as a CSV, in a database, or is it coming in live from a web API?
I think once we have your data in a nice simple array we should be up and running. You’ll find it’s a breeze to iterate through the data, munge it a bit, and use the values to define both when things happen and what happens - that’s the kind of thing Sonic Pi makes really easy.
Sweeet, thanks for the insight!
This is exactly what I needed – I’m comparing options of Sonic Pi vs Arduino (Teensy), Pure Data, Python, etc.
We can probably get the data into any format (I’m on a hackathon team), so that gives some flexibility. I’ll report back as we progress!
Cool - something like a CSV file would be super easy to work with.
Probably the workflow I’d wind up with is as follows:
- Read the data into an internal array. I’d probably use an array of associative maps with a key such as :timestamp, plus other appropriate keys for the data values you want to work with.
- Map over that array to create a new array with modified values. (This is where you’d munge the values where necessary - unless the munging has already happened prior to storing the values in a CSV file or whatever you wind up with.)
- Iterate over the (new) array and for each value do something - this might be to sleep for some amount of time with respect to the :timestamp, and/or it might be to play a note, etc.
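As a rough plain-Ruby sketch of that three-step workflow (the column names, values, and munging formula here are mock assumptions, not from any real dataset; CSV is Ruby’s standard-library parser):

```ruby
require "csv"

# Hypothetical mock data - real column names will depend on your dataset
csv_text = <<~CSV
  timestamp,value
  0.0,0.12
  0.5,0.48
  1.0,0.93
CSV

# 1. Read into an array of associative maps keyed by symbols
rows = CSV.parse(csv_text, headers: true).map do |row|
  { timestamp: row["timestamp"].to_f, value: row["value"].to_f }
end

# 2. Map over it to munge values (here: value scaled to a 0..12 semitone offset)
munged = rows.map { |r| r.merge(offset: (r[:value] * 12).round) }

# 3. Iterate; inside Sonic Pi you would sleep between timestamps and play notes
munged.each_cons(2) do |current, upcoming|
  # play :e3 + current[:offset]
  # sleep upcoming[:timestamp] - current[:timestamp]
end
```

Once the real data lands, only step 1 (the column names and parsing) should need to change.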
Doing things like looking up notes of a scale with an index is pretty simple:
# Get the 4th element from an E minor pentatonic scale
(scale :e3, :minor_pentatonic, num_octaves: 3)[3]
Changing the current chord could be as simple as having the root of the scale read from Sonic Pi’s Time State. The root can be modified either whilst iterating through the notes (say, every nth value) or from a separate thread running in a live_loop. Reading from Time State is done via the get function, like so:
# Read root note from Time State and store in a local variable
root = get[:root_note]
(scale root, :minor_pentatonic, num_octaves: 3)
Elsewhere you can call set :root_note, :e4 to set it. Note that using set is completely thread-safe and deterministic, so you’ll always get the same musical output regardless of race conditions.
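Put together, one possible Sonic Pi sketch of the idea (this only runs inside Sonic Pi itself, and the chord roots and timings below are just placeholder assumptions):

```ruby
# Sonic Pi sketch - placeholder chord roots and timings
live_loop :chords do
  # Cycle the root note every 4 beats
  set :root_note, (ring :e3, :g3, :a3, :b3).tick
  sleep 4
end

live_loop :arp do
  root  = get[:root_note] || :e3
  notes = (scale root, :minor_pentatonic, num_octaves: 2)
  play notes.tick, release: 0.2
  sleep 0.25
end
```

Replacing the `notes.tick` arpeggio with indices derived from the data points would give the data-driven arpeggiator described above.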
With respect to other options:
- I’m pretty confident that anything you can do in Python you’ll be able to do in Sonic Pi - only Sonic Pi will also give you the added bonus of having very easy access to well-timed, thread-safe sound generation.
- The Teensy is amazing hardware, but you’re likely to have to talk serial to get the data to it and either work with someone else’s synth code or design your own.
- Pd is also amazing but is pretty low level (it’s kind of like a code version of a modular synth) and I don’t think it has many particularly high-level constructs for easily working with external data (although I’m sure it must be possible).
Whatever you end up with, I’d love to hear what you do and whether or not Sonic Pi helped in any way. All feedback helps inform how Sonic Pi will continue to evolve in the future.
w00t! I’ve decided to stick with Sonic Pi, and am dropping bits of code into this gist: https://gist.github.com/alexglow/f2d3c74731f1f608d29b71cee3db79cd
I’m new to Ruby, but it’s very friendly so far! Right now, I’ve managed to get arpeggiation going with a chord progression, which is the core of what I was after. I just saw your suggestions, and those could definitely help make it cleaner and simpler.
Then, it’s just a matter of figuring out the CSV part (once my compatriots have their data)… if anyone has pointers on pulling in data from a file, that would be amazing!
Super - we can definitely help you along the way as much as you’d like
Great that you’re finding Ruby friendly. Personally I see it as a slightly more friendly version of Python. It certainly has been easier to teach to beginners due to the more forgiving syntax.
Whilst you’re waiting for your compatriots to throw the data at you, it might be possible to get them to give you a super simple data mock-up consisting of just a few made-up data points. We can then work with that and then hopefully things will “Just Work” when the real data lands…
Thanks so much, Sam!
Here’s our final result from the hackathon: https://github.com/asanspace/planet-hack-2020/tree/master/sonic
And the final Sonic Pi script: https://github.com/asanspace/planet-hack-2020/blob/master/sonic/sonicpiscript
We took cloud-cover data from a series of photos, and turned it into a pretty arpeggio over a chord progression. Higher pitch = cloudier skies.
With a series of 128 photos over Egypt, you get this: https://github.com/asanspace/planet-hack-2020/raw/master/sonic/planet-sounds-128.wav
And a longer one, over San Francisco: https://github.com/asanspace/planet-hack-2020/raw/master/sonic/sf_hax_music.wav
This is ace! Will you be making any vids about this project?
Thanks so much for persevering with Sonic Pi - sorry it was a bit bumpier than I would have liked, but really glad you managed to make something with it.
I hope you decide to stick around here for a while.
It really wasn’t all that bumpy! And I’ll absolutely make a video, once I’ve got it onto some actual Pi hardware. Working on getting the audio out to a USB speaker now.
In the meantime, I’ve published a write-up, and started a repo to drop in some useful (for me) little scripts and patches: