I’ve been messing around with Sonic Pi for quite a while now and have tried to enumerate the major missing pieces (from my perspective) that keep me from comfortably using Sonic Pi for electronic music production. WDYT, do any of these strike a chord?
Quality of Life
There are some improvements to the core of how Sonic Pi works that would make it easier to understand and to work with when coding.
Live Loop Rationalisation
I have tinkered a lot with the way I set up my live loops in order to work around several constraints:
- Order matters - if my “click track” is the first live loop, the first cue it sends goes out before the other live_loops have started waiting on it, so the first beat is missed. This only really matters when restarting from a full stop, but it makes getting the timing right awkward. My workaround is an extra 1s sleep on the first iteration of my click tracks (see the sketch after this list).
- Interactive updates - if my loop starts with a sync (the obvious place to put it) but does not run for exactly the full length of the bar (eg: using at to trigger events), the loop finishes before the sound has finished and sits waiting for the next cue. In this state, re-running the code to update the loop delays the update by the whole length of the bar, which is awkward when trying to live code.
- Sleep-based loops - if the sleeps in your loop add up to exactly one bar, the loop arrives back at its sync at the same instant the next cue from your click track fires, misses it, and effectively triggers only every other bar. I’ve entirely stopped using sleep-based live_loops for this reason, but the sleep-based approach is the more understandable one, so it would be great to have it work nicely.
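For reference, my current click-track workaround looks roughly like this (a minimal sketch assuming a 4-beat bar; the sample and note choices are arbitrary):

```
live_loop :click do |i|
  sleep 1 if i == 0           # extra 1s on the very first iteration only, so
                              # the other loops are already waiting on :bar
  cue :bar
  sample :drum_cymbal_closed
  sleep 4
  i + 1                       # live_loop feeds the block's return value back in
end

live_loop :bass do
  sync :bar
  at [0, 1.5, 2.5] do         # event-based timing instead of sleeps, to dodge
    play :e1, release: 0.4    # the exactly-one-bar problem from the last bullet
  end
end
```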
I do have a favourite local setup that works around all of this, but the correct thing to do would be to update live_loop (and sync/cue) based on these practical realizations so that it behaves as expected in all the scenarios that normally come up. This might be a breaking change, but that ought to be something we can do.
Global State
For communicating global state at run time we have get/set, but they only allow storing scalar types, not objects (eg: instruments, fx, custom classes). Sometimes it is desirable to maintain global state about your composition beyond these static scalar values. For example, when recording and looping buffers, one needs to track which buffers are currently recording or should start recording after the next sync event.
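To make the limitation concrete, here is a minimal sketch of how the looper bookkeeping has to be squeezed into scalars today (the key names and the commented-out class are made up):

```
set :recording, true        # booleans / numbers / strings / symbols work
set :rec_target, :loop_a    # track *which* buffer by name, because the
                            # buffer object itself can't be stored
# set :looper, LooperState.new   # what I'd actually like - not allowed today

live_loop :looper do
  sync :bar
  if get(:recording)
    # the actual record-into-buffer logic would go here, keyed off :rec_target
    set :recording, false
  end
end
```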
DAW Equivalence
There are features electronic music producers expect from their work environment and many of them would also make sense for a programmable music production environment.
Audio Buses
Like sound_in/sound_out - named internal audio channels that make it straightforward to send multiple instruments through the same fx, and also to split a single instrument out into parallel fx stacks.
This would also enable sidechain compression, which is a cornerstone of modern electronic music, as well as the obvious feature of controlling levels centrally (solo/mute).
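A purely hypothetical sketch of how this could look - bus_out/bus_in below are invented names modelled on the existing sound_out fx and sound_in synth, and nothing like them exists today:

```
live_loop :drums do
  with_fx :bus_out, bus: :drum_bus do    # hypothetical: route into a named bus
    sample :bd_haus
    sleep 0.5
    sample :sn_dolf
    sleep 0.5
  end
end

live_loop :drum_fx do
  with_fx :compressor do
    synth :bus_in, bus: :drum_bus, sustain: 4  # hypothetical: read the bus back
  end
  sleep 4
end
```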
Control Buses
Designate set/get values that are automatically applied as control parameters to synths/fx whenever they are adjusted. Automating the “automation tracks” of the DAW world without having to open-code ad-hoc live loops that spam get/control would be nice!
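Today that workaround looks roughly like this (a sketch; the :cutoff key and 0.05 poll interval are arbitrary):

```
set :cutoff, 80

with_fx :lpf, cutoff: 80 do |fx|
  # the ad-hoc "automation" loop: poll the time-state and spam control
  live_loop :cutoff_automation do
    control fx, cutoff: get(:cutoff)
    sleep 0.05
  end

  live_loop :pad do
    play :e3, release: 0.9
    sleep 1
  end
end

# ...then from anywhere else, later:  set :cutoff, 110
```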
Real Time Warp
Allow quickly moving time forward globally without regard to the passage of real time. It is very common to want to tweak a particular beat or bar in your composition, and there is currently no good way to speed up getting to it.
Exposing a sliver of code from deep within a live loop so it can be triggered directly can be difficult, and waiting for the moment of interest to come around every 10s wastes time.
This need not be fancy - it would already be amazing just to be able to move time forward at CPU speed while omitting all actuation (midi/osc/synth) - although for this feature to work best it would have to do some tracking to restart audio cues at the right times (play a sample from where it would have left off? Not as easy for external synths and devices!)
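For illustration only - fast_forward below is an invented name (and distinct from the existing time_warp, which shifts event timestamps rather than skipping real time):

```
# Hypothetical: run the next 64 beats of logical time at CPU speed with all
# actuation (midi/osc/synth) suppressed, then resume normal playback just
# before the bar I actually want to hear.
fast_forward 64
```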
Symbolic Visualization
Sonic Pi does have a tiny wave display for the audio signal - but it offers little help with understanding the symbolic timeline of your composition. The best tool for that today is really the log, which gets very busy very quickly and isn’t great at expressing note or rhythmic information in the first place.
Sonic Pi ought to have a symbolic display for events, such as played notes shown on a keyboard. And to make rhythmic content understandable, these events should also be displayed on a timeline.
Serum Equivalence
Cool modern electronic music is more often than not built on top of cool sounds, and creating those sounds is a crucial element of electronic music production. There are a couple of things that could up Sonic Pi’s sound design game.
Instrument Abstraction
Provide an official way to combine multiple synths, fx and midi/osc events that together produce a sound into a single playable “instrument” that can be adjusted with the control function.
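What is possible today is roughly a function wrapper - a minimal sketch (super_lead is a made-up name). The catch is that the result is not a first-class node, so there is nothing to hand to control afterwards:

```
define :super_lead do |note, opts = {}|
  with_fx :reverb, room: 0.6 do
    with_synth :dsaw do
      play note, **opts                  # main voice
    end
    with_synth :sine do
      play note + 12, amp: 0.4, **opts   # octave layer to thicken the sound
    end
  end
end

super_lead :e2, release: 2
```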
LFOs
In addition to the existing general _slide parameters, add _lfo parameters that can be used to apply basic envelopes to any changeable parameter.
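Today an LFO has to be hand-rolled with a control loop - a minimal sketch (the sine shape and 0.0625-beat resolution are arbitrary choices); a hypothetical cutoff_lfo-style opt would replace the whole thread:

```
use_synth :dsaw
s = play :e2, sustain: 8, cutoff: 80, cutoff_slide: 0.0625

# hand-rolled LFO: 4 sine cycles across the 8-beat note,
# 32 control messages per cycle
in_thread do
  128.times do |i|
    control s, cutoff: 80 + 30 * Math.sin(2 * Math::PI * i / 32.0)
    sleep 0.0625
  end
end
```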
Decent Piano Synth
This is not “Serum”, but any music production environment needs a decent piano and Sonic Pi currently doesn’t really have one.
Wavetables
Basic audio signal types are the foundation, but a modern “organic” sound is hard to achieve by layering sines, saws and pulses - plus it’s inefficient. Sonic Pi should have a synth that plays wavetables, and primitives to extract wavetables from samples and buffers.
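For contrast, the status quo - a crude stack of primitives where every layer costs a separate synth voice (organish is a made-up name):

```
define :organish do |n|
  with_synth :sine do
    play n                     # fundamental
    play n + 12, amp: 0.5      # 2nd harmonic (octave up)
    play n + 19, amp: 0.3      # ~3rd harmonic (octave + fifth)
  end
  with_synth :saw do
    play n, amp: 0.2, cutoff: 90   # filtered saw for some buzz
  end
end

organish :e3
```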
Patches
Virtually all synths come with interesting named collections of settings to get the composer started with some cool sounds and to serve as starting points for further exploration. Sonic Pi should also have a function to instantiate a patch, plus a collection to draw from - one or more synths and/or fx that together produce a sound. For a single synth this would be pretty trivial (just a hash of all the relevant parameters); the potential multi-synth direction depends on having some kind of effective instrument abstraction to represent it.
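A minimal sketch of that trivial single-synth case - the patch names and the play_patch helper are made up for illustration:

```
patches = {
  warm_pad:   { synth: :dsaw,  attack: 0.8, release: 2,  cutoff: 85, amp: 0.7 },
  soft_pluck: { synth: :pluck, release: 0.6, coef: 0.4,  amp: 0.9 }
}

# Instantiate a patch: look up the opts hash, peel off the synth name,
# and allow per-note overrides.
define :play_patch do |name, note, overrides = {}|
  p = patches[name].dup
  with_synth p.delete(:synth) do
    play note, **p.merge(overrides)
  end
end

play_patch :warm_pad, :e3
play_patch :soft_pluck, :e5, amp: 0.5
```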