Laundry list of feature requests for electronic music production

I’ve been messing around with Sonic Pi for quite a while now and was trying to enumerate major missing pieces (from my perspective) to be able to comfortably use Sonic Pi for electronic music production. WDYT, do any of these strike a chord?

Quality of Life

There are some improvements to the core of how Sonic Pi works that would make it easier to understand and work with in the domain of coding.

Live Loop Rationalisation

I have tinkered a lot with the way I set up my live loops in order to work around several constraints:

  1. Order matters - if my “click track” is the first live loop, then the first cue it sends will go out before the other live_loops have synced to it, so the first beat will be missed. This only really matters when restarting from a full stop, but it’s awkward for getting the timing right. I currently add an extra 1s sleep to my click tracks, for the first iteration only.
  2. Interactive updates - if my loop starts with a sync (the obvious place to put it) but does not run for exactly the full length of the bar (eg: using at to trigger events), then the loop will finish before the sound has finished and will be left waiting for the next cue. In this state, re-running the code to update the loop will delay the update by the whole length of the bar, which is awkward when trying to live code :slight_smile:
  3. Sleep-based loops must not sleep for exactly the length of a bar, otherwise they will land exactly on the cue from your click track, miss it, and effectively trigger only every other bar. I’ve entirely stopped using sleep-based live_loops for this reason. But the sleep-based approach is easier to understand, so it would be great to have it work nicely.

I do have a favourite local setup I use to work around all this, but the right thing to do would be to update live_loop (and sync/cue) based on these practical observations so that it behaves as expected in all the scenarios that normally come up. This might be a breaking change, but that ought to be something we can do.
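For reference, the first-iteration workaround mentioned in point 1 might look something like this (a sketch with made-up loop names; not a proposed fix):

```ruby
live_loop :click do
  # extra delay on the very first iteration only, so the other
  # loops are already waiting on sync before the first cue fires
  sleep 1 if tick == 0
  cue :bar
  sample :drum_cymbal_closed
  sleep 4
end

live_loop :melody, sync: :bar do
  play :e3, release: 2
  sleep 4
end
```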

Global State

For communicating global state in game time we have get/set, but they only allow storing scalar types, not objects (eg: instruments, fx, custom classes). Sometimes it can be desirable to maintain some global state about your composition beyond these static scalar values. For example, when doing recording and looping of buffers, one needs to track which buffers are currently recording or should be recorded after the next sync event.
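To illustrate the kind of bookkeeping I mean, here is a sketch that stays within scalar values (the :record_next key and loop names are made up):

```ruby
# Decide which looper records on the next bar using only a symbol
# in the time state - objects like the fx/synth nodes themselves
# can't be stored this way.
set :record_next, :loop_a

live_loop :recorder do
  sync :bar
  if get(:record_next) == :loop_a
    # ...record into buffer a here...
    set :record_next, :loop_b
  end
end
```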

DAW Equivalence

There are features electronic music producers expect from their work environment and many of them would also make sense for a programmable music production environment.

Audio Buses

Like sound_in/sound_out - named internal audio channels, to make it straightforward to send multiple instruments through the same fx, but also to split a single instrument out to be processed by parallel fx stacks.

This would also enable sidechain compression, which is a cornerstone of modern electronic music, plus the obvious feature of being able to control levels centrally (solo/mute).
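Purely imagined syntax to make the idea concrete - neither with_bus_out nor a side_chain: opt exists in Sonic Pi today:

```ruby
# Imagined API - not real Sonic Pi commands!
with_bus_out :drums do
  sample :bd_haus
end

# duck the pad whenever the :drums bus is loud
with_fx :compressor, side_chain: :drums do
  synth :prophet, note: :e2, sustain: 8
end
```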

Control Buses

Designate set/get values that are automatically applied as control parameters to synths/fx when adjusted. Automating the “automation tracks” of a DAW without having to open-code ad-hoc live loops to spam get/control would be nice!
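The ad-hoc workaround alluded to above might look like this today, using the real get/set and control functions (the :cutoff key and loop name are illustrative):

```ruby
set :cutoff, 70

# a long-running synth node we want to automate
s = synth :dsaw, note: :e2, sustain: 100, cutoff: 70

# "control bus" emulation: poll the time state and re-apply it
live_loop :automation do
  control s, cutoff: get(:cutoff)
  sleep 0.25
end

# elsewhere, changing the value "automates" the synth:
# set :cutoff, 100
```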

Real Time Warp

Allow quickly moving time forward globally, without regard to the passage of real time. It is very common to want to tweak a particular beat or bar in your composition, and as things stand there is no good way to speed up triggering it.

Exposing a sliver of code from deep within a live loop so it can be triggered directly can be difficult, and waiting for the moment of interest to come around every 10s wastes time.

This need not be fancy - it would already be amazing to just be able to move time forward at CPU speed while omitting all actuation (midi/osc/synth) - although for this feature to work best, it would have to do some tracking to restart audio cues at the right times (play a sample from where it would have left off? Not as easy for external synths and devices!)

Symbolic Visualization

Sonic Pi does have a tiny wave display to see the audio signal - but it offers little help with understanding the symbolic timeline of your composition. The best it can do here is really the log, and that gets very busy very quickly and isn’t great for expressing note or rhythmic information in the first place.

Sonic Pi ought to have a symbolic display for events, such as notes played shown on a keyboard. And for understanding rhythmic content, these should also be displayed on a timeline.

Serum Equivalence

Cool modern electronic music is more often than not built on top of cool sounds and creating these cool sounds is a crucial element of electronic music production. There are a couple of things that could up the Sonic Pi sound design game.

Instrument Abstraction

Provide an official way to combine multiple synths, fx, and midi/osc events that trigger sound, so that the combination can be treated as a single playable “instrument” which can be controlled with the control function.
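Functions get part of the way there today - a sketch of a hand-rolled “instrument” (the name pluck_lead is made up), though note there is no single handle to pass to control afterwards, which is what the abstraction would add:

```ruby
define :pluck_lead do |note, amp: 0.5|
  with_fx :reverb, room: 0.7 do
    with_fx :lpf, cutoff: 90 do
      synth :pluck, note: note, amp: amp
    end
  end
end

pluck_lead :e4
```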


LFOs

In addition to the existing general _slide parameters, add _lfo parameters that can be used to apply basic envelopes to any controllable parameter.
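Until then, an LFO can be hand-rolled with the existing control and _slide opts - a sketch:

```ruby
# modulate the cutoff of a running synth from a loop; cutoff_slide
# smooths the steps so it sounds like a slow triangle LFO
s = synth :tb303, note: :e1, sustain: 32, cutoff: 70, cutoff_slide: 0.5

live_loop :lfo do
  control s, cutoff: (ring 70, 85, 100, 85).tick
  sleep 0.5
end
```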

Decent Piano Synth

This is not “Serum”, but any music production environment needs a decent piano and Sonic Pi currently doesn’t really have one.


Wavetables

Basic audio signal types are the foundation, but a modern “organic” sound is hard to achieve by layering sines, saws, and pulses - plus it’s inefficient. Sonic Pi should have a synth that plays wavetables, and primitives to extract wavetables from samples and buffers.


Patches

Virtually all synths come with interesting named collections of settings to get the composer started with some cool sounds and to serve as starting points for future exploration. Sonic Pi should also have a function to instantiate a patch, and a collection to draw from - one or more synths and/or fx that produce a sound. For a single synth this would be pretty trivial (just a hash of all the relevant parameters); the potential multi-synth direction depends on having some kind of effective instrument abstraction to represent it.


So why do you still use Sonic Pi? When a piece of software doesn’t fill my needs, I tend to try another one rather than trying to convince people that the software is not very good.
I use Sonic Pi with children aged 13-14 and they have fun. I use Sonic Pi to have fun myself, and it’s OK.
Don’t forget that Sonic Pi doesn’t have the same developer resources as DAW software such as Ableton, Cubase, etc.


Regarding everything under “Serum Equivalence,” it’s a desirable idea, of course, but of much less utility than you might think. I’m a new user, but it looked to me from the start like SPI is about providing nothing beyond a functional collection of soundmaking capacity. For professional-level sounds, you do MIDI out to hardware or plugins—which are designed to provide the best in soundmaking that you can get—or you can provide your own samples from libraries that, likewise, are designed to a professional level. Even if SPI were a $500 or a $1000 package that provided pro sound, I still wouldn’t exclude my best stuff. I mean, if, for example, Aaron partnered with Spectrasonics to provide an integrated SPI-Omnisphere package (I own Omnisphere, but even if I didn’t), I still see most of my “serious” sound time spent with MIDI control of external stuff so that I wouldn’t opt for the pro sound add-on. Equivalently, it wouldn’t make sense to ask Spectrasonics to provide a Sonic Pi-like (or other) composing tool to Omnisphere because its arpeggiators aren’t enough; they’d just refer you to the division of labor.


I think you’ve missed a trick here. If you want to use sleep-based loops and have them cue nicely without worrying about total sleep length, then you can run the play/sleep code in_thread and let the live_loop go straight back to a sync statement to wait for the next bar (or however long the phrase is)


If you put your cue after the sleep in your click loop rather than before it, then the following loop will be set up and ready to play before that first cue has gone out, so it won’t miss the first one. Something like this…

live_loop :a do
  sleep 1
  play :C4, amp: 0.1
  cue :bar
end

live_loop :b, sync: :bar do
  play :G4, amp: 0.1
  sleep 1
end

Exactly. And this is also true using a DAW - unless you want to sound like everyone else you’re going to want to have your own collection of sound sources. People tend to build their collection of favs, software, hardware, VSTs, sample libraries. And Spi is great at ‘playing’ those.

Plus the sample-playing functionality is excellent, so you don’t necessarily need to go to an external sample player to do that.

For a good piano sound, I play midi notes out to an Aria sample player with a nice grand piano sample library, with lots of samples at each note velocity. Lovely.


I wasn’t 100% sure about that one, so wrote this to check, and yes you can store an object in the time state, in this case a scale (ring). I’ve also shown how to sync up a sleep-based live loop that’s shorter than the bar.

True that you can’t make custom classes, but not sure that’s a barrier to serious music production in itself.

You make a lot of interesting points - I’d recommend reading around the forum and posting ideas, as some may change your thinking. I know that I had plenty of head-scratching moments and misconceptions. The good people here put me on the right path quickly :smile:

I think your general point is valid: Sonic Pi isn’t like a DAW. Most DAWs compete with very similar features; there’s a broad consensus around that set. Myself, I now use a DAW for recording, SPi for live work (well, when we can play live again), and a mix for composition. The right tool for the right job. The thing is that SPi is a multi-tool and fills in what the DAW can’t do, or can’t do elegantly.

live_loop :a do
  if tick % 2 == 0
    x = scale(:C4, :major)
  else
    x = scale(:C4, :minor)
  end
  set :myscale, x
  sleep 4
end

live_loop :b do
  sync :a
  y = get :myscale
  in_thread do
    7.times do
      play y.tick, amp: 0.1
      sleep 1.0/2
    end
  end
end
Thanks for sharing your perspectives - and thanks for the specific tricks to try out!

The most interesting question raised IMO was around the overall role of Sonic Pi in the electronic music toolkit. It was definitely correct to point out that one should use the correct tools for the job, eg: I do agree that Sonic Pi is about live coding music and doesn’t need to compete with Omnisphere on sound design or have feature parity with any particular DAW.

But coming back to the example of Ableton - despite the stock sounds being familiar to electronic musicians I would say that I could absolutely write a track with stock Ableton synths/samples and non-musicians listening to it wouldn’t think to mention anything was weird or lacking about it (other than that my music itself is fundamentally weird and lacking). Ableton is ready to produce high-quality music out of the box.

But this makes me wonder - what do you think Sonic Pi should be capable of, out of the box? I want more than a toy to play with or a tool for teaching coding/music basics. Sonic Pi should be ready to produce quality electronic music out of the box:

I want to be able to open a new buffer in Sonic Pi and after 15 minutes of coding, have an interesting track that sounds convincing.

It’s to be expected that a bunch of musicians and hackers will have different perspectives on whether that’s already the case and/or what might be missing to achieve it, but I wonder if there’s agreement on that being a goal in the first place?


I don’t think it would be impossible to build a company, hire some developers, change the Sonic Pi code to a proprietary code base and build something which comes close to what you are proposing if you started charging like Ableton does (Live 11 Intro 79 €, Standard 349 €).

But this is obviously not what Sam intended. So you have a core developer team where nearly all of the developers build stuff in their free time and with enthusiasm, no venture capital, no marketing team, no business model.

Seen from this perspective you are kind of comparing apples and oranges but nevertheless I think also yours is an interesting perspective.


Yes I agree it’s an interesting discussion. And could well help people who are thinking about why they should/shouldn’t use Sonic Pi. As @martin says, it’s apples and pears.

Yes that’s true, you can stay entirely within the Ableton sphere and create all kinds of things, for those who have the skills. But I don’t think it’s being negative to say that different software/hardware tends to lead you in a certain direction - which can be good and bad, and true of all these products. My analogy is early synths and drum machines - think of how the 808 or DX7 became the sound of umpteen hits. Sonic Pi is like that too - maybe not the hits (yet!) but in terms of leading in certain ways.

I’m hearing a lot of incidental music to dramas and documentaries these days that does sound suspiciously similar :smile:

For live work, I started out using the DAW Session Mode, which is good, but I found SPi far more flexible and, honestly, more fun. And I do think that the session mode does tend to produce a particular kind of output.

I think this brings into play the professional vs amateur question. If I were a pro, then I’d be looking to the quickest route to serve up the goods: Ableton is probably it. But as an amateur, the process matters as much to me as the end result and I like to hand-carve things, have a unique setup, all that stuff.

In that regard, it’ll be no surprise that Reaper is my DAW of choice - because it’s more of a box of parts than a complete car ready to drive away.

The modular synth hobby is similar I think - why on earth would anyone splash out £££ on a Eurorack setup? Because they like it. (I don’t have one btw).

Oh, wavetables - I agree that would be really nice to have inside SPi!!!

I’ll try to do a bit of a brain dump here with some context on these features and what would need to happen:

Live Loop Rationalisation

Already some excellent suggestions above.

Global State

Given the multi-threaded nature of Sonic Pi code, state needs to be handled very carefully which is why get/set are restricted to scalars. I agree with your example about recording and looping - it can be a little cumbersome. In the past I just wrapped everything in my own function but it wasn’t ideal. It’s something that could be looked at.

Audio Buses

I think something like this does exist already, I just need to figure out where.

Control Buses

Totally would be nice - see my comments below about LFOs though

Symbolic Visualization

One of the core team is doing amazing work which should make it easier to add these new kinds of GUI elements in future. Any C++ programmers wanting to have a go at this, please holler!

Instrument Abstraction

Can already be done with functions to some extent, but I agree it would be nice to have an official way.


LFOs

We’ve talked about this a few times - we need to lay a bit more groundwork for the synths to take a control bus from SuperCollider as an input. It could be added in the same way as _slide is done now, but it would add weight to some already quite large synthdefs. Not sure what the best option is for this.

Decent Piano Synth

Totally agree - I’m not up on what the state of the art is with piano sounds but I assume that a lot of the good ones require big sample packs. We’ve not got the infrastructure for sample based instruments but it’s definitely possible. If there’s a more lightweight approach that doesn’t require Gbs of samples then I’d be interested to hear about it.


Wavetables

Yes! We actually ship a large collection of wavetables with the code on github (search AKWF), but I never managed to get them working well. I think Ethan had a bit more success recently though. It probably just needs dusting off. The bigger issue is, as you mention, how we make these into primitives in the language that work well with samples and buffers etc. That requires a bit more engineering effort, as you’d be writing Ruby to manage something that also exists in SuperCollider. Not impossible, but also not trivial.


Patches

Yep, same as the abstraction point above - it would be nice to have a proper way to do this.

Personally, I appreciate the perspective, especially given that you’ve clearly spent some time in Sonic Pi. I don’t see any reason why it can’t hold its own against Ableton; it’s just a case of tackling the improvements step by step. It also seems a good point to say that contributions to the codebase are always welcome. I’ve learnt a ton from working on it over the past 7 years!


Hi @xavierriley re visualisation please please please don’t bloat the GUI with expensive graphics code. A key feature for me is that SPi is low resource use.

Re Global State - scalars only? The example I sketched above stores a ring, and I’m sure I’ve stored FX objects (or pointers to objects) in code before now, I’ll have to check.

I really hope the team resists the pressure to compete with something like Ableton. I urge you to concentrate on what makes SPi unique. If anything, a better route would be to compete with things like BeatstepPro - as a hugely flexible midi hub, controller, sequencer…


You’ve probably considered this already, and I don’t know how much work would be required to build an in-house solution (probably an inordinate amount), not to mention it would be heavy on CPU usage, but. Physical modeling. I use Pianoteq.

Some excellent discussion points above! It’s definitely helpful to get an idea of how Sonic Pi fits with various folks’ workflows and use cases :slightly_smiling_face:
I’ll admit that I’ve not yet fully stretched Sonic Pi’s capabilities (despite being on the core team :joy:). There are bound to be use cases where I’ve yet to bump into potential limitations, or areas where further improvements might make certain workflows easier. That being said, I agree with many of your points above @siimphh. Here are a few of my own comments/thoughts:

  • Global State: as has been alluded to above, it’s not exactly only scalar values that can be stored in Time-State, but anything that is immutable. (Which allows us to guarantee that the state will remain valid in a multi-threaded system).

  • Instrument abstraction: Like Xav says, it can be achieved to a certain degree with functions, but a direct Sonic Pi method would definitely be handy. IIRC Sam has mentioned in the past that he is interested in this idea (and may have been thinking about a way to achieve it? can’t quite recall).

  • LFOs: Sam has mentioned various ideas around this and automation in general. One such idea is to allow the user to provide a waveform which can be used as an automation guide to control synth/fx opts.

  • Wavetables: as Xav has mentioned, I actually implemented a proof of concept for wavetable support a few years ago! :slightly_smiling_face: I shared it with the core team back then, but the implementation was not firmly agreed upon. (@xavierriley, could you remind me of particular points that might have been raised about it at the time?) I actually had a concern about the potential syntax of it myself, given that I couldn’t quite work out a nice command format for it that fit nicely with other Sonic Pi commands. Here is the commit for my proof of concept: (it has not been kept up to date with the main branch, but you should get an idea). I’d love folks to chime in with any thoughts they might have about a suitable syntax for wavetable support!
    Synths - add support for a wavetable synth (WIP) · ethancrawford/sonic-pi@73fe8c9 · GitHub

  • Patches: TL;DR: Agreed, and I am working on this! This is one area that I am super passionate about also :grinning_face_with_smiling_eyes: For a long time, I have been very interested in expanding the variety of synths and fx that are available in Sonic Pi, and the sounds that they are capable of producing. I have slowly been working on a handful of new synths and fx, and have several in various stages of completion. My goal has always been to make sure that they are each capable of producing a variety of sounds, which in turn requires a way to handle presets/patches. In order to do this, I feel that we first need a nice way to display them in the documentation. However, it would be much easier to do that if the implementation of the documentation system was rebuilt - I have been thinking hard about the best way to do so over the last few months, and have most of an idea about how it could be achieved. I look forward to hopefully having the time to 1) make the documentation system easier to maintain, 2) allow synth presets/patches to be clearly and helpfully documented, and 3) make progress on creating interesting new synths and fx for distribution with Sonic Pi :smiley:

Finally, with apologies for potentially sounding like a broken record - like Xav mentions, we’re always keen for contributions from the community - we’d love help to make things like these a reality! :grinning_face_with_smiling_eyes:


Looking through the how-to-contribute and open issues on GitHub. I’ve never worked on a real-life project before, and these tasks look daunting :sweat: Any suggestions for how I might start? Would it be a better idea for me to gain some more programming experience first?

Where does discussion about development generally take place? I have been experimenting with stuff locally (eg: livecode/lib at main · windo/livecode · GitHub and GitHub - windo/trosces: Visual traces for OSC note/percussion/layer events for visualization) and would prefer to be working in directions that are in sync with where the dev team is going to and that could potentially be contributed into the core where applicable.

Hi again folks,

@d0lfyn - unfortunately I don’t have any easy answers. More programming experience is always helpful of course. I know that since Sonic Pi is a (mostly) volunteer-driven project, you might not have the time to immediately learn all the technology that might be useful for Sonic Pi development - but at the very least, if it’s helpful, the languages we have been using are C++ for the GUI, and Ruby and Erlang for the server components. (With the goal being to most likely phase out the Ruby server components). There is an existing overview of the Sonic Pi internals on the Sonic Pi wiki (though it is in sore need of an update, which I intend to do in the near future). See here:

@siimphh: these days the bulk of the discussion around development has been between the core team. There used to be a Sonic Pi room on Gitter (a web based instant messaging platform roughly similar to slack) but this eventually became difficult for us to use. So, other than forum topics here, comments on issues on the GitHub issue tracker, and the occasional comment or question on Twitter, there has not been a standard channel for communication around development with the community.

I understand and agree with your desire as a potential contributor to stay on the same general development path as the core team :+1: it is the same with us as individuals within the team :joy:
The challenge is that the development of Sonic Pi is largely informal and unstructured, so there is not much of a defined roadmap or process. Here are some thoughts I have had personally about facilitating community contributions in several (tiny) ways:

  • open up a publicly visible feature wishlist
  • create a public forum discussion topic specifically for discussing said wishlist (either here or on the GitHub discussion forum on the Sonic Pi GitHub page)
  • update the contributor/developer centric documentation in the Sonic Pi repo (and likely transfer a bunch of it to the GitHub wiki with the thought of making that more central and active for contributor/developer documentation - this includes things like links to the wishlist, READMEs for how to contribute, updated overviews of the Sonic Pi architecture, and anything else that may provide more useful information for community contribution)
  • where feasible, perhaps try to share work in progress on new features or major updates so that people can get an idea of the development choices that we are making.

I’d particularly love folks (core team, community contributor or otherwise) to provide feedback about my above ideas, and suggest any others you might have - it’s definitely not a simple matter to facilitate community contributions to a project like this (even more so in a manner that is easily manageable by Sam and a small team of volunteers) so anything we can do would be good.