Creating (monome) applications with Sonic Pi

Hi, over the past few months I did a couple of experiments creating Sonic Pi applications such as a live looper and a grid-based sample player. Along the way I experimented with TouchOSC and Open Stage Control. The idea is to somehow combine live coding with the use of selected applications (preferably self-programmed); on the one hand I very much like to create music from scratch and improvise, on the other I also like to live-record or use some preconfigured material to get things going or to add complexity.

As much as I like the mentioned software tools for creating custom MIDI/OSC controllers, I am drawn to somewhat opaque but very versatile hardware interfaces like the monome grid.

You can find a variety of applications for the monome, much of it done with Max/MSP. I consider my aforementioned experiments as preparation to start transferring some of these ideas to Sonic Pi. One of the widely known monome applications is mlr, a sample cutting platform. I felt inspired to combine my live looper with at least some of the functionality that I can see mlr has, or suspect it to have (I never had a chance to use it but saw quite a few videos; I still do not have a clear idea of what it does or can do when it comes to ‘recording’).

Furthermore there are also devices such as Norns (also a monome product, combining SuperCollider and Lua into a multifunctional music device) as well as the Organelle by Critter & Guitari, heading in a somewhat similar direction. My idea was that all this should also be possible with Sonic Pi (which, like the aforementioned platforms, is also built on SuperCollider).

So this is the background of mlq (a working title echoing the ‘naming convention’ for monome hard- and software); you can find the code and some quick and dirty documentation on GitHub. All of it is early beta: the documentation is rudimentary, only meant to give enough information to state my case. The interface is no more than a draft but sufficient to control the application. Remember, my plan is to use some sort of grid controller.

The problem with all this is that I ran into some serious performance issues quite early. I did a quick demo:

It shows, on the one hand, some of the features of mlq and, on the other hand, the limitations of the current implementation: you will hear some timing issues, finally resulting in Sonic Pi refusing to go on. This may well be due to my poor coding.

On the other hand, if this is not (or only to a small part) due to my code, I don’t see a chance to go on with that, and I will abandon my plan to create applications like this with Sonic Pi.

So I am very much interested in your opinion on this issue.

Hi Martin,

one of the goals of Sonic Pi is to enable people to develop applications like this - and using the monome is a specific interest of mine.

Are you able to reduce the performance issues you’re observing into something small and easily reproducible? It would be great to see if any internal improvements could be made to fix this.

Hi @sam,

well, I can try to remove the dynamic live loop generation stuff. Basically there are

  • a couple of live_loops listening to incoming OSC in real_time (about 8),
  • 4 live_loops each with relatively small runtime playing the sample slices,
  • 4 recording live_loops (runtime depending on configuration but presumably 4, 8 or 16 beats long) and
  • 4 live_loops playing the recorded stuff (of course same runtime as the recording loops) and
  • 2 live_loops for the metronome

Of course the load depends on the settings for the bpm and the quantisation of the sample slices (in the video demo bpm is set to a moderate 90 and sample slice quantisation is set to 0.25).

Would it make sense to create a static setup according to the list above?

Just try your best to find the smallest system that is both easy to reproduce on other machines and exhibits your problem. Then I can see if I can replicate it locally :slight_smile:

Hey,
I am a new user and I have a monome grid and an arc that I would love to use with Sonic Pi for classroom demonstrations of how to extend basic coding ideas into applications. I think this is a great start. Does it work with grids presently? I have a 128 varibright version.

Hi @shreeswifty,

I don’t own either of those devices to verify, but if their OSC interface works as advertised, you’ll be able to receive events (like button presses) and send messages to change device state (like toggle LEDs). Here’s more info on what this looks like in Sonic Pi: https://sonic-pi.net/tutorial.html#section-12
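To make that concrete, here is a minimal sketch of what grid I/O could look like. This is Sonic Pi code (it only runs inside Sonic Pi, not as plain Ruby), and it makes assumptions: serialosc is running with the default `/monome` prefix, it has been told to send events to Sonic Pi’s incoming OSC port, and `12002` is only a placeholder for the device port serialosc assigns.

```ruby
# Sonic Pi sketch -- runs inside Sonic Pi only, not as plain Ruby.
# Assumes serialosc is running, the grid uses the default "/monome"
# prefix, and serialosc has been pointed at Sonic Pi's OSC-in port
# (4560 by default). 12002 is a placeholder for the device's port.

use_osc "localhost", 12002        # send messages to the grid

live_loop :grid_keys do
  use_real_time
  # serialosc reports key events as /<prefix>/grid/key x y state
  x, y, s = sync "/osc*/monome/grid/key"
  osc "/monome/grid/led/set", x, y, s   # light the LED under the finger
end
```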

Hi @shreeswifty,
not yet. @perpetual_monday is right, it is based on OSC and so it works in principle. But: lacking a monome, I created my own interface with Open Stage Control. You would have to map all functionality to the monome grid (which involves e.g. recreating rotary controls with grid rows, mapping buttons to several functions, and much more, like converting push to toggle events); actually my plan is to buy a monome and do that; the above script is much like a finger exercise and proof of concept. But, as you can read, I am struggling with performance issues. If these can be solved it’ll be a worthwhile undertaking, and I hope some much better coders than me will join and help.

I have a 128 varibright monome grid and a new arc, and I would love to get them working with Sonic Pi. My students love this app, so I need to indulge it and get a few things popping.

I just finished adapting a hybrid monome-ish controller, a Livid Block controller, for mlr, based on code Gwen Coffey originally wrote for the monome. I collaborated with a Finnish programmer to get it working. So it really takes two people coding it together to work bugs out quickly.

Is there someone else on here that has created basic grid buffer code for a grid? I have the summer to code, so I’m excited to see if I can get this working.


Here’s the code we adapted, and it works perfectly with MIDI learn etc. Livid is kind of a MIDI/OSC hybrid, so I think a grid might be a next step.

I’m very interested in what you’re doing here and will try to help as I can.

It would be nice to understand the technical cause(s), but maybe you can work around the problems by restructuring your code.

I glanced through your code and wonder if you could consolidate a bunch of the loops. You have several loops all doing event handling; perhaps those could be consolidated into a single dispatch loop with a case statement. (I wonder if numerous use_real_time loops are problematic (…in real time).)
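To illustrate the shape of that idea (this is not code from mlq; the addresses and actions are invented for illustration): in plain Ruby, a single dispatch point switching on the OSC address could look like this. Inside Sonic Pi the equivalent would be one `live_loop` with `use_real_time` that syncs on a wildcard OSC path instead of eight separate listeners.

```ruby
# Sketch of a single dispatch point for OSC-style events (plain Ruby;
# the addresses and actions are made up for illustration).
def dispatch(addr, args)
  case addr
  when "/mlq/play"   then "play track #{args[0]}"    # trigger playback
  when "/mlq/record" then "record track #{args[0]}"  # arm recording
  when "/mlq/bpm"    then "bpm -> #{args[0]}"        # change tempo
  else                    "ignore #{addr}"           # unknown address
  end
end

puts dispatch("/mlq/play", [2])   # => play track 2
```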

It looks like you also automatically create 8 loops for the recording and playback, but some or all of those will be doing no-ops a lot of the time. You could create them only on demand to help ease the load, though I understand that having 4 rows playing simultaneously is a basic requirement that will need to be met. I wonder if you could also consolidate the playback and recording loops into one of each.

Hi @perpetual_monday,

thanks a lot for your comments. As I am slowly progressing towards producing somewhat decent code (emphasis on ‘slowly’) I am very grateful for analysis and concrete hints. Disclaimer: as I said, this is very much beta, and currently I am glad (and a bit proud) that I got it working at all. Having said that, of course you are right, there are surely quite a few options for optimisation. Hopefully I will have some time at the weekend to start doing that…

Nevertheless let me refer to some of your points in a more concrete way:

  • Consolidation of the listener loops: definitely. This is due to my lack of experience and coding practice; I don’t know how much performance the real_time loops need, but I suspect it could be quite a lot…
  • Automatic creation of loops for playback and recording: well, it took me quite some time to figure out a way so that this actually worked (the first version was in October '17);
    • one of the problems was the synchronisation of the recording and the playback. I finally arrived at a solution where all of these loops constantly run.
    • Actually it seems quite a challenge to me to start and stop loops within the context of such an application (referring to the idea to only use one recording loop). Right now I don’t have an idea how to do that (the problem is not stopping but starting; I will do some testing…). But you are right: even though all playback loops potentially run together, in the current implementation I only need one recording loop at a time. So this might be a place to start.
    • Nevertheless: I am kind of reluctant to start optimising the playback/recording mechanism, because actually these loops are not very performance-demanding (granted, though, that things add up and optimisation should take place wherever possible); they run for four or eight or even more beats. It seems to me the sample slicing loops are much more demanding, because they only sleep for 0.25 beats or even less. (My first version had 8 tracks for sample slicing, corresponding to the 8 rows of an 8 x 16 grid, but it did not include the live looper functionality.)
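On the ‘starting is the problem’ point, one possible sketch (Sonic Pi code, runs only inside Sonic Pi; the loop and cue names are invented): a live_loop that blocks on `sync` costs essentially nothing while idle, and a `cue` wakes it up, so a recorder could stay defined but dormant until it is needed:

```ruby
# Sonic Pi sketch -- runs inside Sonic Pi only. Names are invented;
# the body merely stands in for the actual recording work.
live_loop :recorder do
  info = sync :rec_start   # dormant here until someone cues :rec_start
  sample :loop_amen        # placeholder for the real recording logic
  sleep 4                  # ...running for the configured number of beats
end

# e.g. from the OSC listener, to kick off a recording on demand:
# cue :rec_start
```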

So far, and thanks again for your much appreciated input!

I agree - that looks like a good candidate to isolate and study (sample slicing at short intervals). If my math is correct, then dividing quarter beats by 6 at 90 BPM leaves a duration of ~28 ms. Enough time for a few tasks but little room for error / overhead. My understanding is that beat stretching and some of the other sample operations can be relatively expensive, so (if that’s true) it seems reasonable to expect loops in that mode to become unstable.
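The arithmetic can be checked directly (plain Ruby, just spelling out the numbers above):

```ruby
# Time budget per step at 90 BPM with quarter-beat slices divided by 6.
bpm   = 90
beat  = 60.0 / bpm          # seconds per beat (~0.667 s at 90 BPM)
slice = beat * 0.25         # one quarter-beat slice (~167 ms)
step  = slice / 6           # divided by 6 -> the per-step budget
puts (step * 1000).round(1) # => 27.8 (ms)
```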

OK, prior to more optimisations I reduced the endless number of OSC listener loops to just one :wink: Performance is much better now! Thanks @perpetual_monday for pointing me in this direction!
