Record and quantize MIDI notes

Hi Robin,

As you have put a lot of energy into programming and documenting, I wanted to let you know about my (non-)progress. The thing is that I am a bit disillusioned about what is currently possible to build on top of (or with) Sonic Pi for use in a live context. Actually, I got quite far integrating the Monome grid into an application which:

  • provides 4 tracks of sliced samples with quantised playback (a rough sketch of this idea follows the list)
  • can play back parts of those tracks (selected successions of slices) and also reverse the playback selectively
  • can play, stop, mute and control the volume of each track
  • on a separate page you can choose, for each track, a sample from a sample bank (directories on your hard drive) and control most of the parameters that SP’s sample command provides (pitch, rpitch, beat_stretch, rate, hpf, lpf and so on)
  • you can record audio input on 4 other tracks (the live looper functionality)
  • you can then route one of these live looper recordings as the input of a sample playback track, thus slicing the just-recorded sample and/or overlaying it with a sliced version, with all of the playback and manipulation features mentioned above
  • there is also a feedback loop, which records audio input and plays it back immediately with a configurable decay …
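To make the first item a bit more concrete, here is the rough (and untested) sketch mentioned above of what one such quantised slice-playback track could look like in plain Sonic Pi. It is not code from the actual application: the built-in :loop_amen stands in for a sample chosen from a bank, and the clock rate and slice count are arbitrary.

```ruby
use_bpm 120

# master clock that all tracks synchronise to
live_loop :grid do
  cue :step
  sleep 0.25                 # 16th-note grid
end

# one "track": each hit waits for the clock cue, so playback stays quantised
live_loop :track1 do
  sync :step
  slices = 8                                 # split the sample into 8 equal slices
  n      = tick % slices                     # step through the slices in order
  sample :loop_amen,
         start: n.to_f / slices,             # play only this slice ...
         finish: (n + 1.0) / slices,         # ... of the sample
         rate: 1, amp: 0.8                   # rate: -1 would reverse the slice
end
```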
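Similarly, the immediate playback with decay from the last item can at least be approximated in a couple of lines by routing the audio input through an echo; the real feedback loop in the application is of course more involved, and the option values here are arbitrary:

```ruby
# run the sound card input straight through an echo with a configurable decay
with_fx :echo, phase: 0.5, decay: 8 do    # decay: how long the repeats ring out
  live_audio :feedback_in, stereo: true   # monitor the audio input live
end
```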

On top of this functionality you could potentially live code: take the recorded material from SP’s cache, manipulate it via live coding, re-record it, add other live-coded material, control some of it via coding and the rest (the tedious bits, or the things too time-consuming to code in a live situation) via the Monome (or any other controller) … and in this context I would have liked to include not only audio but also MIDI recording (with quantization) … well, that was my vision.
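Just to illustrate the MIDI part of that vision, here is a rough, untested sketch of how recording and quantising incoming notes could look in Sonic Pi. The MIDI cue path and the :piano playback are only placeholders (nothing from the actual application); incoming notes are simply stamped with the virtual time, rounded to a 16th-note grid:

```ruby
use_bpm 60               # at 60 bpm one beat is one second, so vt counts beats

set :recorded, []        # quantised [beat, note, velocity] events in Time State

# record: stamp every incoming note with the current virtual time,
# rounded to the nearest 16th note
live_loop :midi_rec do
  note, vel = sync "/midi:your_controller:1:1/note_on"   # placeholder device name
  q = (vt * 4).round / 4.0
  set :recorded, get[:recorded] + [[q, note, vel]]
end

# playback: replay the captured phrase once, relative to its first note
live_loop :midi_play do
  sync :play_back                          # trigger this with `cue :play_back`
  events = get[:recorded]
  t0 = events.empty? ? 0 : events.first[0]
  events.each do |t, n, v|
    at t - t0 do                           # keep the quantised timing
      synth :piano, note: n, amp: v / 127.0
    end
  end
end
```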

Well, I did put a lot of time into that and got quite far; the old version has been on GitHub for quite a while (and this video shows some of the functionality). On my hard drive there is a much-updated version, which I have not yet finished and pushed. I stopped working on it because SP’s performance does not support this application, at least not in the way I am able to code it. Things have become much better since Sam and others addressed the memory leak issue, but I have reached a level of code complexity, relative to the available performance, where it does not make any sense to add more features: the performance and stability of the application are already sketchy and not really suitable for a live performance situation (let alone dreaming of live coding on top of that, which was my plan).

As I now have some experience with Norns software (running on Fates, a DIY platform based on the Raspberry Pi 3/4), I know that quite complex and impressive grid-based applications (sequencers, samplers, synths and so on) using SuperCollider are possible and run very smoothly, so I am sure it is not a hardware question. It could be my coding, but I believe that is only partly the case. So for the time being I am giving up on coding applications with SP and will just stick to live coding, while I try to integrate other platforms and gear to pursue my vision.

The upside is that I have learned a lot about coding :wink:.