Record and quantize MIDI notes

Hi,

I am trying to figure out if the following is possible:

  • get MIDI note input for the duration of e.g. 8 or 16 beats
  • replay these notes quantized after ‘recording’

I have no real idea how and whether this is possible. My first try is less than a proof of concept and lacks basic features such as rests:

live_loop :get_midi do
  use_real_time # listen in real time, not quantised to the beat
  notes = []
  4.times do
    note, _vel = sync "/midi/arturia_minilab_mkii_midi_1/1/1/note_on"
    notes.push(note)
    sleep 0.125 # crude rate limit between captured notes
  end
  n = notes.ring
  set :my_notes, n
  puts "Ring: #{n}"
end

live_loop :play_midi do
  notes = get(:my_notes)
  synth :piano, note: notes.tick if notes # guard until something is recorded
  sleep 0.25
end

In my imagination I could record MIDI input for a certain duration at a rather high time resolution (such as 16 possible slots per beat) and replay these notes after the input phase snapped to a coarser time grid, e.g. 16th notes.

Something like this:

# MIDI input loop / recording
# time grid based on 0.0625 sleep time
|.  :c .  .  .  .  .  :g .  .  .  .  .  :a .  .  |


# MIDI output loop / replay
# time grid based on 0.25 sleep time
|:c           .           :g           :a           |
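
In code, I imagine the snapping step itself could be as simple as rounding each recorded beat offset to the nearest grid step (just a sketch; quantise is a made-up helper name):

# snap a beat offset to the nearest grid step;
# grid = steps per beat, e.g. 4 => sixteenth-note grid
define :quantise do |t, grid|
  (t * grid).round / grid.to_f
end

puts quantise(0.1875, 4) # => 0.25
puts quantise(1.02, 4)   # => 1.0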

I would appreciate any hints. I am also not sure that my idea as such is well-conceived.

Martin


Hi Martin,

Your idea is well-conceived :slight_smile:

In fact, there’s no need to explicitly record anything as Sonic Pi already automatically records the most recent events for you and stores them in the time state system.

However, where Sonic Pi falls short is in providing you with simple ways to work with that automatically recorded history of events.
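
For example, you can already read back the most recent value cued on a path with get (a tiny illustration; the exact path depends on your device):

note, vel = get "/midi/arturia_minilab_mkii_midi_1/1/1/note_on"
puts note, vel # nil until a note_on has arrived on that path

What you can't easily do yet is walk further back through that history.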

Therefore, would it be possible to sketch the kind of code you’d like to be able to write? We could use that as a starting point for a discussion. I think it’s much easier to agree on what the end point would be for you, then figure out together how to get there, and then verify whether the implementation(s) we come up with actually do help you :slight_smile:


I often think about this use case as well. I’m not sure how best to indicate a “start” point in the recording though. Maybe just the first note? Or I could see giving the recording loop a duration that always records/overwrites new events, similar to how a GarageBand loop works in recording mode.

@samaaron is there any good precedent we could mirror from live audio recording/looping? I still haven’t quite figured out how to work the audio buffer feature.
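
For the “first note” variant I picture something roughly like this (an untested sketch; the wildcard path and the 8-beat window are just placeholders):

live_loop :rec do
  use_real_time
  n, v = sync "/midi*/note_on"   # first note defines beat 0
  t0 = vt                        # virtual time of that first note
  events = [[0, n, v]]
  while vt - t0 < 8              # capture roughly 8 beats of input
    n, v = sync "/midi*/note_on" # blocking, so a late note can overrun the window
    events << [vt - t0, n, v]
  end
  set :events, events
end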

@samaaron, @matschaffer

Hi Sam, hi Mat,

thanks for the encouraging feedback! At the moment I don’t think I can flesh this out with more code. Basically I am referring to functionality similar to my Live Looper (which at the moment is no more than a proof of concept and could definitely be coded much more elegantly, but it works):

  • you have recording tracks available (currently 4)
  • you can set a loop length for each track (= length of the recording buffer)
  • you can arm a track for recording, which involves a metronome that tells you where the loop starts (see the sketch after this list)
  • you can then start the recording; the metronome indicates when the recording will actually start (i.e. after the chosen loop has finished its current run). Right now it counts in half the loop length in advance and also tells you when the application stops recording. The TouchOSC interface also gives visual feedback.
  • the replay of the loop starts immediately after the recording
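
The metronome/arming mechanics boil down to something like this (a heavily simplified sketch, not the actual Live Looper code; loop_len and the cue name are made up):

loop_len = 8 # beats per loop

live_loop :metro do
  cue :loop_start if tick % loop_len == 0 # marks each loop boundary
  sample :elec_tick
  sleep 1
end

# an armed recorder would then sync :loop_start, record for
# loop_len beats and start replaying right after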

I would like to have the same using MIDI input.

I do hope this information is fruitful for some further discussion.

My project Sonic Pi 3 Record/Player using TouchOSC worked along similar lines, without the quantisation. Also, I used OSC messages to send the note info rather than MIDI.
The way I calculated the play times for the notes may be of interest.
I recorded note input in real time, with an audible metronome to assist timing accuracy. I could then replay it at the same tempo or faster. I could also play a further part at the same time to add some harmony. I recorded some envelope and volume information as well.

It would be interesting to play around with something similar using external MIDI input. It would be great to get the timing info back from the time state, since it is already there, rather than having to calculate timings, but this would require some digging around in the SP code to make that info available and usable.
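
In outline, the calculation amounts to storing each note with its beat offset from the start and then scheduling the offsets, scaled by a speed factor. Something like this captures the idea (a simplified sketch rather than the actual project code):

# events = list of [beat_offset, note] pairs captured earlier
define :replay do |events, speed = 1.0|
  times = events.map { |e| e[0] / speed } # speed > 1 plays faster
  notes = events.map { |e| e[1] }
  at times, notes do |n|
    synth :piano, note: n
  end
end

replay [[0, :c4], [0.5, :e4], [1.0, :g4]], 2.0 # twice as fast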

Thanks @robin.newman, I’ll try to have a look at your code. It shouldn’t be so difficult to use MIDI input (instead of OSC).

Concerning the quantization: I am really interested in this because it seems quite difficult to play live with accurate timing. (I imagine some sort of recording and looping function beyond what’s already there would make a great SP extension.)

Besides that, I found that there is some latency involved in my Live Looper. I recently found out that it might be due to the controller I used (in conjunction with a soft synth). I recently had the opportunity to use a proper hardware synth and, as far as I could tell, there was more or less no latency.

I probably will not be able to dive into the Sonic Pi code. I am trying to catch up with you guys and gradually become at least a reasonably decent coder, but I am still more focused on the musical side, so my progress in coding (beyond the Sonic Pi language set) is very slow.

Hi @samaaron,

I know this is sort of an ancient thread, but the idea is still relevant to me. Also, I have never delivered a sketch or some pseudo code to illustrate what I was thinking of, so I’ll catch up on that with the following.

I am sure that my sketch contains loads of conceptual flaws and impossibilities due to my lack of proficiency, but on the other hand it is just a sketch and might help to clarify the ideas.

Basically I imagine the possibility to capture incoming MIDI data just in time (like notes played on a keyboard); these might not be in (musical) time (depending on how well trained the player is). As I am also quite fond of the idea of using this input while performing live, I imagine being able to configure the number of beats or bars for which to record incoming input.

I am not really convinced that my sketch is well conceived in this respect, but I just started from the following idea: if the listening loop runs for 2 bars, and another loop contains a sync statement in the loop header and also runs for 2 bars, it will start 1 cycle after the listening loop (right?). So after the listening loop has delivered the incoming MIDI events, the second loop will play them (even if this is correct, I am sure the devil is in the detail).

So the idea is that a second loop starts synced with the listening loop but doesn’t play the events in recorded real time, but quantised. For that I imagine some kind of quantisation feature, which I have injected into the ‘at’ block as additional parameters. I am aware that I took the absolute event times (which you can see in the debug log) and secretly converted these to something that matches the fact that ‘at’ starts at 0 (I hope I made myself clear on that); so this will definitely not be as easy as I am imagining. Anyhow, the quantise function is controlled by two parameters: ‘grid’, meaning: quantise to what?, and ‘amount’, which is a sort of humanising factor. So here is my sketch:

live_loop :get_midi do
  use_real_time # listen in real time
  
  midi_pitch = []
  midi_vel = []
  midi_time = []
  
  # sync will get pitch, velocity and time of the midi event
  # (the time is part of the wish - a real sync only returns pitch and velocity)
  p, v, t = sync "/midi*/note_on"
  midi_pitch.push(p)
  midi_vel.push(v)
  midi_time.push(t)
  
  # this is rubbish, see next remark - but should be
  # imagined as a set of buffers storing all incoming midi events
  set :midi_pitch, midi_pitch.ring
  set :midi_vel, midi_vel.ring
  set :midi_time, midi_time.ring
  
  # this obviously does not work but should
  # indicate that somehow :get_midi should listen
  # for a certain number of beats (here 2 bars of a 4/4 beat)
  # and fill the midi data buffers with everything that
  # happened during that time
  sleep 8
  
end

live_loop :play_midi, sync: :get_midi do
  # I imagine something like 'at' with additional parameters for quantisation.
  # 'grid' = 'quantise to', where the number divides the beat, so
  # 4 = 1/4 beat = sixteenth notes, and e.g. 3 would be eighth-note triplets, and so on.
  # 'amount' would be the probability that an event gets quantised.
  at (get(:midi_time).tick, grid: 4, amount: 1) do
    midi note: get(:midi_pitch).look, vel: get(:midi_vel).look
  end
end
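
For comparison, here is a rough approximation of the grid/amount idea that should run on current Sonic Pi, with the quantisation done in plain Ruby before a normal at call (quantise_times is a made-up helper, and it assumes the buffers hold beat offsets starting at 0 within the recorded window):

define :quantise_times do |times, grid, amount|
  times.map do |t|
    rand < amount ? (t * grid).round / grid.to_f : t
  end
end

live_loop :play_quantised, sync: :get_midi do
  times = quantise_times(get(:midi_time), 4, 1.0)
  at times do |_t, i| # arity-2 block: at yields time and index
    midi get(:midi_pitch)[i], vel: get(:midi_vel)[i]
  end
  sleep 8
end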

I just started playing with Sonic Pi and this was one of the first things I wanted to do: I have a MIDI keyboard and I’d love to get the code, in Sonic Pi format, for a sequence I played. Slowly trying to figure this one out; time to dust off the keyboard.


I had another look at this and modified my keyboard MIDI player to record and subsequently replay MIDI in Sonic Pi. It has all the data. Hopefully I can try to modify the data to quantise it.
code is here

see video at


Hi Robin, just a quick note to let you know that I am looking forward to exploring your code (very much appreciated!). Currently I have a lot of work, so there is limited time for it. But eventually I will…

That’s fine Martin. I did do a bit more on it and added some crude quantisation. I then tried processing a longer MIDI file with it, and it eventually fell over through lack of resources: it couldn’t generate enough threads. I also set it up to record the input into a JSON file which could be replayed, using existing code from my record/player project, which was not so dissimilar.
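
If I revisit it, one way around the thread problem might be to replay each recorded part in a single thread, sleeping the gaps between successive events, instead of scheduling every note in its own thread (an untested sketch; replay_seq is a made-up name):

# replay [beat_offset, note] pairs in one thread, sleeping the gaps
define :replay_seq do |events| # events assumed sorted by offset
  in_thread do
    last = 0
    events.each do |t, n|
      sleep t - last
      midi n
      last = t
    end
  end
end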

Hi Robin,

as you put a lot of energy into programming and documenting it, I wanted to let you know about my (non-)progress. The thing is that I am a bit disillusioned about what is currently possible to build on top of, or with, Sonic Pi (for use in a live context). Actually I got quite far integrating the Monome grid into an application which

  • provides 4 tracks of sliced samples with quantised playback
  • can play back parts (selected slice successions) of those tracks and also reverse the playback selectively
  • can play, stop, mute and control the volume of each track
  • on a separate page you can choose a sample from a sample bank (directories on your hard drive) and control most parameters which the sample command in SP provides (pitch, rpitch, beat_stretch, rate, hpf, lpf and so on) for each track
  • you can record audio input on 4 other tracks (the live looper functionality)
  • you can then route one of these live looper recordings as input for a sample playback track, thus slicing the just-recorded sample and/or overlaying the recorded sample with a sliced version, including all the mentioned playback and manipulation features
  • there is also a feedback loop, recording audio input and playing it back immediately with configurable decay …

On top of this functionality you could potentially live code: take the recorded material from SP’s cache, manipulate it via live coding, re-record it, add other live-coded material, control bits of that via code, and leave the tedious or time-consuming-to-code parts of a live situation to the Monome (or any other controller) … and in this context I would have liked to include not only audio but also MIDI recording (with quantisation) … well, that was my vision.

Well, I did put a lot of time into that and got quite far; the old version has been on GitHub for quite a while (and this video shows some of the functionality). On my hard drive is a much updated version, which I have not yet finished and pushed. I stopped working on it because SP’s performance does not support this application - at least not in the way I am able to code it. Things have become much better since Sam and others addressed the memory leak issue, but I have reached a level of code complexity where it makes no sense to add more features: the performance and stability of the application are already sketchy and not really fit for a live performance situation (let alone dreaming of live coding on top of that, which was my plan).

As I now have some experience with Norns software (running on Fates, a DIY platform based on a Raspberry Pi 3/4), I know that quite complex and impressive grid-based applications (sequencers, samplers, synths and so on) using SuperCollider are possible and run very smoothly, so I am sure it is not a hardware question. It could be my coding, but I believe that is only partly the case. So for the time being I am giving up on building applications with SP and will stick to live coding, while I try to integrate other platforms and gear to pursue my vision.

The upside is that I have learned a lot about coding :wink: .