As a kind of follow-up to some of my comments (on the Google Group) about latency on the Raspberry Pi…
Finally got my Blokas pisound HAT yesterday (it left Lithuania on August 3). As hoped, it does decrease the latency quite radically. And, of course, it makes for a pretty clean sound, without the annoying background noise from the Pi’s analog out.
Was able to try all my crazy scripts meant for the WX-11, such as this counter-motion one or this other one with ring modulation controlled by lip pressure. Completely playable, unlike the HAT-less version. So, in a way, this is my reporting a solved problem.
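For anyone curious, a rough sketch of the counter-motion idea (the pivot note, synth, and MIDI path are arbitrary here; this is a guess in the spirit of the actual script, not a reproduction of it):

```
# Hedged sketch, not the actual script: mirror each incoming MIDI note
# around an assumed pivot (middle C, MIDI 60) so a second voice moves
# in the opposite direction.
live_loop :counter_motion do
  use_real_time
  note, vel = sync "/midi/*/note_on"   # wildcard matches any device/channel
  synth :tri, note: note, amp: vel / 127.0
  synth :tri, note: 120 - note, amp: vel / 254.0   # mirrored, slightly quieter
end
```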
Something a bit funny (which might require a bit more investigation) is that there seems to be a difference between two ways to use the pisound output with Sonic Pi 3 on Stretch. First used the method described in the Blokas FAQ: change `scsynthexternal.rb` (which moved but is easy enough to find) to point to `boot_server_linux` instead of `boot_server_raspberry_pi`, and then activate JACK on pisound (through `qjackctl`) before launching Sonic Pi. That method worked really well in terms of making the latency unnoticeable.
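For the record, the edit amounts to something like this; treat it as an approximate outline, since the exact dispatch code in `scsynthexternal.rb` varies between Sonic Pi versions:

```
# Approximate outline of the boot dispatch in scsynthexternal.rb; the
# method names are real, but the surrounding code differs by version.
def boot
  case os
  when :raspberry
    boot_server_linux      # the FAQ edit: was boot_server_raspberry_pi
  when :linux
    boot_server_linux
  when :osx
    boot_server_osx
  when :windows
    boot_server_windows
  end
end
```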
After using this method, wondered if the changes in both Sonic Pi and Stretch had made this workaround unnecessary. So, reverted `scsynthexternal.rb` to use `boot_server_raspberry_pi` and launched Sonic Pi without doing anything with JACK. Lo and behold, it works. But, for some reason (and it might be my perception), it sounds like there’s a tiny bit of latency. Nothing to make my scripts unplayable, but there was something there. Will experiment again, but it’s part of my overall impression of using pisound and Sonic Pi together.
What’s interesting to me about latency is that it’s not typically something which bothers me. Sounds like my threshold is pretty high. For instance, haven’t noticed much latency with Bluetooth devices, including headphones and MIDI controllers. Even with, say, a Lightpad Block and a Philips headset (SHB5500), there hasn’t been enough latency to bother me, which sounds really strange as the latency adds up from both the input and output sides. But with my WX-11 wind controller driving Sonic Pi on a “plain” Raspberry Pi 3, the latency was really just too much, even with a very simple script.
As always, this is probably too much for anyone here. But my notion is that it could eventually help other people. So, maybe some people think this is on the verge of OT. If so, sorry. It’s just that, in my experience, not thinking too much about how on-topic something is (but thinking about things being `in_thread`) is part of the reason it’s so neat to use the Raspberry Pi.
Since v3, there is no need to modify `scsynthexternal.rb` to get the Blokas board working. This is because Sonic Pi no longer force-resets jackd when booting on a Raspberry Pi (as it did previously). Instead, if jackd is already running, it will simply use that and assume you have set it up correctly yourself. This of course has the negative impact of forcing users to understand how to effectively configure jackd for their specific sound setups for the lowest possible latencies - a non-trivial activity.
Unfortunately I no longer have the resources to spend on getting all possible RPi audio-setup configurations into Sonic Pi itself so users don’t need to worry about this. It would be a large effort and require constant maintenance. In fact, Raspberry Pi themselves seem to be pushing this attitude too: for v3 they specifically requested that I remove the generic audio selection part of the GUI (the bit that used to allow users to switch between HDMI and headphones).
I’m hoping that in_thread can be a place to start discussing and documenting at least the common audio setups for all operating systems and figuring out ways to get the latency down where necessary. It turns out that if you’re doing standard live coding or composition, audio latency makes essentially no difference. However, if you’re working with live inputs such as MIDI/OSC or audio then it really does make a difference.
Looking at the differences between the two booting routines (Raspberry Pi vs Linux), there is a small difference in the boot flags for scsynth which might be affecting relative latency. The Raspberry Pi routine sets the internal block size to be pretty sizeable - 128 - which limits crackling due to xruns yet increases latency. It appears that, for lower-powered CPUs at least, you have to trade between improved synthesis abilities and low latencies. Feel free to experiment with this setting yourself - the values just need to be powers of 2. The line can be found here: https://github.com/samaaron/sonic-pi/blob/a96d06d0f864cc5d00a61d420cbe165f12da761e/app/server/sonicpi/lib/sonicpi/scsynthexternal.rb#L359
(You can find this file within /opt/sonic-pi on your Pi).
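To give a rough idea of what that setting amounts to (the port number and argument list below are purely illustrative - see the linked line for the real invocation):

```
# Illustrative sketch only: scsynth's -z flag sets the internal block
# size in samples and must be a power of 2. Smaller values lower
# latency but raise the risk of xruns on a Pi 3.
port = 4556          # assumed port for illustration; Sonic Pi picks its own
block_size = 128     # the Raspberry Pi default mentioned above; try 64 or 32
puts "scsynth -u #{port} -z #{block_size}"
```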
One of the reasons v3 for Windows hasn’t been released yet is that I’m not happy with the latency situation over there. Windows seems to have a lot of variability in audio latencies, due to both a plethora of different audio hardware and a range of different audio APIs. The defaults seem to be pretty dire in many cases too. So until I’ve figured out a nice way for users to easily tweak Sonic Pi to match their audio config and get the lowest latency their system is capable of delivering, I’m going to keep designing and hacking…
All this to say that understanding and working with latency is very important to me and to Sonic Pi.
’Figures that it might be the case, but still went with the official documentation (which probably hasn’t been updated since the Stretch release). The Blokas community surely knows, but, most likely, most people set up their pisound HATs before the SP3 release.
Unfortunately I no longer have the resources to spend on getting all possible RPi audio-setup configurations into Sonic Pi itself so users don’t need to worry about this.
Unfortunate indeed. The Raspberry Pi Foundation would likely benefit from those efforts.
At the same time, your own focus should lie elsewhere. If, say, some powerful organization were to finally realize the benefits of your work and manage to fund it, getting somebody else to deal with these diverse configurations and letting you focus on other issues would be quite logical.
Raspberry Pi themselves seem to be pushing this attitude too: for v3 they specifically requested that I remove the generic audio selection part of the GUI
Ha! So, that’s what happened! Was wondering about this. Not that it was so useful to switch the output in the SPi GUI, but its absence threw me off while troubleshooting.
I’m hoping that in_thread can be a place to start discussing and documenting at least the common audio setups for all operating systems and figuring out ways to get the latency down where necessary.
Nice thinking! And, in fact, this is precisely why software-development organizations make use of forums like this one. Will keep this in mind while investigating.
It turns out that if you’re doing standard live coding or composition, audio latency makes essentially no difference. However, if you’re working with live inputs such as MIDI/OSC or audio then it really does make a difference.
Quite so! Most DAWs have a buffer-size setting which can be lowered to decrease latency for live use (and raised to decrease artefacts when no live input is used).
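Sonic Pi exposes a similar trade-off through its schedule-ahead time; a minimal sketch (the 0.2 value is just an arbitrary example):

```
# Shrinking the scheduling buffer trades safety for responsiveness,
# much like lowering a DAW's audio buffer. The default is 0.5 s.
set_sched_ahead_time! 0.2   # arbitrary example; too low risks timing warnings
live_loop :tick do
  sample :elec_blip
  sleep 0.5
end
```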
Looking at the differences between the two booting routines (Raspberry Pi vs Linux), there is a small difference in the boot flags for scsynth which might be affecting relative latency.
Really nice to know that it may not just be my imagination!
Feel free to experiment with this setting yourself - the values just need to be powers of 2. The line can be found here: https://github.com/samaaron/sonic-pi/blob/a96d06d0f864cc5d00a61d420cbe165f12da761e/app/server/sonicpi/lib/sonicpi/scsynthexternal.rb#L359
(You can find this file within /opt/sonic-pi on your Pi).
Will do so! Thanks a lot for that tip!
You know, there are times when my focus is on musicking away, without wanting to care so much about the tool. On those occasions, something like this can decrease my motivation to play. Other times, though, it’s fun to experiment with things like buffer sizes… or the code needed to play polyphonically.
Experimenting with the Blokas is more about the latter, at least in the beginning. There’s something quite motivating in having something which works decently well out of the box but still affords a lot of complexity in terms of playing with the code.
So, won’t take this as an assignment for myself, but it’ll be among my experiments in the near future. Since it’s a long weekend (Canadian Thanksgiving), it might happen very soon.
Again, thanks for this help!
Ok, I feel like I need to get in on this thread. I wasn’t sure if I should be asking PiSound questions here, but I have a lot of them. So… 128??? That’s like a unicorn figure for me. I have it set up at 48000 Hz with a 2048-frame buffer and 3 periods and still get xruns. I thought maybe I was waiting on some updates because it’s all so new, but it sounds like y’all have SP running amazingly.
I have been assuming that the difficulty I have with keeping MIDI stable is due to the newness of it all. MIDI out seems to require a lot more resources than synthesis, which seems odd to me. I was noticing early on that I can get about 45 mins of MIDI out of SP. Beyond that, the “slightly behind” message turns into the red message, and if I stop and restart the buffer, nothing will play correctly until I restart Sonic Pi altogether. I was trying clear and reset commands, but only a restart seemed to resolve it. While I can optimize one or the other to not crackle (which I care a lot more about than latency), it seems like mixing MIDI and SC is more than I can handle at max buffers, even with a very rudimentary pattern.
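For concreteness, the kind of rudimentary pattern I mean can be as small as this (a hypothetical repro sketch, not my actual track):

```
# Hypothetical repro sketch: a bare-bones MIDI-out loop of the sort
# described above.
live_loop :midi_out_test do
  midi :e3, sustain: 0.1   # note_on, then note_off 0.1 s later, to all connected ports
  sleep 0.25
end
```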
I had no idea that I didn’t need to do the PiSound workaround. My PiSound came about 3 days after 3.0 came out, and (I later realized) the day that Stretch came out, so I’ve been very confused about which advice to follow to get the PiSound stable. I have no idea if I should be rolling my own RT kernel or if that’s done by the initial script, or what other steps I could be taking to get the most performance out of my Pi/PiSound.
Also, any idea why, after plugging in a USB MIDI device a few days ago, I now have to manually connect the MIDI Through port to the PiSound port inside of jackd? I guess, as per the above, I don’t need to deal with jackd manually, but if I’m getting no MIDI response without intervention in Raspberry Pi mode, I should still get an idea of how to do that. I kinda learned jackd at its most basic, but this is the first time I’ve been seriously routing a studio with it, and I find I still have a lot to learn.
Got a somewhat similar experience. Haven’t investigated that fully, but it does sound like MIDI in SP is pretty demanding in terms of timing (while audio out has never been an issue for me). Haven’t done much with audio in (and have yet to try it with the pisound HAT). One would expect that to be a lot more demanding than MIDI I/O.
Something interesting, to me, is that the performance doesn’t tend to change much with the complexity of the script. Having as much latency with the example script as with a complex one with multiple FX applied to the sound, modulated by CC#2 and pitchbend (a sketch of that kind of script follows below). Also, it doesn’t sound like the CPU is maxing out or anything like that. So maybe memory is the bottleneck? In my experience, that’s been the main limitation on the Raspberry Pi 3, including with things like Chromium or Firefox (on Ubuntu-MATE). Wonder if the Raspberry Pi 4 will have more RAM.
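Here’s that sketch: a hedged approximation in the spirit of the script, not the script itself (the MIDI path wildcards, scaling, and synth choice are all assumptions):

```
# Hedged approximation, not the actual script: breath (CC#2) modulating
# a ring-mod frequency while notes arrive from the WX-11.
live_loop :breath do
  use_real_time
  cc_num, value = sync "/midi/*/control_change"
  set :breath, value / 127.0 if cc_num == 2   # CC#2 = breath controller
end

live_loop :notes do
  use_real_time
  note, vel = sync "/midi/*/note_on"
  breath = get(:breath) || 0                  # 0 until the first CC#2 arrives
  with_fx :ring_mod, freq: 30 + breath * 60 do
    synth :saw, note: note, amp: vel / 127.0, release: 0.2
  end
end
```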
Can’t really provide much advice about the best pisound-based setup, having had limited experience with it. But using the documented tweak (running the Linux boot server instead of the RasPi one) worked really well with my MIDI devices (Yamaha WX-11 wind controller and Alesis Vmini; have yet to try it with my MPE devices). Apart from the tweak Sam suggests, don’t think much needs to be done for the pisound HAT to work well with Sonic Pi.
Would you care to share a script which produces significant latency on your system? Might check things with my own setup.
Well, I’ve had some time to change a few things today, and I have a marked improvement. I just tried playing the track that was really giving me issues, and I’m getting nominal xruns now. I switched back to the Raspberry Pi server boot. Now, I’m still a little unclear on how much it looks at jackd to work out settings in that mode. I did have to start up qjackctl to get the PiSound audio in connected to SuperCollider audio in, but other than that, everything else was working without me having to start jackd. Is there some way to do that without opening qjackctl? Assuming it’s following the current jackd settings, I now have 192k audio running at a buffer size of 256, and I’m getting about 1/10th the number of xruns: about 11 in a minute vs the 100 or so I was getting.
So I’m at least at the point where OP is now: I have pretty hefty MIDI latency despite the reported latency being only 4 ms. I feel like right about here is where I need to understand `use_sched_ahead_time` and `use_real_time`, and anything else that may come into play when trying to align MIDI with SuperCollider. …I just tried setting my schedule-ahead time to 0, and now I have it pretty well lined up, but I’m getting “can’t keep up” errors. Closer.
@kniknoo Please read the MIDI in section in the built-in tutorial (11.1) - particularly the section on removing latency:
Removing Latency
Before we can remove the pause, we need to know why it’s there. In order to keep all the synths and FX well-timed across a variety of differently capable CPUs, Sonic Pi schedules the audio in advance by 0.5s by default. (Note that this added latency can be configured via the fns `set_sched_ahead_time!` and `use_sched_ahead_time`.) This 0.5s latency is being added to our `:piano` synth triggers as it is added to all synths triggered by Sonic Pi. Typically we really want this added latency as it means all synths will be well timed. However, this only makes sense for synths triggered by code using `play` and `sleep`. In this case, we’re actually triggering the `:piano` synth with our external MIDI device and therefore don’t want Sonic Pi to control the timing for us. We can turn off this latency with the command `use_real_time` which disables the latency for the current thread. This means you can use real time mode for live loops that have their timing controlled by syncing with external devices, and keep the default latency for all other live loops. Let’s see:

```
live_loop :midi_piano do
  use_real_time
  note, velocity = sync "/midi/nanokey2_keyboard/0/1/note_on"
  synth :piano, note: note, amp: velocity / 127.0
end
```
Update your code to match the code above and hit Run again. Now we have a low latency piano with variable velocity coded in just 5 lines. Wasn’t that easy!
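One detail worth underlining from that passage: `use_real_time` only affects the current thread, so other loops can keep the default scheduling. A small illustration (the metronome loop is mine, not part of the tutorial):

```
# This loop keeps the default 0.5 s schedule-ahead buffer, so it stays
# well-timed even while :midi_piano (above) runs in real time.
live_loop :metronome do
  sample :drum_cymbal_closed
  sleep 1
end
```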