Hi! Hello from Australia! I really love Sonic Pi, but our live coding community is very small and dispersed, so I have been ruminating on these questions for a while - any help is greatly appreciated!
I’ve been diving deep into the forum to find some answers but I’m having a little trouble, so I thought I’d post here - feel free to reply with links to other posts if I’ve missed them!
Context: I have my first live coding performance coming up in a little less than a week (which is very exciting). I am a vocalist and will be doing an improv set with Sonic Pi accompanying me, with my vocals running through :live_audio. I will be performing genres spanning ambient, experimental and R&B. I’ve been using Sonic Pi for a couple of years now, but my technical coding knowledge is not super deep - I did one foundational subject at uni - so I’d love it if you could please explain any code in your responses where possible!
Exact setup/what I hope to achieve:
*my laptop is a MacBook Pro M1 with 16GB RAM
*I have a Focusrite Scarlett 8i6 interface which will connect my laptop to my microphone (Shure SM58) and MIDI keyboard (Arturia MiniLab MkII)
*my vocals will be processed by Sonic Pi with fx on the :live_audio (this works fine, BUT I would love any tips on transitioning between songs without :live_audio cutting out, and on avoiding feedback in a live setting - see the sketch just after this list)
*Sonic Pi will be receiving MIDI messages from my keyboard (this works well)
*connection to a projector in the venue. Q: how do people play visuals so seamlessly behind the program without the taskbar/YouTube window showing? I know how to make SP transparent, but if I fullscreen it I haven’t worked out how to fullscreen the YouTube video graphics behind it at the same time
*the venue has a sound desk for mixing, speakers and foldback speakers for me to hear myself
- **my main question/worry is: how do I send the audio from each loop to the sound desk to be mixed, OR to a hardware mixer to mix myself? When live coding at events/algoraves, how do people connect to the venue’s sound system? Is it accepted practice to mix as you code, e.g. via amp parameters?** (Although I am worried mixing myself may be difficult, as the audience’s experience will be different from what I hear through the foldback speakers…)
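To show where I’m at with the :live_audio point above, here’s roughly what I’m running now. My thinking is that if the fx nodes stay alive and I `control` them rather than re-evaluating, :live_audio should never cut out between songs - but please correct me if this is the wrong approach! (The :vocals name, the CC numbers and the fx choices are just what I’ve been experimenting with.)

```ruby
# keep the mic running inside controllable fx so it never cuts out
with_fx :reverb, room: 0.8, mix: 0.3 do |rev|
  with_fx :lpf, cutoff: 130 do |filt|
    live_audio :vocals, input: 1  # mic on input 1 of the interface

    # a loop in the same scope can see rev/filt and tweak them live
    # from MIDI CC messages, without ever restarting :live_audio
    live_loop :vocal_fx do
      use_real_time
      cc = sync "/midi*/control_change"              # [cc_number, value]
      control filt, cutoff: cc[1] if cc[0] == 1      # knob 1 -> cutoff
      control rev, mix: cc[1] / 127.0 if cc[0] == 2  # knob 2 -> reverb mix
    end
  end
end
```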
Potential solutions I’ve been made aware of from in_thread or have been thinking about (but may not work too effectively):
- using OSC with something like TouchOSC to program my own buttons/faders for amp and cutoff - awesome! but how will the sound engineer have access to this? Is there potentially an easier tool (time permitting) with simpler code perhaps? (my understanding of the Sonic Pi side is sketched below)
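For reference, this is how I understand the Sonic Pi side of a TouchOSC fader would work, assuming TouchOSC sends to Sonic Pi’s default OSC port (4560) and the fader sits at the address /1/fader1 - the :master_amp and :drums names are just placeholders:

```ruby
# read a TouchOSC fader and use it as a master volume
live_loop :osc_listener do
  use_real_time
  vol = sync "/osc*/1/fader1"    # blocks until the next fader message
  set :master_amp, vol[0]        # the fader value is the first argument
end

live_loop :drums do
  sample :bd_haus, amp: get(:master_amp) || 1.0  # 1.0 until a message arrives
  sleep 0.5
end
```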
- could I use a hardware mixer with knobs and aux channels (e.g. 10 channels), connected to my Scarlett interface, with Sonic Pi sending audio out via sound_out? if so, amazingggg - how would I specify to Sonic Pi the paths/names and which channels to send sound to? (something like the sketch below)
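From reading the fx docs, I think `:sound_out` / `:sound_out_stereo` are what I’d use - here’s a rough sketch of what I’m imagining, assuming the Scarlett is selected as Sonic Pi’s audio output device and `output:` counts its hardware channels from 1 (loop names and channel numbers are placeholders):

```ruby
# route different loops to different hardware outputs on the interface
live_loop :drums do
  with_fx :sound_out_stereo, output: 3 do  # drums on outputs 3/4
    sample :loop_amen, beat_stretch: 2
  end
  sleep 2
end

live_loop :bass do
  with_fx :sound_out, output: 5 do         # mono bass on output 5
    synth :fm, note: :e1, release: 0.8
  end
  sleep 1
end
```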
- I also use Ableton and have sent MIDI messages out to Ableton synths using the IAC driver - is there a way to use sound_out from SPi to an Ableton audio channel perhaps? Then I feel it could be really easy to map my MIDI controller to parameters and mix quickly/easily as I code and perform… (rough idea sketched below)
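From my searching so far, it sounds like this needs a virtual audio device on macOS (e.g. BlackHole or Rogue Amoeba’s Loopback): set it as Sonic Pi’s output device, then have Ableton tracks listen to its channels as inputs. If that’s right, I’m guessing the Sonic Pi side is the same sound_out trick as above - please correct me if not:

```ruby
# same :sound_out_stereo idea, but with a 16ch BlackHole device
# selected as Sonic Pi's output - an Ableton track set to inputs 3/4
# of BlackHole would then receive just this loop
live_loop :pads do
  with_fx :sound_out_stereo, output: 3 do
    synth :hollow, note: :e3, release: 4
  end
  sleep 4
end
```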
bonus question/enquiry: I would really love to hear how people set up their live performances to make them as easeful/seamless as possible, plus any other tips on mixing or the logistics of performing a live coding set!