I’m a Rubyist, so when I realized Sonic Pi was built on Ruby, I knew I’d found something I could spend a lot of time in.
For years now, I’ve been interested in the “bedroom DJ” sort of setup - low-cost controllerism, effectively. Partly because this is a hobby I may fall in and out of, and I don’t want to spend hundreds of dollars or buy crappy equipment. And partly because trying to convince my wife to let me pour hundreds of dollars into my hobby, when I can’t even string a few beats together yet, is a non-starter.
Recently, I discovered Unfa on YouTube (link). By day I’m the CTO of a municipal government, but my true love is all things Linux/open-source.
So, in short: I’m a career programmer, a newb to music production, and I’m cheap. Unfa’s guides to setting up Reaper, JACK, etc. are a perfect excuse to use ALL THE SKILLS, and I’ve been having a blast learning everything I can. I also discovered Output.com’s Arcade VST and managed to get it working in Reaper via LinVST.
All that said, at this point I’m finding it hard to take the samples I’m creating in Arcade and work them into some sort of live performance setup (my true goal - I just want to be able to scratch an itch and jam out some night, with a possible recording if I ever feel confident enough). I’ve looked at several free native samplers with MIDI control, but each one is yet another layer to learn.
I know what I want in my head. And after working through the Sonic Pi tutorials and watching @samaaron here (I keep this on loop lately), I can also generally see how I’d code it.
But I can also see a few areas where I’d want to tie in an OSC interface. For instance, I see Sam adjusting amp: params frequently (adjust, run, adjust, run, adjust, run, back to back). I’d personally like to create simple helper methods/singletons that let me set those sorts of parameters with dials - a hybrid live-coding/device performance, if you will.
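To make the idea concrete, here’s a rough sketch of what I mean - totally untested, and the cue path (`/fader1`) and sample are just placeholders. My understanding from the tutorial is that incoming OSC messages arrive as cues, and that the `/osc*` wildcard prefix is needed on newer Sonic Pi versions (older ones use a plain `/osc/...` path):

```ruby
# Untested sketch: map an Open Stage Control fader onto a shared :amp value.
# O-S-C would need to send to Sonic Pi's OSC port (default 4560).
live_loop :amp_dial do
  use_real_time
  val = sync "/osc*/fader1"    # "/fader1" is a made-up path from my O-S-C layout
  set :melody_amp, val[0]      # publish the dial value to shared time-state
end

live_loop :melody do
  sample :loop_amen, amp: (get[:melody_amp] || 1)  # read the dial, default to 1
  sleep sample_duration :loop_amen
end
```

The appeal for me is that the loop keeps playing while the dial nudges `amp:` in real time - no adjust/run cycle needed.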
For instance, I could give each buffer a slider in Open Stage Control (O-S-C) that is synced to shared state across buffers. I could wire virtual X/Y pads up to FX params and, while live coding, swap FX settings into other parts. I could build sample-management interfaces so I can use a general keyword like “@melody” instead of a specific sample name, while using an interface to swap out which melody that keyword represents. These may all be awful ideas, but I know that until I try them I won’t get rid of my “itch”.
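The “@melody” idea might look something like this in Sonic Pi’s time-state - again, a hypothetical sketch, with made-up OSC paths and sample file names:

```ruby
# Hypothetical sketch: an OSC button grid repoints what :melody means,
# while the player loop just plays whatever :melody currently is.
set :melody, "~/samples/arcade_pluck.wav"   # initial mapping (placeholder path)

live_loop :melody_swapper do
  use_real_time
  choice = sync "/osc*/melody_select"       # e.g. a button grid in O-S-C
  set :melody, ["~/samples/arcade_pluck.wav",
                "~/samples/arcade_lead.wav"][choice[0]]
end

live_loop :player do
  sample get[:melody]                       # indirection: keyword, not filename
  sleep 4
end
```

That way the live-coded part of the screen never mentions a filename, and the “which sample is the melody right now” decision moves onto a controller surface.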
My head is swimming in very interesting (to me) ideas like this.
I haven’t been able to devote the time lately to dive deeper into Sonic Pi’s Ruby capabilities, but it all makes me wonder just how much Ruby I can use in a Sonic Pi run?
For instance, can I install gems, require them, and use them within Sonic Pi? Obviously that won’t be very portable/shareable - but my goal, for now, is not to worry about that so much as to make music I want to listen to.
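My (possibly wrong) guess at how this would work: Sonic Pi ships its own embedded Ruby, so system gems presumably aren’t visible by default, and something like appending a gem’s `lib` directory to `$LOAD_PATH` might be needed before a `require` can find it. The path and gem name below are placeholders, not real values:

```ruby
# Pure speculation - whether this works likely depends on the Sonic Pi
# build, its embedded Ruby version, and the gem having no native extensions.
$LOAD_PATH << "/home/me/.gem/ruby/2.7.0/gems/some_gem-1.0.0/lib"
require "some_gem"
```

I’d love to hear from anyone who has actually pulled a gem into a Sonic Pi session whether this is roughly the right shape.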
I don’t have a lot of questions here, but would love any thoughts, feedback, or insight on my setup/ideas above from those more experienced in Sonic Pi and music production in general!