I’m a new contributor to the forum, so thanks for having me. I’ve been using Sonic Pi on my Mac for quite a while, and most of the time I use it to control a load of external synths over MIDI, both directly and through a USB/MIDI/CV sequencer (the Korg SQ-1). It’s great fun, and on the rare occasion I ‘perform’ I can get pretty sweaty twisting knobs and pressing buttons!
I’ve recently got involved with other musicians, and my latest adventure has me up against a wall. My question is this: can I take a live_audio input from an external source and convert that sound into data that can then be used to generate or alter variables in the code? One example might be using singing to change aspects of a with_fx effect such as :wobble.
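To make the idea concrete, here’s a rough sketch of the kind of mapping I have in mind. This is plain Ruby rather than Sonic Pi code, the audio buffer is fake data standing in for a real live_audio input, and the parameter range is just illustrative — it’s essentially an “envelope follower”: measure the loudness (RMS) of an incoming buffer and scale it into a control range for an fx parameter.

```ruby
# Sketch only: plain Ruby, not Sonic Pi. The buffer below is fake data
# standing in for samples that would come from a real live_audio input.

# Root-mean-square level of a buffer of samples in the range -1.0..1.0
def rms(samples)
  Math.sqrt(samples.sum { |s| s * s } / samples.length.to_f)
end

# Map a level in 0.0..1.0 linearly into a parameter range, clamping
# out-of-range values so the fx parameter stays legal
def level_to_param(level, min, max)
  min + (max - min) * [[level, 0.0].max, 1.0].min
end

# Fake buffers: a quiet signal and a loud one
quiet = Array.new(512) { 0.01 * Math.sin(rand * 2 * Math::PI) }
loud  = Array.new(512) { 0.9  * Math.sin(rand * 2 * Math::PI) }

# Hypothetical use: louder singing -> faster wobble (range made up here)
puts level_to_param(rms(quiet), 0.25, 4.0)
puts level_to_param(rms(loud),  0.25, 4.0)
```

In other words, if I could get some number like that RMS level out of the live audio in real time, I could feed it into opts the same way I’d feed in a knob value.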
Any ideas? Thanks.