Visual/camera input to generate code/music?

Hi! I just stumbled onto this and it looks close to what I’ve been searching for: some way to generate music (or at least record data that can easily be converted to music) from camera input. My actual idea is to have nature settings generate their own music, so to speak, just from camera input and code. Though I’ve messed around with Raspberry Pi and Arduino doing various projects/tutorials, I’m really more of a copy/paster and not very technically knowledgeable code-wise at all! So I’m wondering: has anyone approached this concept with Sonic Pi?

I was otherwise looking at SenseCAP’s AI gadgets, which can use cameras to do some quite impressive things but aren’t so much geared towards art/creative work (or maybe people just haven’t used them much for that yet). Any thoughts/ideas/links would be hugely appreciated! Thanks. :smile:

If you know how to get what you want from the camera you can send commands and data to Sonic Pi with OSC. That’s easy to implement with a few lines of Python.
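To illustrate, here is a minimal sketch of sending an OSC message to Sonic Pi from Python using only the standard library (no OSC package needed). The port 4560 is Sonic Pi’s default OSC listening port; the address `/trigger/note` is just an example name you’d match on the Sonic Pi side with something like `sync "/osc*/trigger/note"` inside a `live_loop`:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-pad bytes to a multiple of 4, as the OSC spec requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *ints: int) -> bytes:
    """Build a minimal OSC packet carrying int32 arguments."""
    packet = osc_pad(address.encode())               # address pattern
    packet += osc_pad(("," + "i" * len(ints)).encode())  # type tag string
    for v in ints:
        packet += struct.pack(">i", v)               # big-endian int32
    return packet

# Send MIDI note 60 to Sonic Pi, which listens on UDP port 4560 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/trigger/note", 60), ("127.0.0.1", 4560))
```

In practice you would replace the hard-coded `60` with whatever value your camera-analysis code produces (brightness, motion amount, etc.) and call `sendto` in a loop.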


Hi @R2L - Thanks so much! I was reading the documentation earlier and noticed the abbreviation; I’d never come across OSC before, but this sounds like it could be quite ideal. Now it’s just the technical coding side of things to try to learn/figure out. Thanks!