I’m wrestling with an existential question, and I need some input. My plan is to take Sonic Pi and use it as an instrument outside of an algorave context. I would be performing like any other musician, for an audience that has absolutely no knowledge of code (industrial music, for those interested). The thing is, someone standing at a laptop for an hour isn’t very engaging to look at. There needs to be some interaction between artist and audience if I’m ever going to make people dance. How the heck do I resolve this?
Visuals could be the key, but I have no idea how to approach them either. Does anyone have any experience in this area?
Just get in touch with someone who does visuals; there are lots of people out there. And, just speaking for myself, I rarely look at the DJ. It’s my feet that get itchy.
That is an interesting discussion. I also have a live gig coming up (actually a follow-up to an earlier one at the same location), but this one will be for a listening rather than a dancing audience (although I really do appreciate the idea of making dance music). Sam likes to stress that standing instead of sitting is paramount, and I think there is a lot of truth in that. The more engaged the musician(s) on stage, the greater the chance that the audience also engages with the performance.
For me the projection of the code will be important, because the gig will be announced as ‘live coding’. Nevertheless, I think the music of course has to be enjoyable regardless of how it is produced. Projected code might give some interesting insight into the process of creation, but in my opinion it does not compensate for a lack of musical quality (granted, what counts as ‘musical quality’ is highly arguable).
Last but not least, there is a similar discussion going on over at lines, so if you’re interested, have a look.
PS: I am also thinking about visuals, but currently I do not want to sacrifice CPU power that I need for the music. So in the medium term, something like the ETC, which runs on its own hardware, might be an option.
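On keeping visuals off the music machine: Sonic Pi (3.0 and later) can send OSC messages, so you can trigger visuals on a separate computer or device without eating into the CPU you need for audio. Here is a minimal sketch of that pattern; the address, port, and OSC path are hypothetical placeholders for whatever your visuals setup listens on (the ETC itself is driven by MIDI and audio input rather than OSC, so treat this as the general offloading idea, not an ETC-specific recipe):

```ruby
# Hypothetical address/port of a separate visuals machine; adjust to your setup.
use_osc "192.168.0.20", 4000

live_loop :kick do
  sample :bd_haus
  osc "/visuals/beat", 1  # send a cue to the visuals machine on every kick
  sleep 0.5
end
```

Since the receiver does all the rendering work, the Sonic Pi machine only pays the cost of sending a small network message per cue.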