Hey everyone, here’s a recent interview where I discuss the origins of Sonic Pi in addition to its philosophy and future with the wonderful James Lewis.
Being new to all this, that’s the first time I’ve seen you in action. A pleasure to ‘meet’ you. Great interview!
Finally watched. Couldn’t find it in the podcast feed.
Lots to chew on, as always. Some things you’ve discussed in the past mixed in with new insight on your approach.
As a teacher, I’m struck by the shift that happened when “learnable programming” became important to you. Wish there were a way to help other CompSci people get closer to that.
There’s a whole thread to pull on Music Education/Instruction. In fact, just yesterday, the New York Times published a piece by a music teacher in the US nominally about changing ways to teach music (archived link; gifted link so it doesn’t count against the newspaper’s paywall). There are several important ideas in this piece, including the importance of social context (I’m a social scientist). What I enjoy less about it is that it focuses on formal instruction with goals around continued instrumental practice. Miller bemoans the fact that learners stop playing their instruments and leave their (school) bands. While I get why it matters to someone involved in this mode of teaching in “strongly-typed” institutional programs, it’s probably obvious to all of us that it represents a narrow path to musicking… or to learning.
Maybe we can do more work with music teachers?
In a way, it sounds like it’s comparatively easy to get CS types to understand the value of something like Sonic Pi. Still quite difficult. Just a bit easier, maybe. Or, at least, not the same type of difficulty. A teacher in programming probably won’t feel threatened in her professional identity by the very existence of Sonic Pi. In fact, as you say, the original Raspberry Pi launch happened at a time when many teachers in the UK had to start teaching programming, so the Pi was a solution to an existing problem. To this day, however, it can be difficult to get decision-makers to understand the value of taking this specific approach to computing, which leverages a programming environment covering all bases from 10yo pupils to professionals and retirees. They’d more likely get kids to do Scratch or use a game-like interface than allow them to write a single line of code like `play 60`.
On the Music Ed side, the missed opportunity comes from a whole entanglement of things. Teaching diverse genres is part of that, as is gatekeeping on the industrial side of music consumption. There’s also a whole thing about music ed being centred on staff notation and on the affordances of the piano. While there have been many programs to teach music through means other than musical scores, there’s a whole big “but they’ll need to learn to read music eventually” vibe to the whole system.
Among people who describe themselves as non-musicians are quite a few who frame musicianship in terms of “reading music”. That includes people who create a lot of music. I remember this taxi ride with a DJ friend of mine. When the cab driver asked us if we were musicians, my friend hesitated and eventually said she wasn’t. Even actors/singers in a production of West Side Story, way back when, talked about musicians as those of us in the pit, as opposed to themselves (some even described the whole band as though it were a single individual). In many people’s heads, there’s a distinction between “those who can read sheet music and those who can’t”.
Wonder how that’ll eventually play out with coding environments like Sonic Pi. Many of us, here, can get a lot from reading an SPi script (especially if the code is well-documented). Yet that’s very different from executing the script, and there isn’t yet a shared notion that we can interpret the code on our own. Something very interesting is also happening with all sorts of electronic music, which is notoriously difficult to transcribe. Yet the DAW project or MIDI file is better documentation than just about any score created by a composer.
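To make the “reading vs. executing” point concrete, here’s a minimal sketch of the kind of well-documented snippet I have in mind, grown from the `play 60` one-liner. `play` and `sleep` are Sonic Pi built-ins; they’re stubbed here (my assumption, just for illustration) so the structure reads — and runs — as plain Ruby outside the Sonic Pi environment:

```ruby
# Sketch: how a Sonic Pi one-liner like `play 60` grows into a
# readable, commented script. Outside Sonic Pi, `play` and `sleep`
# don't exist, so we stub them to keep this runnable as plain Ruby.
$played = []  # record of triggered notes (stub bookkeeping only)

def play(note)
  $played << note  # in Sonic Pi: trigger `note` on the current synth
end

def sleep(beats)
  # in Sonic Pi: advance musical time by `beats`; a no-op in this stub
end

# A rising C major arpeggio, half a beat per note
# (MIDI note 60 is middle C).
[60, 64, 67, 72].each do |n|
  play n
  sleep 0.5
end
```

In Sonic Pi itself, the same four-line loop actually sounds the arpeggio; the comments are what let someone “read” the script as documentation without running it.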
So… Going back to this interview…
It’s obvious that our favourite live coding environment still has a bright future in learning contexts. Funding remains an issue, though the Patreon+gigs structure makes sense in your case. (Other models exist, including deeper collaboration.)
Since we know that a visual synth is part of the experiments towards v5, there might be an opportunity to bring in teachers in “multimedia” classes. As far as I can tell, those tend to be very different from music instruction, though several learners do end up creating music through them. In fact, I get the impression that multimedia teachers are more likely to be OK with experimentation across languages and tools than other teachers. Want to use p5js for this part of the project? Go ahead. You have access to TouchDesigner and also want to use Unity? We’ll figure it out. Sonic Pi would be another arrow in the quiver.
In other words, learning experiences in multimedia tend to be quite pragmatic instead of “pure”.
In my own (long-term, informal, self-funded, exploratory) participatory-action field research on inclusive learning through music tech, I’ve left that path mostly unexplored (apart from facilitating an SPi workshop in a multimedia class). It might be time for us to reach out to these people.
At least, that’s something which resonates with discussions I’ve had during Mutek.
Maybe some SPi coders have had similar experiences.