Introduction - Alex Enkerli

As usual, @samaaron, you demonstrate your understanding of human beings as well as computers. Such a nice combination! (And relatively rare.)

Probably said some of this in the Google Group, but it’s a nice occasion to introduce myself in a new way.

And this will obviously be a lot more than what you want. It’s also easy to skip.

Got into Sonic Pi because of Raspberry Pi. Got into RasPi in 2015 because of my then-colleague Christophe Reverd. We were both working as technopedagogical advisors for Québec’s Cégep system. (That job ended for me a year ago but Christophe is still rocking it.) Christophe’s electronics background meshed really well with all things Pi. He went on to build Club framboise, which already has extensions in Cameroon and elsewhere.

In my own case, getting into Raspberry Pi had less to do with electronics and more to do with Free Software, Open Educational Resources, and music. Which is where my musical background might be relevant.

Started playing saxophone in school at age 11 (in 1983). Had had limited musicking opportunities before that but did get to use an electric piano and take a few piano lessons from my aunt Rina. This part never had much of an impact on me but listening more carefully to some EP sounds does bring back some memories.

My family’s musical tastes have always been quite diverse, especially by today’s standards. They’ve included traditional Québécois music, Prog Rock, Brazilian Popular Music, Jazz, French chanson, Baroque, Barbados steel drum music, and some more mainstream stuff like Dire Straits or, in my brother’s case, a short but intense Elvis Presley phase.

As for technology, my first experiences were with videogames, particularly a neighbour’s ColecoVision and my cousins’ Intellivision. Never got that deep into games but playing those did have an effect. Got a VIC-20 in 1984 or so. Dabbled in BASIC and got to play some simple but fun games at the time. In 1987, worked on some lab reports with a friend on his family’s Mac SE. My father then got a used Mac Plus which got more of my attention than his. Used to spend quite a bit of time in DMCS, the Deluxe Music Construction Set. Also played a few games, dabbled in HyperCard, got used to the basic affordances of computing at the time.

In 1989, went to Cégep in music (on my way to studying anthropology at university). Classical sax was my official focus, in both private practice and quartet, but spent a whole lot of time in Big Band and some time in the MIDI studio.

That studio was really formative in several key ways. Realized some of that at the time but it’s even more obvious in retrospect. One is that it combined my music and technology interests in an obvious but profound way. This was the time at which MIDI was really becoming a thing. The station in that studio, for first-year students, had a DX-7 plugged into a Mac Plus running MOTU’s Performer along with some other things (possibly Laurie Spiegel’s Music Mouse and Zicarelli’s M, though those might have been present at later times). We probably had some kind of patch editor for the DX-7. Much of our time was devoted to editing patches. Don’t remember that much about FM synthesis, but it sure was an ear opener.

Another thing the MIDI studio had going for it, and something which would have a big impact on my life, was a distinct approach to pedagogy based on a vague analogy to the Samba school. The basic idea was that we would all work on our own projects but we’d get opportunities to collaborate with others, across spheres of expertise. Some of the people working in the MIDI studio were really dedicated to it and the peer-teaching opportunities were tremendous. At least two of my friends from that studio went on to become electroacoustic composers and have had illustrious careers in art and technology.

After Cégep, kept doing computer-based stuff and remained interested in computer music (thanks in part to the aforementioned friends studying electroacoustic composition). But my musicking was mostly limited to playing sax in Big Band or classical quartet. Took a number of courses in the Faculty of Music but focused on ethnomusicology, not music creation or performance. Was very interested in computer-based analysis, including pitch-tracking. Did take two courses in acoustics (taught by an electroacoustic composer and TAed by one of my friends from Cégep). Those did turn out to be quite relevant for my future.

During this time, networking became more of a thing. Got a modem in mid-1993 and started accessing my local BBS where we would discuss all sorts of things including music and music software. Within a couple of months, got an account from the university allowing me to access the Internet, from computer labs or by modem. Was hooked instantly.

That same semester (Fall 1993), near the end of my B.Sc. in anthropology, attended some lectures by ethnomusicologist Marina Roseman, who showed some results from the sound-analysis program Signalyze. After getting the reference from her (and discussing the possibility of going to graduate school in her department), went to Gopher to check out Signalyze. Turns out that it was developed in Lausanne, Switzerland, where my father was born. The lab was doing speech analysis and my academic focus was in linguistic anthropology. Had talked with a cousin in another part of Switzerland about spending the following Summer there, right after graduation. On a whim, told Signalyze developer Eric Keller about my intention to spend time in CH and mentioned that it might interest me to collaborate with them if they needed someone with an ethnomusicology background and a bit of expertise in acoustics. Was hired right away at a higher salary than my university professors were making. That Summer job extended into a total of fifteen months. Spent most of that time (75 hours a week in the beginning, 42 hours a week later on) doing segmentation of speech signals.

Upon coming back to Canada, did my M.Sc. on Malian hunters’ music while extending my musicking and computing activities. Then went to Indiana University for a PhD in ethnomusicology.

Kept meaning to do more with music on computers but was never finding the right “formula”. Did pick up a few devices, like a Yamaha WX-11 wind controller, a Korg Poly-800, and a Yamaha TX-81Z. But none of it worked the right way for me. Nothing really gelled.

For some reason, my playing the WX-11 generated a whole lot of flubs, which was always frustrating enough to make me give up. With strong envelopes attacking every note, it’s particularly depressing. But even in legato mode, it was annoying enough that it didn’t feel worth the effort. Got the TX-81Z because some people had been using it with a wind controller. It was one of the rare synths which did something with CC#2 (breath control). But, for this setup to work, significant tweaking had to be done. When you just want to pick up an instrument and play, it’s no fun.

Part of my problem with the Poly-800 was the lack of velocity sensitivity. Not that my rudimentary piano skills would have made for great performance on a velocity-sensitive keyboard. But it wasn’t that satisfying. Did play around a bit making patches without knowing much about subtractive synthesis. But, at the time, it didn’t sound very good to me.

So these gadgets collected a lot of dust while even my alto sax got a decreasing amount of playtime.

On the software side, things weren’t much brighter. Tried various things over the years. For instance, experimented with all the software available on university computers, including early (MIDI-only) versions of Max and some notation programs like Finale. Still have fond memories of Music Mouse, Turbosynth, Passport Alchemy, Band-in-a-Box… But none of it became part of my own work on my own computers. Several times over, tried to get into Csound. A friend eventually developed a GUI frontend which did make things easier, but the whole thing never worked the way my brain works. Eventually checked out things like Overtone, Tidal, SuperCollider, ChucK, and Pure Data, but none of these got me over the hump. The Pd part may sound surprising, as it’s deemed intuitive. But it relies on a workflow which differs from the way my ears and brain work.
Also tried “trackers” but they never made sense to me. Even tried doing things on PalmOS, with Bhajis Loops and such. Still wasn’t finding what would work for me.

There clearly was a lot of potential in music technology and it kept my interest as a listener, going to performances and occasionally listening to recorded music made with computers. But it still wasn’t grabbing me as a performer.

Then came iOS. ThumbJam was among the first pieces of music software which “did it”, for me. Spent quite a bit of time noodling away with it. Sure, this was very self-indulgent and none of it could make its way into serious performance. But, as my ethnomusicology studies had taught me, there’s a whole lot to be said for casual musicking. Playing with music is an underrated activity.

Also had quite a bit of fun with other iOS apps, over the years. Loopy, Gyrosynth, Mugician, SoundPrism, RjDj… “Now we’re talking!” Wasn’t necessarily spending a whole lot of time with those, but it did feel like “going somewhere”. Those were less keyboard-centric than other approaches to music on a computer. They were also less tied to a programming structure. For some reason, it started to dawn on me that my musical sensitivities could merge with some of these tools. (Also had a bit of fun with pseudo-DAWs like Nanostudio and GarageBand. But it wasn’t the kind of visceral pleasure ThumbJam might give me.)

At some point, was able to integrate my WX-11 wind controller into that setup, especially with ThumbJam. The first key to this was scale-fitting. In some ways, it partly solved the problems with flubs. But it also satisfied some of my sensitivities as an ethnomusicologist. The melodic possibilities of many scales outside of the major/minor system were quite compelling to me. In fact, much of my private improvisation on the saxophone has been about playing with my own quirky scale structures. The approach could have come from Jazz but my Jazz improvisation skills also never really materialized in a way that satisfied me. Long understood many of the concepts behind chord progressions and scales. But it’s always been hard for me to follow a quick succession of chords implying different scales. Even playing diatonically hasn’t really been my forte, maybe because my tendency isn’t to resolve to chord tones. In other words, mainstream ways of improvising with scales would have required more practice on my part. So ThumbJam’s scale-fitting worked for me in part because it was easy to play interesting melodies which didn’t depend on another structure.
What also made ThumbJam work for me with the WX-11 is the fact that it can use breath control for volume and/or filter. This makes a huge difference in terms of expression. And my playing technique eventually improved enough that flubs were becoming rare.

By 2012, my iOS-based musicking was becoming quite satisfying. At some point, participated in a workshop led by a bansuri player. Learning Indian classical music on a saxophone wasn’t that easy, but it did open something up, for me. And, almost by coincidence, it introduced me to the Samvada app. Not only has it been fun to play with this app, but it also gave me a significant boost in my ability to improvise.
During that same period, my Jazz improvisation skills also improved a bit, thanks to apps like iReal Pro and the iImprov series. Still not up to my standards, but it broke the spell, for me.

So, it all brings us back to 2015 and the Raspberry Pi. Was still unclear about the difference between microcontrollers (Arduino and such) and single-board computers like RasPi. Had heard of the Kano and thought that a Pi required plugging into all sorts of other things: screens, keyboards, mice. Was a bit unclear as to why an inexpensive computer would work for me. But my colleague’s enthusiasm was contagious and something made me decide to try it. Pretty sure the musicking affordances were part of it, including my desire to eventually build my own type of wind controller or, at least, find some way to make it work in a more satisfying way.

Can’t remember very precisely how my initial experience with Sonic Pi went, but getting through the embedded tutorial was a deep experience. Have often been on record about this tutorial’s pedagogical value. Instead of merely telling you what steps to take to achieve a given result, Sam asks you questions, gets you to investigate why a certain result differs from what you expected, makes you listen to more or less subtle differences in sound and, more importantly, encourages you to freely experiment with the code without fear of making mistakes.
Wow.

Suddenly, all sorts of things started making sense. Despite courses in acoustics and some limited experiments with the Poly-800, subtractive synthesis had never really appealed to me and my brain had never quite grasped what was going on. Playing with the cutoff: opt made it real. Interestingly enough, it turns out that this is also the key to playing with a wind controller. Assigning breath control to the cutoff frequency of a low-pass filter means that no sound is heard until you blow and the sound becomes increasingly bright or harsh as you blow harder. As simple (and as commonly understood by sound designers) as this basic principle is, it’s just… deep.
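
To make that concrete, here’s a toy Sonic Pi sketch (made up for this post, not a wind-controller rig): slide a low-pass filter’s cutoff up over a sustained note, roughly what mapping breath to cutoff does.

```
# A sustained saw wave whose low-pass cutoff slides from closed to open,
# much like blowing gradually harder into a breath controller.
s = synth :saw, note: :e2, sustain: 8, release: 1, cutoff: 30, cutoff_slide: 8
control s, cutoff: 120   # "blow harder": open the filter over 8 beats
```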

In parallel with the tutorial, started experimenting with some of the embedded examples. Eventually started writing my own little scripts. And extending them. Combining them with other things. Those were all about generating music, often with some level of “restrained randomness”. One of Sonic Pi’s great strengths, among many, is the ability to do meaningful stuff with random phenomena. The way rings work is simply brilliant, and they make for really cool effects when used with .choose or .shuffle. (Though these things probably emerged after my first Sonic Pi experiments; can’t remember the precise timeline.)
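
Something like this little sketch (made up here, not one of my actual scripts) gives the flavour of that “restrained randomness”: seeded random choices over a ring, with the rhythm kept structured.

```
# Seeded randomness: the same "random" melody comes back on every run.
use_random_seed 4

notes  = (scale :e3, :minor_pentatonic).shuffle  # shuffled once, deterministically
rhythm = (ring 0.25, 0.25, 0.5)

live_loop :restrained do
  play notes.choose, release: 0.2   # .choose picks freely from the ring
  sleep rhythm.tick                 # .tick walks the ring in order, wrapping around
end
```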

Thanks to my renewed confidence and my concrete understanding of the effects of some basic procedures, it was then relatively easy to get into some of the software which had been so foreign to me for so many years. Can’t remember the precise order but got through tutorials and other learning experiences with ChucK, SuperCollider, Processing, and Pure Data. Also did a couple of music-related things in Python, which were easier to do thanks to Sonic Pi. Even getting into Ableton Live (a Lite licence comes bundled with the iOS app Korg Gadget) made more sense thanks to Sonic Pi. So, among other things, Sonic Pi opened me to several other approaches to Digital Musicking.

Wrote an academic article (my first in… forever) about the connection between digital making and musicking. Sonic Pi was a big part of the background.

Organized some workshops and presentations around Sonic Pi. With teachers, with Raspberry Pi enthusiasts, with Cégep students, with kids… In my experience, the people who are most reluctant to get into Sonic Pi are those who “know too much” about ways to do music with computers. This includes people with deep and advanced expertise in computer music. But it’s also about teachers who know that you can play with sounds in Scratch and don’t perceive the value of learning a Ruby dialect with their students.

Never did any real livecoding with Sonic Pi. To be honest, it never really attracted me. Typing commands and numbers in realtime doesn’t sound like much fun, to me. Especially not in public. My typing skills aren’t that bad but, for some reason, the rate of my typos dramatically increases while typing in front of my students.

What did attract my attention, however, was other forms of realtime control. As soon as there were rumours of ways to control Sonic Pi through Python, OSC, audio, or MIDI, my ears perked up. Started imagining an instrument around Sonic Pi. Thought of ways to use TouchOSC to play with SPi synths. Dreamt of the day my WX-11 could drive complex harmonies in Sonic Pi. Became kind of impatient about it; it was really gnawing at me. And it made me spend quite a bit of time trying to figure out the best path forward, both in terms of the code required and the hardware setup.

Have yet to build my own instrument but did succeed with OSC and MIDI. With beta versions of what became Sonic Pi 3, was able to achieve the “Michael Brecker Effect”, playing rotating chords with monophonic input. Had tried patching this in Pure Data but it never worked. Did it quickly in Sonic Pi and the effect was really satisfying. Also wrote SPi scripts which generated countermelodies from monophonic input, with scale-fitting and multiple audio effects. Spent a number of hours playing with these scripts, feeling this form of visceral pleasure which only comes from the most appropriate musicking sessions. Even more self-indulgent than any other type of noodling or shredding. But… ahhhhh!
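
The core of that monophonic-input-to-rotating-chords idea can be sketched in a few lines. This is a simplified, hypothetical version rather than my actual script; the MIDI cue path is a placeholder to swap for whatever shows up in Sonic Pi’s cue log for your controller.

```
# Hypothetical sketch: each incoming monophonic note triggers a chord,
# and the chord quality rotates with every new note.
use_real_time
use_synth :tri

qualities = (ring :minor7, :major7, :sus4)   # arbitrary qualities to rotate through

live_loop :rotating_chords do
  note, velocity = sync "/midi*/note_on"     # placeholder path; check the cue log
  if velocity > 0                            # some devices send note_off as velocity 0
    play chord(note, qualities.tick), amp: velocity / 127.0, release: 0.4
  end
end
```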

My plans to build my ideal instrument have been on hold for a specific reason: bought the next best thing.

The key to this is MPE: Multidimensional Polyphonic Expression. It’s still emerging as a standard, but it’s exactly the thing for me. Simply put, it allows each note to have its own control change, pressure, and pitchbend. It’s what helps people like Geert Bevin, Roger Linn, Jordan Rudess, and Lippold Haken do some really fun stuff.

When ROLI announced the Seaboard Block, at the end of last Spring, it really got me thinking. With breath control, it would instantly become my ideal instrument. My specs are:

  • Breath controlled
  • Not a keyboard
  • Portable
  • Polyphonic
  • Immensely expressive (i.e., full MPE support)
  • Easy to play
  • Reasonably priced
  • Easy to pair with iOS

The Seaboard Block doesn’t meet the first two specs, but it has everything else figured out. Bought a Lightpad Block to try one of ROLI’s controllers and could really appreciate the iOS integration. The main problem with that device, though, is that it’s fairly hard to play, including in terms of the physical pressure you have to exert on the surface; it literally causes strain in my hand. But it did give me an idea of what a ROLI device could do. Including with MPE-enabled synths like Equator. (ROLI just came out with an updated version of the Lightpad hardware and an update to the related iOS app. Somewhat surprisingly, they also gave owners of the original Lightpad a license to Tracktion’s Waveform DAW. Had tried that one before but bounced off it. Got more into it yesterday and it’s finally starting to make sense. This is one of the rare pieces of commercial software to also run on Raspberry Pi. Have yet to try it there but it’s becoming more promising, especially given its support for MPE.)

Before the Seaboard Block was released, ended up buying an instrument which does fill the first spec above (breath control) but not the last one (easy iOS pairing). Nor, arguably, does it meet the “reasonably priced” requirement. But it still ends up being the next best thing to my dream instrument: the Eigenharp Pico.

This device’s level of expressiveness is simply a game changer for me. The clarinet model which comes with it is remarkably satisfying. The scale control gives me the same type of joy as with ThumbJam. And, as expected, breath control really is a requirement for me.

The main problem with Eigenharps is that they require a computer to send MIDI (or OSC) messages. Unlike my WX-11, for instance, one cannot simply plug an Eigenharp into an iOS device or a hardware synthesizer. What’s more, its primary support is for macOS. Used it with a Lenovo laptop running Windows 10 and, though it worked, it wasn’t an optimal experience (in part because of the difficulty with inter-app MIDI communication on Windows).

Thankfully, though, there are ways to use a Raspberry Pi or a Bela to do the conversion into MIDI. Have yet to make that work, but it does keep my dream alive. With Sonic Pi as the sound generator on a Raspberry Pi (with the Blokas pisound HAT?), it’d be possible for me to have a very compact setup to bring my Eigenharp Pico to jam sessions and even to outdoor venues. It might require one Raspberry Pi for the conversion and another one for Sonic Pi, but that’s still more compact than a laptop.

Recently got an old MacBook Pro, for use in my musicking projects. Decreases the importance of having a low-latency Raspberry Pi (or Bela) setup. It also means that my focus has been a bit more on macOS software. Sonic Pi has an important place, there, but there are several other things which take up my time on the Mac. Including the aforementioned Waveform DAW.

So… Not saying that my Sonic Pi plans are on hold, at this point. But, at least for a little while, my priorities have shifted a bit. Will surely come back to SPi very soon, especially after finding a way to get my harmony-focused scripts to support the Eigenharp Pico. But it requires a special kind of concentration which is hard for me to get, these days, with a day job in cybersecurity and teaching applied anthropology.

What has excited me with Sonic Pi during the past two years comes down to discovery, with the deep feelings associated with that. It’s like finding a secret passage to a new land.

Thank you so much, @samaaron!

(Apologies for length.)

Thanks for sharing this. I found it a fascinating read. Might spur me on to write my story too!!

I found this super interesting even though it’s an old thread. It highlights the obstacles facing real-time musicians who want to do interesting things with their controller skills and sensors, beyond keyboarding. I got a Bela and have some questions about Sonic Pi integration; will post a new thread.

You’ve been on a fascinating journey. I, too, am smitten with MPE. I have a Linnstrument, an Erae Touch, a Striso, and 3 Roli Seaboard blocks (6 octaves, and a much cheaper option than the equivalent Seaboard Rise). Oh, and a Chorda, but I don’t play it much. I think I’m going to give the Chorda to my nephew’s kids, as it’s an ideal instrument for naive players.
I’d love to get an Eigenharp, but it’s way way WAY out of my budget. Ditto the Haken Continuum.
Did you know you can get outboard breath controllers for a couple hundred bucks? That and a Roli Seaboard Block will give you a poor man’s Eigenharp (more like an MPE melodica, but still.)

Major coveting here @HarryLeBlanc :smiley: I have monophonic main instruments (recorder/low whistle) and love jazz, mediaeval, world folk harmonies. The two interests aren’t really compatible. I wonder if @enkerli has continued on this journey?

Have you tried the outboard breath controller route? Just putting these links here in case anyone’s interested in DIY wind syntherie:
I found this guy making very low-cost breath sensors: https://www.youtube.com/watch?v=qmlhkcaMYX0
A breath controller built with an Arduino: https://www.youtube.com/watch?v=G8MM0KMpOZI

I’ve toyed with the idea of breath controllers, but with my current setup, I can control up to 6 dimensions of sound (including pitch bend), which seems like plenty. Besides, I’m not a wind player, so I think of music in terms of hands, not breath.

That sounds like plenty of dimensions :slight_smile: I’m a bit limited. Hence trying loopers etc.

[Obviously, a lot has happened since I first introduced myself to this Discourse… Might end up writing more about those changes, here or elsewhere.
Thing is, I’ve recently shifted my coding efforts to Plugdata (and Lua), as it works on the devices I use. I might come back to SPi when Tau5 comes out, depending on my needs for livecoding, at that point.]

At this point, the controllers I primarily use in performance are Intuitive Instruments’ Exquis, Aodyo’s Sylphyo, and moForte’s GeoShred. I still use my Yamaha WX11 at home (and even got it modded to send breath and bite through CV). I also use a Novation Launchpad X in several situations. And I occasionally try things with a Yamaha BC-1 breath controller, paired with other devices.

With both the BC1 and the CV-enabled WX11, I’ve become even more convinced of the value of adding breath control to MPE. Not that I would use breath all that extensively. Just that it would be part of my live setup. At some point, I’ll likely stop using a wind controller altogether, especially if I become proficient enough with grid controllers. (The learning curve on both hex and square grids is way faster for me than on piano-style keyboards… and even than with sax-style fingerings, despite playing sax since the mid-1980s.)

Apart from the BC1 and CV from WX11, I haven’t tried other breath controllers on the market. A large part of the reason is that they don’t correspond to what I need, at this point: direct CV and MIDI outs, perhaps with Bluetooth (no, latency isn’t an issue, in my experience).
What could be even better is if the device actually worked as a breath-enabled analog filter. Sure, I could rig something together. Might be fun. But I’m not in a DIY phase. And though I have a couple of Eurorack modules, I’d rather have a small, battery-powered integrated device which does the “breath filter” thing.

I’ve mentioned Aodyo’s Sylphyo. Though I’m not as skilled with it as on the WX11 or the sax, it’s the one device I use the most during jams and live sets. And I’ve backed Aodyo’s crowdfunding campaign for the Loom controller. What’s really disappointing is that, just this week, Aodyo has announced that it’s likely to go bankrupt unless they find additional funding. Even if they pull things off, it sounds very unlikely that there’s a bright future for their lineup.

I’ve also mentioned GeoShred. Yes, it’s an app. Originally iPad-only (well, OK, there’s also an iPhone version). Now works on macOS. There’s a free version for iPad which works as a controller. And while people often say they don’t like to play on a glass surface, I’m fine with it.

What does any of this have to do with Sonic Pi? Well… Not that much, actually. Sure, I can use the coding environment to create things that I’ll use in live performance with instruments and/or instrument-like controllers. However, the livecoding approach is wholly distinct from the live playing approach that I personally use with those devices. Not incompatible, mind you. Just “different parts of my brain” or different brains altogether.
At the time, I was dreaming of ways to use SPi to create all sorts of things. Those dreams are becoming reality with the cross-platform plugin version of Pure Data. And that helps me get further and deeper into a long-term project that I call #MTILT (“Music Technology: Inclusive Learning & Teaching”).