Exploring “Modes” of Pitch-Class Sets Using `chord_invert`

Really good point. Was thinking about something similar though not really thinking about a breakpoint. Almost like mapping different sounds to different MIDI notes… :thinking:

I’m sure Sam has some testimonials. What might be missing, among those who never tried Sonic Pi, is the experience of exploring from that point on.
In fact, I’ve spoken with some people who’ve gone through the tutorial on their own and were explaining that it didn’t stick with them. Based on those conversations, I have a hunch as to what happens with SPi, sometimes. They get to it with a different mindset from the one which works so well with that musicking environment. Having observed it with teachers during workshops, it’s this kind of “I need a better rationale for learning this… instead of something else”.
So, testimonials might not be enough.

As for Galper, I do get what’s interesting, here. It does remind me of a lot of what I notice on YouTube. And it does fit with the “theory” part of “music theory”. My instinct as a researcher is to test the hypotheses contained… and possibly check for biases. Thankfully, some of those YouTubers have been addressing some of these things.

Last night, I spoke with the same music student (and barista) who cued me into Set Theory. He gave me several suggestions, including about Henry Threadgill, Steve Coleman, and other Pi Recordings artists. It does sound like those might come closer to the exploratory approach I might want to take.

Good to know! Thanks for the heads-up. (It rings a bell. Have yet to get into it. Might also relate to the thread about Ziffers.)

Fair. Part of the reason I still want to try using MIDI as a basis is that it’s in practical use by large numbers of people (even when they sadly call themselves “MIDIots”, they typically use MIDI without noticing).
Another part relates to what you’re saying: I find it easier to make the point that they’re relative values by bringing them back to MIDI. Part of what I have in mind is quite similar to the transposition functions in Sonic Pi or even variations we can apply to pitch (for instance using different tunings). We always use a reference and it’s useful to realize that this reference shifts in diverse ways. The advantage of having a shared reference (such as MIDI’s 0–127 assignments for notes) is that we can make those shifts very explicit.
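To make that idea concrete, here is a minimal sketch of what "transposition as an explicit shift over a shared MIDI reference" could look like. It assumes the common convention where middle C (C4) is MIDI note 60 (some systems call MIDI 60 "C3", which is part of the ambiguity discussed elsewhere in this thread); the function name is mine, not from any particular library.

```python
# Minimal sketch: MIDI note numbers (0-127) as a shared reference,
# so that transpositions become explicit arithmetic shifts.

def transpose(notes, semitones):
    """Shift every MIDI note number by some semitones, staying in 0-127."""
    shifted = [n + semitones for n in notes]
    assert all(0 <= n <= 127 for n in shifted), "shift left the MIDI range"
    return shifted

c_major = [60, 64, 67]        # C4-E4-G4, assuming the C4 = 60 convention
print(transpose(c_major, 2))  # up a whole tone: [62, 66, 69]
```

The point isn't the arithmetic, which is trivial; it's that with a shared 0–127 reference, every shift (transposition, retuning offset, octave renumbering) can be shown as an explicit operation on numbers rather than an implicit change of convention.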

Besides, I plan to use a number of “piano roll” representations which are equivalent to “folding view”. So, you can easily get an idea of internal movement in a cell without associating individual notes to their pitches. That’s actually the part which is closer to Rousseau’s points (in 1748) about numbered notation.

Exactly. The fact that C-1 can mean either B1 or C-1 (MIDI notes 23 and 0) is really strange. Haven’t yet noticed how they notate the minor triad, but C-7 would be quite legitimate as a chord notation yet clashes with the note B7.
I did eventually find out that you can display ‘♭’ and ‘♯’ by using .unicodeName (or .unicodeNameWithOctave). Haven’t found a way to have those same characters as input.

Yeah, I think we’re on to something. And, yes, IRCAM folks have often wondered about similar things.
Which gives me flashbacks from my college days in music school as well as my university years hanging out in the Faculty of Music. Especially conversations with a friend of mine from those days. (The music student/barista is his son.)
As Francophones, my friend and I had a lot to say about IRCAM. Not all positive, especially at the time. Still, we got echoes of research going on there and would encounter people who’d work there for some time. Eventually, when Philip Tagg came to Montreal, it was after he had done some work with IRCAM, if I remember correctly.

True. Which is something I do plan to do. At the same time, I get the impression that some of these functions might be easily accessible in Python as well. (And while I realize that music21 is really about the analysis side of things, it does sound like we can leverage it to create patterns as well.)

Interesting. I don’t recall encountering that.

As a bilingual ethnomusicologist, let me just say that this phrase captures something which has been on my mind for decades. Including before I heard about Kingsbury.
http://tupress.temple.edu/book/3357

Hm… :thinking:
My approach is more about the co-design process. I’m not making decisions. I’m exploring possibilities. I’ll eventually learn enough that I’ll be able to ideate, prototype, and test with others (particularly with learners whose understanding of music varies a lot). Especially since so much of the supposed advantages of existing approaches go unchallenged, at least in practice.

And I don’t perceive that many advantages in keeping enharmonic note naming. I sure understand why it exists. In a specific tradition. (Including the fact that the two notes need not sound the same.) I find that the distinction has outlived its purpose and confuses people more than anything.

Going back to musical applications of Set Theory. Though pitch classes explicitly remove the distinction between enharmonics, there’s no reason not to add them back if we do get different notes. Then, though, they cease to be “the same note on the piano” and there are other ways to name or represent them. There are also other ways to treat them, which need not refer to a preset functional system. A lot of people use Live without thinking through notes. After all, it’s a big looping sampler. So, a sample of a whole melodic pattern that you put on the “A♭” pad on your controller isn’t playing that note. You’re playing with the sound itself.
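As a tiny illustration of how pitch classes collapse enharmonics (the function name and the C4 = 60 numbering are my assumptions, not from any source in this thread):

```python
# Minimal sketch: pitch classes remove the enharmonic distinction.
# A-flat and G-sharp both land on pitch class 8.

def pitch_class(midi_note):
    """Collapse a MIDI note number to its pitch class (0-11)."""
    return midi_note % 12

a_flat_4 = 68    # assuming the C4 = 60 convention
g_sharp_3 = 56
print(pitch_class(a_flat_4), pitch_class(g_sharp_3))  # both print 8
```

Adding the distinction back, as suggested above, then means carrying something alongside the number (a spelling, a name class, a tuning offset) instead of relying on the letter name alone.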
What I find more important about Live 11 is that Ableton finally made peace with MPE. Hopefully, that augurs well for MIDI 2.0 (which was officially adopted in January 2020, let’s not forget). Now, that’s opening up real possibilities! Yes, “microtuning”. Also having multiple ways to play the same note. A lot of data that you can transmit between a DAW and other devices (including software ones).

DAWs have limitations, sure. DAW-free musicking is indeed freeing. The reason I’m interested in locating a learning process in a DAW instead of notation software is that it’s already happening. Meeting learners where they learn, supporting them in their learning pathways.
Plus, DAWs are incredibly flexible, nowadays. Including freeware, Free Software, and Freemium ones.

Which isn’t taking anything away from Sonic Pi, of course. Quite the contrary! I find that there’s a stronger connection to make between DAW musickers and this whole approach to learning (including Sonic Pi) than between scorers and Sonic Pi on its own.

Nope. Thanks for the tip.
His book’s title sounded intriguing. And there’s nerdy stuff in there. Not sure it provides stuff we don’t have elsewhere.

Ha, funny Kingsbury’s book is in my bookcase, one row above Brinkman!

I had a look at Brinkman and you need to read chapter 6. That’s the one most specifically applicable to your scenario. He goes through everything from pitch class sets, pitch codes (pc), continuous pitch codes (cpc), highlights issues with all of them and then discusses Binomial Representation (BR) and Continuous Binomial Representation (CBR). The latter is probably the one best applicable to your requirements.
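For what it's worth, here is a rough sketch of the idea as I understand it from that chapter: a note becomes a pair of a pitch component and a name component, so enharmonics stay distinct; the "continuous" version folds octave information into both members. All names below are mine, not Brinkman's code, and the octave arithmetic (12 per octave on one side, 7 on the other) is my reading of the scheme rather than a verified implementation.

```python
# Hedged sketch of the binomial idea: a spelled note as a pair
# (continuous pitch code, continuous name code). Enharmonics share
# the first member but differ in the second.

NAME_CLASS = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4, "A": 5, "B": 6}
PC_OF_NATURAL = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def binomial(letter, accidental=0, octave=4):
    """Pair a spelled note's sound and spelling; accidental: +1 sharp, -1 flat."""
    pc = (PC_OF_NATURAL[letter] + accidental) % 12  # pitch class, 0-11
    nc = NAME_CLASS[letter]                         # name class, 0-6
    return (12 * octave + pc, 7 * octave + nc)

# G-sharp 4 and A-flat 4: same first member (sound), different second (spelling).
print(binomial("G", +1), binomial("A", -1))  # (56, 32) (56, 33)
```

Whether this matches Brinkman's BR/CBR in every detail, chapter 6 is the authority; the sketch is just to show why a two-part code sidesteps both the "pure MIDI number" and the "letter name only" problems.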

Don’t get me wrong, MIDI is the right way to go, I’m not trying to put you off. It’s more that music21 peeps lay the ‘blame’ on MIDI’s deficiencies rather than implement Brinkman’s code.

IRCAM are funny. Miller Puckette has never quite forgiven them for commercialising MAX.

Yes, Python does seem to have these functions. SPi has the edge in real time interaction. So much so, I’ll be attempting to teach it to a bunch of technophobic singers!

Partimento, along with some other ideas like the Rule of the Octave, reveals the improvised core that got hammered out of classical music from the 19th century onwards. That led to specific tasks for specific classes of musician: instrumentalists, organists who improvise, and composers who write the material and might also play an instrument and/or improvise.

Some people do challenge the practices. You only need to work in an institution built around Logic and Pro Tools and then talk to them about Ableton. It’s always funny how quite liberal-minded individuals can become vehemently opposed to particular tools of expression, or other ways of achieving similar aims!

I see Ableton as way more than a looping sampler. It enables an expansion of the material forms of what could be considered music. Yes, to all of this regarding MPE. There are many more possibilities now that it’s integrated with Live. Combined with follow actions and various other randomising functions, the number of possibilities has increased. The pad thing is also true of the keyboard: the connection between pressing a pad or key and what comes out has been disrupted.

Yes, the Pyknon book is possibly worth reading, if only for the ‘music as viewed by a programmer, for other programmers’ angle.

Think you might like this video with Chris Ford. Different programming language but interesting none the less.

Another library to consider, though not in Python: David Huron’s Humdrum. You might be able to ‘mine’ it for some more info on pitch sets etc.

I remember that as well. Served as subtext when I listened to Darwin Grosse’s MSP interview.

Precisely!
Part of the reason I’m thinking about Python, though, is “portability”. Which then makes me think about JS, including p5js.
That’s my own rabbithole, this weekend.

Yup. That’s pretty much been our job, in ethnomusicology. Thing is, relatively few of us dig deep in tech.
Simha Arom’s team (mentioned in the description to Chris Ford’s video) has been the exception. He’s surrounded himself with coders and/or people who became coders through this work.
French ethnomusicology tends to be quite turf-based. People who work with Arom don’t talk with other French ethnomusicologists. Not being French is an advantage, in that I can avoid those walls.
Granted, it’s been a while since I’ve delved into this. It might be an occasion to do so.
So, thanks for the Ford link.

This sounds useful:
https://www.ctm-festival.de/festival-2021/programme/schedule-jan/event/event/dismantling-western-bias-in-music-software-and-music-education

(Shared through Twitter. Related to Khyam Allami’s work on Leimma and Apotome. What I’m expecting, is that it’s less about exoticizing and more about decolonizing.)

Of course. I mostly meant to describe the fundamental model behind it.
As for gen functions, we have quite a bit of that in Bitwig Studio (without requiring a superduper premium license à la Live Suite).

Right. Thanks for the tip. Had come across it. What I find most interesting (and didn’t realize at first) is how UNIX-y it is. Precisely not what I need for the learning material itself. Potentially rather useful while building it. The idea would be, then, that I’d pipe things together to produce the files needed, instead of generating them on the fly on the client side.

So…
At this point of my exploration, I do have tools to find modes and subsets of pitch-class sets. In fact, this type of work is so closely related to basic computing that there’s a large variety of ways to do it. Basically, any (Turing-complete) programming language can do, often with convenient shortcuts.
As a result, it’s a neat opportunity to prototype in those diverse languages, adapting the basic routines to any context. That becomes a worthy side project. Which might come in handy for the “helping coders to understand creative processes” part of Inclusive Musical Learning.
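As one example of how little code those basic routines need (the function names and the mod-12 normalization are my own choices, not from any of the libraries mentioned):

```python
# Minimal sketch: every "mode" (rotation) of a pitch-class set,
# each transposed to start on 0, plus subsets of a given size.
from itertools import combinations

def modes(pcs):
    """All rotations of a pitch-class set, each transposed to begin on 0."""
    result = []
    for i in range(len(pcs)):
        rotation = pcs[i:] + pcs[:i]
        root = rotation[0]
        result.append([(pc - root) % 12 for pc in rotation])
    return result

def subsets(pcs, size):
    """All subsets of a given cardinality (e.g. three-note chords in a scale)."""
    return [list(c) for c in combinations(pcs, size)]

major = [0, 2, 4, 5, 7, 9, 11]
print(modes(major)[5])         # sixth mode (Aeolian): [0, 2, 3, 5, 7, 8, 10]
print(len(subsets(major, 3)))  # 35 three-note subsets
```

The same dozen lines port almost verbatim to Ruby, JS, or anything else, which is what makes it such a convenient multi-language prototyping exercise.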

At this point, I should probably focus more on finding ways to build interactive modules which play satisfying sounds while displaying some visual representation. My dream would be to have those modules as “widgets” in EPUB3 eBooks. At the very least, I should be able to integrate them in PressBooks.

All of which brings me further from Sonic Pi, unfortunately. I’ll probably come back to SPi at some other stage, for instance to build text-based tutorials where learners can apply things by pasting the code into SPi. There are bridges between SPi and other things, of course. That might still be too complicated for my needs.

Again, this is all a learning process for me before I can go through steps of design thinking with (other) learners. The ideal situation would be that a group of people with varied competencies in music (including a significant proportion of people who’ve never played a note) would come up with a rough “paper prototype” of what’s needed to accommodate diverse learning needs (across barriers like language, physical abilities, cultural context, learning abilities, gender, etc.). Then, I’d have enough leads on what’s needed on the technical side to contribute to a working prototype.

Yes, tall order. I’m taking my time.

In the meantime, I’ll probably go back to playing with these “things” using the tools with which I’ve been having fun. Unusual chord progressions, semigenerative composition, wind controller exercises, etc.

Thanks for the links, will have a look.

Simha Arom was on the panel for SEM in Paris this year. I managed to attend for one day but work pressures meant I couldn’t make the second.

Interactive modules are a good way to progress. Interaction in general is something that software promises but still hasn’t quite delivered on. One of the reasons SPi works is that it deals with text. The challenge is how to take that text and build a rich sonic world, capable of satisfying an almost limitless range of expressive mediums.

Looking forward to hearing what you come up with, further down the line.

Until next time!

Oh? SEM as the Society for Ethnomusicology? Thought they were hosted by Ottawa, last time.

~And are you in Paris?~ OIC, Westminster.

Thinking about Arom et al. makes me want to reconnect with ethnomusicologists in Paris. As mentioned, it’s been a while. I’m in touch with a few ethnomusicologists here in Montreal, including some who’ve done part of their work in France. Yet my path has led me to other types of work.

It’d also be interesting to connect with French-speaking learners and teachers who use Sonic Pi. It could really help me connect this long-term project with my current day job.

Ha! Just found out about this approach to DAW-based learning (from the maker of Syntorial).

I realize it’s off-topic in Sonic Pi terms (“out-thread”?). Putting it here because it relates to some things @Hussein and I were discussing.

It’s also worth considering how this type of interaction would fit with SPi. Could we create stuff where learners would try to emulate something they hear and get feedback on what was off?
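A crude sketch of what that feedback loop could look like, assuming the target and the learner's attempt are both lists of MIDI notes (the function name and the report format are hypothetical, purely for illustration):

```python
# Rough sketch of ear-training feedback: compare a learner's attempt
# to a target pattern, note by note, and report what was off.

def feedback(target, attempt):
    """Report which positions differ, and by how many semitones."""
    report = []
    for i, (want, got) in enumerate(zip(target, attempt)):
        if want != got:
            report.append((i, got - want))  # position, offset in semitones
    return report

# Second note played a semitone flat:
print(feedback([60, 64, 67], [60, 63, 67]))  # [(1, -1)]
```

A real version would obviously need to handle timing, rhythm, and near-misses rather than exact note matches, but the comparison-and-report loop is the core of the Syntorial-style interaction.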

I’m pretty sure I’ve mentioned Syntorial in the past. It’s that interactive model, applied to subtractive synthesis. You hear a synth sound and you try to reproduce it by tweaking the pulse width modulation or the filter cutoff. I found the approach really clever and I’m dreaming of ways to apply it to “composition”.
Turns out, Audible Genius beat me to it. :wink:
(Wasn’t in any hurry.)

Thing is, AG’s system is proprietary. I’d like to build one which would fit better with my work in Open Education.

I’ll purchase these two lessons from AG and reflect further upon the experience.

As surmised “in-thread”, Sonic Pi might not fit so well. Still, it’s a great environment to prototype this stuff. And maybe create text-based tutorials for use in parallel with the app.

I like Syntorial. My children like using it as well. The middle one likes making beats too, and we go through Ethan Hein’s spreadsheet of classic grooves. I try to get him to make the same patterns but using different environments such as Ableton and GarageBand. His school uses a different web-based environment for music during lockdown, so I’m hoping the different interfaces start to ‘feel’ natural. We also listen to the original beats in context, and once we’ve done this, I get him to vary the patterns by changing parts, adding/subtracting/splitting/transposing etc.

Yes, SEM, I’m on their SIG on decolonisation. I’m probably just a bit too much on the commercial music side of the equation, though I find all of the discussions and readings informative.

There’s a much older Audio ear training book using a MAX/MSP library for matching filters/resonances/boosts and cuts for engineers. And there’s an app called Quiztones.

I think the interesting thing about DAWs is that despite their being around for quite a while, there isn’t really a ‘theory’ per se. There can at times be a quite wide methodological gap between correct tool use and art. For example, Burial’s Untrue is not made in a classic DAW, it’s made in Soundforge, and is probably one of the most influential electronic albums of the past 15-20 years. Yet, as an approach, it will not be featured much as a ‘valid’ creative process in teaching materials the world over due, in some ways, to non-conformism, i.e. not using Logic or Pro Tools. So an Anti-Aestheticist approach, as outlined by Osborne’s schema (2013), is quite difficult for standard DAW teaching to achieve. This is also one of the reasons why I think SPi encourages a bit of the spirit of adventure!

Anyway, all good stuff. Be interested to read your observations of the Building Blocks bundle.

Onwards and upwards!
