Exploring “Modes” of Pitch-Class Sets Using `chord_invert`

Oh? How does that work?
Is it one of 'em to_ methods? Didn’t work with to_s, to_a, or to_i.

Been trying to do some ring operations on those (say, .reverse) and I’m getting undefined method for PCSet.

I haven’t, no. And I have zero knowledge of Pascal. :sweat:

I don’t know Pascal either but I used it to help me create these sorts of operations back when I used to use Macromedia Director. Didn’t do too badly converting the ideas from Pascal to Lingo. Things improved when Lingo got dot syntax and you could create objects etc.

It’s really how he goes about encoding pitch sets using code. As there are pitch sets already known/encoded, such as with Forte amongst others, isn’t it just a case of making a large 2D array of rings/arrays and then randomly pulling one from that 2D set of pitch sets?

I meant literal in that it referred to specific note positions within a chromatic scale, whereas your implementation relies upon distances between note positions. Essentially this is a semantic difference.

The ring chain command is on the line pat = (chord_invert pOut.sort, 5+tick) before you hand it off to play_pattern_timed. For example:

p1 = (chord :G2, '7-5')
p2 = (chord_degree :i, :Ab2, :major, 2)

pOut = p1 + p2 

live_loop :setmodes do
  use_transpose 0
  pat = (chord_invert pOut.sort.mirror.take(3), 5+tick)
  play_pattern_timed pat, [0.25,0.125], release: 0.2
end

Change the number for take to produce different-length lines etc., or remove mirror and replace it with something else. There are some interesting outputs possible.
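For instance, a couple of untested variations on that same pat line, swapping mirror for other ring methods:

pat = (chord_invert pOut.sort.reverse.take(4), 5+tick)  # descending source notes
pat = (chord_invert pOut.sort.rotate(tick).take(3), 3)  # a rotating three-note window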

Hope that helps.

That’s the approach I was thinking of taking, in whichever language. At the very least, to associate each set with common labels (“Mu chord”, “Blues scale”…).
One issue I tend to have with most approaches to PCSs is that only one prime form has its own Forte number (its inversion isn’t distinct). I like the Wikipedia approach (which uses normal form, technically).

I think I get that. And I’d like to merge the two approaches.
Interestingly, scale.rb defines scales by sequences of intervals… and the PCSet uses intervals internally.

Ha! I think I can adapt that to my needs.

I’m still a long way from being able to identify subsets/supersets… but I do have example code here and there which helps.
I really like this one, as I find the code legible at my level.
https://www.mta.ca/pc-set/calculator/pc_calculate.html

Ha! I just solved my issue with NoMethodError!
It was, unsurprisingly, very simple. The actual array of notes is in .pitches.
So, if I want to use those convenient methods for working with arrays, I need to address those pitches directly.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
puts noodl.invert ## Inverts and keeps the sequence from high to low
#puts noodl.invert.sort ## Throws a NoMethodError
puts noodl.invert.pitches.sort ## Works as expected

Kinda like the .notes method from PCSet. Wonder if I could call that method directly from an arbitrary array…
Sounds easy enough, to be honest…

  def notes(middle_c = 0)
    noteArray = ['C','D♭','D','E♭','E','F','G♭','G','A♭','A','B♭','B'] ## Converting to flats with the actual sign
    if @base != 12 then raise StandardError, "PCSet.notes only makes sense for mod 12 pcsets", caller end
    out_string = String.new
    transpose(-middle_c).pitches.each do |p|
      out_string += noteArray[p] + ", "
    end
    out_string.chop.chop
  end

  def transpose(interval)
    PCSet.new @pitches.map {|x| (x + interval) % @base}
  end

EDIT:
Ah, yes… Somewhat clunky. Rather useful, I find.
Reconvert an array to a PCSet and call .notes on the result.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
puts noodl.prime # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.invert.prime # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.normal_form.zero # Results: [0, 1, 2, 3, 6, 8, 9]
puts noodl.invert.normal_form.zero # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.transpose(7).notes # Results: "G, A♭, B, C, D♭, D, F"
puts PCSet.new(noodl.invert.pitches.sort).transpose(7).notes # Results: "G, A, C, D♭, D, E♭, G♭"

Which really makes it obvious that the two scales are quite different. It’s the exact same prime form for both ([0, 1, 2, 3, 6, 7, 9]). The normal_form at zero for my input is slightly different ([0, 1, 2, 3, 6, 8, 9]). The resulting G scale is really different.
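If that array-to-PCSet round trip gets tedious, a throwaway helper could wrap it (just a sketch; notes_of is a made-up name):

define :notes_of do |arr, shift = 0|
  PCSet.new(arr).transpose(shift).notes
end

puts notes_of(noodl.invert.pitches.sort, 7)  # same output as the last line above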

Sounds like you’re really making progress. It’s great working through these issues and coming out the other side!

Looking forward to more.

Of course, finding all subsets is easy… when you know where to look.

Was focusing on pitch-class sets. Doing a search for “ruby finding subsets in a set”, I got there:

And, yes, the .combination method does work.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
noodlsubs = []

for i in 0..(noodl.length) do
  noodlsubs = noodlsubs + noodl.pitches.combination(i).to_a
end

puts noodlsubs

Of course, I now need to cull the results. And/or categorize them. If multiple combinations are tightly related (like modes of symmetrical scales, say), I pretty much need to know that. There again, Set Theory should come to the rescue. Getting primes for each combination will be helpful, I guess. Then, grouping subsets sharing the same prime… and somehow comparing them.
Once I have a list of unique combinations, I can process each of them as a PCSet and, for instance, output the notes.
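Something along these lines might do the grouping, assuming PCSet#prime behaves on any non-empty subset (an untested sketch):

grouped = noodlsubs.reject(&:empty?).group_by do |sub|
  PCSet.new(sub).prime.pitches
end
grouped.each do |prime, subs|
  puts "#{prime}: #{subs.length} subsets"
end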

Better yet, maybe there’s a way to output a MIDI file with all of them? Unlike the rest of my journey, this is one which is more about SPi itself than about Ruby.
I know I can output MIDI and there might be a command-line option to capture those MIDI messages. Or is there an SPi construct which actually saves this kind of output? That’s one which could have broader appeal.

After that, I’ll probably work with said MIDI file in a DAW. My dream, though, would be to have a way to embed MIDI files in learning material with some way to visualize the content… without staff notation. My thinking has focused on “pianoroll” notation, which would be eminently appropriate in this specific case. (I’d argue that it’d work for an overwhelming proportion of what is labelled “music theory”.)

…then if I can get all of that, I could create some Open Educational Resources based on the musical applications of Set Theory.

You seem to be on a roll!

Have you looked at this midilib link from @amiika? It might help you out.
I know @robin.newman has done similar work. Perhaps have a look back through posts.
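For reference, midilib’s README example has roughly this shape (untested here; event class names such as NoteOn/NoteOff have varied slightly between versions):

require 'midilib/sequence'
require 'midilib/consts'

seq = MIDI::Sequence.new
track = MIDI::Track.new(seq)  # meta track for tempo
seq.tracks << track
track.events << MIDI::Tempo.new(MIDI::Tempo.bpm_to_mpq(120))

track = MIDI::Track.new(seq)  # note track
seq.tracks << track
quarter = seq.note_to_delta('quarter')
[0, 1, 4, 5, 6, 7, 10].each do |pc|
  track.events << MIDI::NoteOn.new(0, 60 + pc, 127, 0)
  track.events << MIDI::NoteOff.new(0, 60 + pc, 127, quarter)
end

File.open('noodl.mid', 'wb') { |file| seq.write(file) }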

I’ve spent the last two-ish days going through Jupyter Notebooks. Sounds ideal for your learning resource.

Couldn’t find the link earlier, but this is an even better example using React - Let’s Learn About Waveforms

This is cool stuff. I had a little play and added sound output both via synth and via MIDI to my Vital synth. Sounded nice. My code could be refined; it was a quick lash-up.
NB amend the path to pcset.rb and the midi port: to suit your setup.
If anyone else wants to try: run pcset.rb alone FIRST before adding the other code, or you may get an error.

use_synth :tri
use_midi_defaults port: "iac_driver_sonicpi", channel: 1
use_debug false
define :mplay do |nl|
  if nl.is_a?(Array)
    nl.each do |x|
      midi x, sustain: 0.2
    end
  else
    midi nl, sustain: 0.4
  end
end

run_file "~/Documents/SPfromXML/pcset.rb"

noodl = PCSet.new([0,1,4,5,6,7,10])
noodlsubs = []
base_note = :c4
use_transpose note(base_note)
for i in 0..(noodl.length) do
  noodlsubs = noodlsubs + noodl.pitches.combination(i).to_a
end

noodlsubs.each.with_index do |x, i|
  puts x if i > 0
  x.each do |y|
    play y, release: 0.2
    mplay y
    sleep 0.2
  end
  play x, release: 0.4
  mplay x
  sleep 0.5
end

EDIT: maybe add use_midi_logging false

Yeah, I came to the same conclusion after work today. Especially because pandas works so well in Jupyter Notebooks.
I’ve used Jupyter Notebooks for diverse things over the years, including a knowledge transfer document. Plus, Google Research has Colab, an online version which can use GitHub as a sharing platform. For a noncoder like me, it’s close to ideal. I also noticed some attempts by Pressbooks.org to integrate Jupyter.
A few days ago, I did check on some Python/Jupyter resources having to do with music. I did work with sound in Python when I first got into Raspberry Pi. What’s neat, though, is that it’s also possible to do so in Jupyter Notebooks. Haven’t checked on support for MIDI files but I’d be surprised if it were complicated.

Really nice! Exactly the logic I’m applying here. I want to explore the possibilities of that scale, especially in terms of hearing things together. So, what would be chord/scale relationships in jazz improvisation methods.

Something I notice is that common forms are really striking. Like [0,4,7] (the major triad).

One thing I’d probably do, in my case, is try different voicings, including those which are really widely spread out (especially in the lower register). Those have a large impact on perceived dissonance, for harmonic intervals. Probably through a random +12 or -12 here and there. Or maybe something more structured.
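For the random version, maybe something as simple as this per note (a sketch, biased towards leaving notes in place):

voiced = pat.map { |n| n + [-12, 0, 0, 12].choose }
play_pattern_timed voiced, [0.25, 0.125], release: 0.2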

And, yes, setting a base note is essential for playback (I was just thinking about that name as I was walking; because it’s not the root or tonic, it’s really the base).
For MIDI files, I was thinking that having them start at 0 (C-2) might actually make sense as they remain outside the audible range (on most synths) yet you can easily bring them out by shifting octaves.

Speaking of MIDI files and use_midi_logging, would you happen to know a way to produce MIDI files from SPi?

Thanks for this as well!
Yes, it’s the kind of example I find inspiring. There was one, recently, explaining the electronics behind some Roland synths. I found the pedagogical value to be rather high, in part because the author was providing guidance in the learning process. Comeau’s example has some of that as well, including in questions asked here and there.
Also reminds me of Bret Victor’s work on learnable programming. I find it really applies to music. Because, well, there’s a lot of coding in music.

(For the record, I’ve often described the pedagogical value of @samaaron’s tutorials with a special emphasis on those moments when he asks learners to think about what might happen before trying it. As Veritasium’s Derek Muller contends, this is how we avoid the common problem of “science videos” and such reinforcing preconceptions instead of helping people gain new knowledge.)

Also useful to note that music21 works within Jupyter Notebooks…
http://web.mit.edu/music21/doc/developerReference/installJupyter.html

Yes. Had not tried music21 before. Have installed it and am playing around in the notebook. As I have MuseScore on this computer, it’s also displaying notation in the notebook. Truly powerful stuff.

I think a voicing opt for SPi would be a really good addition. Voicing plus inversion leads to really interesting combinations. You could probably set a midpoint where anything below it is voiced with a larger spread, and as notes go over the midpoint they become close-voiced. I do a similar thing in modular, where the higher the pitch/frequency, the more it dampens the VCA, and the lower the frequency, the more the VCA opens up. The same principle can be applied to filters etc.
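A rough sketch of that midpoint rule (the numbers are arbitrary, and voice_spread is a made-up name):

define :voice_spread do |notes, midpoint = 60|
  notes.map do |n|
    n < midpoint ? n - 12 : n  # widen anything below the midpoint by an octave
  end
end

play voice_spread(chord(:c3, :m7))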

Agree about @samaaron’s vids and documentation. Both convinced me to dip my toe in the water, as it were, and get active in the process.

Have you read or heard of Hal Galper’s book Forward Motion? His website has some info and articles, but I think it’s worth getting. I have both hard copy and electronic! He’s one of those last links to the first-gen bop players. The first chapter is a great intro to the practice, where he takes a Bach violin piece and then transforms it into a dissonant jazz improv, yet still maintains Bach’s voice leading.

Giving it a whirl as well. Powerful indeed.
Of course, there are things I wish I could change, from the practical to the “philosophical”.

On a practical level, I really wish there were some form of embedded viewer for MusicXML. On my Mac, the most recent version of MuseScore throws lengthy errors every time I do .show(). I know there are other options, but they’re not explained really well. An embedded viewer would be really swell. Especially if it were something like this:

Also practical: I have an issue with the implementation decision to use ‘-’ as ‘♭’. I get the argument that ‘b’ would make it confusing in terms of the note ‘B’. Yet there are plenty of ways to make things confusing in music21, and it’s common to use case to distinguish the note from the alteration. In fact, there’s an actual confusion with the note C-1. If I type it as such, it’s MIDI note 23, “C♭ octave 1”. If I display MIDI note 0, it’s also displayed as C-1.
Besides, I wish ‘♯’ and ‘♭’ were used for inline display instead of ‘#’ and ‘-’.
So, sure, I could file an issue and explain things in a way which would be convincing to people who are open to suggestions. Better yet, I could do a PR. Yet I’m sure I’d be told it’d break everything since music21 has been going on for years and getting into these lower octaves is too uncommon a case. Problem with that is, people who use MIDI do conceive of C-1 as a useful message. Indeed, we typically have MIDI note 0 as C-2 and use that for, say, keyswitches.

My deeper wishes are with the overall approach. Which gets us to these practical decisions (including the use of MusicXML). As the library is designed (and implemented) for computer-assisted musicology and since musicology has been dominated by “Western Art Music”, music21 focuses on “standard notation”. Sure, it has ways to do all sorts of crazy things with notation. And that gets really involved, really quickly. (Heard of half-flats but… triple flats? Suuuure.)
They make comments about MIDI not being able to distinguish enharmonic notes from one another. Yes, it’s true that G♭ is a different note from F♯… in a given context. There are other ways to get that through. Especially now that we have wider support for diverse tuning systems (including in Sonic Pi!).

And the documentation could do with a bit of a remake, if it were up to me. Let’s quickly get to some of the most interesting things and backtrack to the knowledge needed to make those work. While I don’t drive, I feel like it’s the instruction manual for a powerful car which spends more time explaining how you can use the trunk than telling you how to start the engine.

I mean, I get it. It’s a powerful library for someone who’s steeped in a certain tradition. It’s not meant for the kind of case I have in mind, which involves people who might have had experience with DAWs or other tools for electronic musicking and haven’t necessarily “learnt to read music”.

One solution would be to fork it. That’s really not practical for me as a noncoder.
Since it’s a library and it does work with Jupyter, I might as well create some of my own interactive documentation and only use the library as needed.

Oh, and the thing about alterations. Isn’t it a contradiction that I want to have ‘♯’ and ‘♭’ when I want to avoid “standard notation”? Thing is, these alterations are common in musicking practice, including among people who never used “sheet music”. And those symbols are quite distinctive. So I find value in them in the learning process. Even more so than with the Anglocentric letter notation itself (which I wouldn’t avoid; just want to nudge people towards numbered notations).

That’s a lot.

I think for your distinctions, could you not create a wrapper object to manage this? This is one of the reasons why Brinkman’s CBR (Continuous Binomial Representation) avoids the issues that you’re highlighting.
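A minimal sketch of the <pitch class, name class> pairing at the heart of BR (not Brinkman’s exact encoding; letter classes here run C=0 to B=6):

Binom = Struct.new(:pc, :nc) do
  def to_s
    letters = %w[C D E F G A B]
    naturals = [0, 2, 4, 5, 7, 9, 11]      # pc of each natural letter
    offset = pc - naturals[nc]             # accidental as signed semitones
    offset -= 12 while offset > 6          # wrap across the octave boundary
    offset += 12 while offset < -6
    letters[nc] + (offset >= 0 ? '♯' * offset : '♭' * -offset)
  end
end

puts Binom.new(6, 3)  # F♯
puts Binom.new(6, 4)  # G♭ (same pc, different name class)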

I have never believed that MIDI and notation are parallels. MIDI is really a representation of the piano, not of everything available. To a machine, MIDI note 23 is precisely that. To a human it can be different things in different systems. Best to separate them, in my view.

Definitely agree about ‘♯’ and ‘♭’. That has struck me as odd. Seems like a bit of ‘lazy’ programming from an earlier period to resolve the differences. The use of ‘-’ forces the user to adopt an approach that breaks with its symbolic representation, especially as ‘-’ can also mean minor in other symbolic systems.

I think for your other notation systems, you might need to go to IRCAM and look at what they have. I’ve always wondered why notation systems do not allow you to define an object or shape (as a macro structure) which you then populate with any number of events.

Could you not generate the midi for what you want to explore and then import these into music21? One of the reasons I like SPi is quick access to functions to rotate, reverse, mirror, reflect, scale, invert, transpose etc. Essentially make many permutations and combos, as @samaaron has done the heavy lifting.
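e.g., a quick sketch of a few of those built-ins on a ring:

s = (ring 0, 1, 4, 5, 6, 7, 10)
puts s.reverse                        # retrograde
puts s.mirror                         # forwards then backwards
puts s.rotate(2)                      # start two steps in
puts s.map { |p| (p + 7) % 12 }.sort  # transpose by a fifth, mod 12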

Most notation conventions have been derived from the Germans! The US adopted Germanic conventions such as 1/4, 1/2, 1/8 notes. The UK adopted a very bizarre approach that ‘breaks’ the more straightforward connection employed by the German system. Symbolic shorthand ‘♯’ and ‘♭’ is a standard; it’s just that institutions created a ‘false’ division that was not present in practice. This is why classical instrument players lost the art of improvisation. In this sense I like the Italian school of Partimento. The sketches are roughly analogous to a chord/lead sheet, where the trainee composer had to flesh out the skeleton into a full piece. The advantage of the sketch is that it is an organised schema, much like a tune in the Real Book etc.; it just requires realising. Partimento also endured a hard time from the newly formed conservatoires and fell out of fashion until the 1940s, when there was a resurgence of interest. I have some copies of the sketchbooks, and they place a completely different perspective on musical training in respect of the practical arts and what things could have been like if it were not for the conservatoires (French and English).

In the end, I think you’ll need to decide whether the advantages outweigh the issues and how these issues, at least the known ones, can be managed as you do your work. Despite how much these pieces of software have going for them, it is always surprising how quickly individual cases expose flaws/shortcomings etc. Even in mature DAWs, I still run into stuff that’s head-scratching. Ableton 11 has only just got enharmonic note naming. Previously it made it really difficult, when teaching, to say A# is really Bb etc., and to make excuses for the shortcomings.

BTW: did you look at Pyknon? I can’t get it to install via conda or pip, but it looks useful.

Sure is. And I don’t even feel bad about it. :slight_smile:

Yes, there are solutions. I’m generally using Sonic Pi to “think out loud”.

And I’ll get back to the MIDI-based grid notation (“piano roll” et al.). It’s become a de facto standard. Which is indeed pianocentric. My approach to learning does entail using that as leverage.
Much easier to learn than staff notation, I find.

(I’ll go back to the rest, including Galper. Slightly late for work. :wink: )

Really good point. Was thinking about something similar though not really thinking about a breakpoint. Almost like mapping different sounds to different MIDI notes… :thinking:

I’m sure Sam has some testimonials. What might be missing, among those who never tried Sonic Pi, is the experience of exploring from that point on.
In fact, I’ve spoken with some people who’ve gone through the tutorial on their own and were explaining that it didn’t stick with them. Based on those conversations, I have a hunch as to what happens with SPi, sometimes. They get to it with a different mindset from the one which works so well with that musicking environment. Having observed it with teachers during workshops, it’s this kind of “I need a better rationale for learning this… instead of something else”.
So, testimonials might not be enough.

As for Galper, I do get what’s interesting, here. It does remind me of a lot of what I notice on YouTube. And it does fit with the “theory” part of “music theory”. My instinct as a researcher is to test the hypotheses contained… and possibly check for biases. Thankfully, some of those YouTubers have been addressing some of these things.

Last night, I spoke with the same music student (and barista) who clued me into Set Theory. He gave me several suggestions, including about Henry Threadgill, Steve Coleman, and other Pi Recordings artists. It does sound like those might come closer to the exploratory approach I might want to take.

Good to know! Thanks for the heads-up. (It rings a bell. Have yet to get into it. Might also relate to the thread about Ziffers.)

Fair. Part of the reason I still want to try using MIDI as a basis is that it’s in practical use by large numbers of people (even when they sadly call themselves “MIDIots”, they typically use MIDI without noticing).
Another part relates to what you’re saying: I find it easier to make the point that they’re relative values by bringing them back to MIDI. Part of what I have in mind is quite similar to the transposition functions in Sonic Pi or even variations we can apply to pitch (for instance using different tunings). We always use a reference and it’s useful to realize that this reference shifts in diverse ways. The advantage of having a shared reference (such as MIDI’s 0–127 assignments for notes) is that we can make those shifts very explicit.

Besides, I plan to use a number of “piano roll” representations which are equivalent to “folding view”. So, you can easily get an idea of internal movement in a cell without associating individual notes to their pitches. That’s actually the part which is closer to Rousseau’s points (in 1748) about numbered notation.

Exactly. The fact that C-1 can mean either B1 or C-1 (MIDI notes 23 and 0) is really strange. Haven’t yet noticed how they notate the minor triad, but C-7 would be quite legitimate as a chord notation yet clash with the note B7.
I did eventually find out that you can display ‘♭’ and ‘♯’ by using .unicodeName (or .unicodeNameWithOctave). Haven’t found a way to have those same characters as input.

Yeah, I think we’re on to something. And, yes, IRCAM folks have often wondered about similar things.
Which gives me flashbacks from my college days in music school as well as my university years hanging out in the Faculty of Music. Especially conversations with a friend of mine from those days. (The music student/barista is his son.)
As Francophones, my friend and I had a lot to say about IRCAM. Not all positive, especially at the time. Still, we got echoes of research going on there and would encounter people who’d work there for some time. Eventually, when Philip Tagg came to Montreal, it was after he had done some work with IRCAM, if I remember correctly.

True. Which is something I do plan to do. At the same time, I get the impression that some of these functions might be easily accessible in Python as well. (And while I realize that music21 is really about the analysis side of things, it does sound like we can leverage it to create patterns as well.)

Interesting. I don’t recall encountering that.

As a bilingual ethnomusicologist, let me just say that this phrase captures something which has been on my mind for decades. Including before I heard about Kingsbury.
http://tupress.temple.edu/book/3357

Hm… :thinking:
My approach is more about the co-design process. I’m not making decisions. I’m exploring possibilities. I’ll eventually learn enough that I’ll be able to ideate, prototype, and test with others (particularly with learners whose understanding of music varies a lot). Especially since so much of the supposed advantages of existing approaches go unchallenged, at least in practice.

And I don’t perceive that many advantages in keeping enharmonic note naming. I sure understand why it exists. In a specific tradition. (Including the fact that the two notes need not sound the same.) I find that the distinction has outlived its purpose and confuses people more than anything.

Going back to musical applications of Set Theory. Though pitch classes explicitly remove the distinction between enharmonics, there’s no reason not to add them back if we do get different notes. Then, though, they cease to be “the same note on the piano” and there are other ways to name or represent them. There are also other ways to treat them, which need not refer to a preset functional system. A lot of people use Live without thinking through notes. After all, it’s a big looping sampler. So, a sample of a whole melodic pattern that you put on the “A♭” pad on your controller isn’t playing that note. You’re playing with the sound itself.
What I find more important about Live 11 is that Ableton finally made peace with MPE. Hopefully, that augurs well for MIDI 2.0 (which was officially adopted in January 2020, let’s not forget). Now, that’s opening up real possibilities! Yes, “microtuning”. Also having multiple ways to play the same note. A lot of data that you can transmit between a DAW and other devices (including software ones).

DAWs have limitations, sure. DAWfree musicking is indeed freeing. The reason I’m interested in locating a learning process in a DAW instead of notation software is that it’s already happening. Meeting learners where they learn, supporting them in their learning pathways.
Plus, DAWs are incredibly flexible, nowadays. Including freeware, Free Software, and Freemium ones.

Which isn’t taking anything away from Sonic Pi, of course. Quite the contrary! I find that there’s a stronger connection to make between DAW musickers and this whole approach to learning (including Sonic Pi) than between scorers and Sonic Pi on its own.

Nope. Thanks for the tip.
His book’s title sounded intriguing. And there’s nerdy stuff in there. Not sure it provides stuff we don’t have elsewhere.

Ha, funny: Kingsbury’s book is in my bookcase, one row above Brinkman!

I had a look at Brinkman, and you need to read chapter 6. That’s the one most specifically applicable to your scenario. He goes through everything from pitch class sets, pitch codes (pc), and continuous pitch codes (cpc), highlighting issues with all of them, and then discusses Binomial Representation (BR) and Continuous Binomial Representation (CBR). The latter is probably the one best applicable to your requirements.

Don’t get me wrong, MIDI is the right way to go; I’m not trying to put you off. It’s more that the music21 peeps lay the ‘blame’ on MIDI’s deficiencies rather than implement Brinkman’s code.

IRCAM are funny. Miller Puckette has never quite forgiven them for commercialising MAX.

Yes, Python does seem to have these functions. SPi has the edge in real-time interaction. So much so that I’ll be attempting to teach it to a bunch of technophobic singers!

Partimento, along with some other ideas like the Rule of the Octave, reveals the improvised core that got hammered out of classical music from the 19th century onwards, leading to specific tasks for specific classes of musician, i.e. instrumentalists, organists who improvise, and composers who write the material and might play an instrument (or several) and/or improvise.

Some people do challenge the practices. You need to work in an institution built around Logic and Pro Tools and then talk to them about Ableton. It’s always funny how quite liberal-minded individuals can become vehemently opposed to tools of expression, or other ways of achieving similar aims!

I see Ableton as way more than a looping sampler. It enables an expansion of the material forms of what could be considered music. Yes to all of this regarding MPE. There are many more possibilities now that it’s integrated with Live. Combined with follow actions and various other randomising functions, the number of possibilities has increased. The pad thing is also true of the keyboard: the connection between pressing a pad/key and what comes out has been disrupted.

Yes, the Pyknon book is possibly worth reading, if only for the ‘music as viewed by a programmer, for other programmers’ angle.

Think you might like this video with Chris Ford. Different programming language, but interesting nonetheless.

Another library to consider, though not in Python: David Huron’s Humdrum. You might be able to ‘mine’ it for some more info on pitch sets etc.