Exploring “Modes” of Pitch-Class Sets Using `chord_invert`

Simple idea that took a few attempts: I wanted to explore the “modes” (rotations) of a given pitch-class set.

Ended up with:

use_bpm 80
use_synth :blade
f7_19 = (ring 0, 1, 2, 3, 6, 8, 9)

live_loop :setmodes do
  use_transpose 47
  play_pattern_timed (chord_invert f7_19, 5 + tick), [0.25, 0.125]
end

I find chord_invert more suitable than rotate because (at least in my first attempts, with a scale) rotate sticks to the same octave.
If I use a scale instead of a ring, I need to add .but_last(1), because otherwise it doubles the octave above the root.
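The difference can be sketched in plain Ruby (outside Sonic Pi, so these are ordinary arrays, and `invert_up` is just a hypothetical stand-in for what `chord_invert` does with positive shifts):

```ruby
pcs = [0, 1, 2, 3, 6, 8, 9]

# rotate reorders the same pitches: everything stays in the same octave
rotated = pcs.rotate(1)        # [1, 2, 3, 6, 8, 9, 0]

# an invert-style rotation instead moves the lowest note up an octave
# once per inversion, so the line keeps climbing
def invert_up(notes, n)
  n.times { notes = notes.drop(1) + [notes.first + 12] }
  notes
end

inverted = invert_up(pcs, 1)   # [1, 2, 3, 6, 8, 9, 12]
```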

Without use_transpose, it starts “arpeggiating” from note 0, which isn’t audible. And my favourite mode is actually the fifth mode from the “prime form” (0,1,2,3,6,8,9). On a windcontroller, that’s (G, A♭, B, C, D♭, F). As mentioned elsewhere, that’s my “noodling scale”. Really fun to play with this, especially with a few ideas from Set Theory.
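For the record, that mode can also be computed directly on the pitch classes; `pc_mode` here is a hypothetical helper, not a Sonic Pi built-in. Starting at index 5 reproduces the rotation the `5 + tick` starting point lands on first:

```ruby
# k-th "mode" of a pitch-class set: start on element k,
# then transpose that element down to 0 (mod 12)
def pc_mode(pcs, k)
  root = pcs[k % pcs.length]
  pcs.map { |p| (p - root) % 12 }.sort
end

prime = [0, 1, 2, 3, 6, 8, 9]
pc_mode(prime, 0)  # => [0, 1, 2, 3, 6, 8, 9]
pc_mode(prime, 5)  # => [0, 1, 4, 5, 6, 7, 10]
```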

In a way, I kind of wish I could define any pitch-class set as a scale, and use the scale operations. Still, it works fairly well.

I also wish I could display the note info for each note. Guess I would have to rebuild play_pattern_timed to add puts note_info for each note.

Even better would be a way to output as a MIDI file. Yet even better still would be to have some kind of note visualization (say, in a note grid like a “piano roll”).

At this point, though, I’ll probably try to explore “subsets”: what are all the note combinations in such a pitch-class set. In other words, which chords are found in this scale.

It’d also be fun to apply Hanon-style patterns to this set, as I can do in the Tessitura Pro app.

Hi @enkerli not sure this will help in terms of defining pitch class sets but here goes.

Obv you can split your fav scale into a number of intervals. It is essentially three double stops a major third apart:
G and B
Ab and C
Db and F

You could use the chord degree and only take the first two notes then combine everything. This code is quick and dirty:

p1 = (chord_degree :i, :G3, :major, 2)
p2 = (chord_degree :i, :Ab3, :major, 2)
p3 = (chord_degree :i, :Db4, :major, 2)

pOut = p1 + p2 + p3
puts pOut.sort

live_loop :setmodes do
  use_transpose 0
  pat = (chord_invert pOut.sort, 5+tick)
  play_pattern_timed pat, [0.25,0.125], release: 0.2
end

I changed the transpose as chord_degree identifies a register.

Obvs. with pat you can apply any number of ring chain commands to get variations for your outputs.
Not sure it behaves precisely the way you intend but might assist.

Thought of a slightly more economical way. Divide your fav scale into one four-note structure (G, B, Db, F), which in Sonic Pi is the '7-5' chord, and a two-note structure (Ab, C). The end result is the same.

p1 = (chord :G2, '7-5')
p2 = (chord_degree :i, :Ab2, :major, 2)

pOut = p1 + p2 

live_loop :setmodes do
  use_transpose 0
  pat = (chord_invert pOut.sort, 5+tick)
  play_pattern_timed pat, [0.25,0.125], release: 0.2
end

Not quite what I had in mind. (Eventually, I’d really just want to play with any arbitrary pitch class set without having to think about what it includes.)
Still interesting, especially since I didn’t know about concatenating rings in this way.

Sure. I changed your PC Set to a literal, to draw from the different scales already in SPi.
You can split the scales into upper/lower tetrachords, triads and dyads.

Have you read Alexander Brinkman’s Pascal programming for music research which is full of encoding music as PC sets for analytical purposes. You might be able to customise for your purposes.


Oh? How does that work?
Is it one of 'em to_ methods? Didn’t work with to_s, to_a, or to_i.

Been trying to do some ring operations on those (say, .reverse) and I’m getting undefined method for PCSet.

I haven’t, no. And I have zero knowledge of Pascal. :sweat:

I don’t know Pascal either but I used it to help me create these sorts of operations back when I used to use Macromedia Director. Didn’t do too badly converting the ideas from Pascal to Lingo. Things improved when Lingo got dot syntax and you could create objects etc.

It’s really how he goes about encoding pitch sets using code. As there are pitch sets already known/encoded, such as with Forte amongst others, isn’t it just a case of making a large 2D array of rings/arrays and then randomly pulling one from that set of pitch sets?
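Sketched as code, that table idea could look like this (the labels and prime forms below are illustrative entries, not a complete or verified Forte catalogue):

```ruby
# hypothetical label => prime-form lookup table
FORTE_SETS = {
  "3-11" => [0, 3, 7],
  "7-31" => [0, 1, 3, 4, 6, 7, 9],
  "7-19" => [0, 1, 2, 3, 6, 7, 9]
}

# randomly pull one labelled set from the table
label, prime = FORTE_SETS.to_a.sample
```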

I meant literal in that it referred to specific note positions within a chromatic scale, whereas your implementation relies upon distances between note position. Essentially this is a semantic difference.

The ring chain command is on the line pat = (chord_invert pOut.sort, 5+tick) before you hand it off to play_pattern_timed. For example:

p1 = (chord :G2, '7-5')
p2 = (chord_degree :i, :Ab2, :major, 2)

pOut = p1 + p2 

live_loop :setmodes do
  use_transpose 0
  pat = (chord_invert pOut.sort.mirror.take(3), 5+tick)
  play_pattern_timed pat, [0.25,0.125], release: 0.2
end

Change the number for take to produce different-length lines etc., or remove mirror and replace it with something else. There are some interesting outputs possible.

Hope that helps.

That’s the approach I was thinking of taking, in whichever language. At the very least, to associate each set with common labels (“Mu chord”, “Blues scale”…).
One issue I tend to have with most approaches to PCSs is that only one prime form has its own Forte number (its inversion isn’t distinct). I like the Wikipedia approach (which uses normal form, technically).
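For what it’s worth, normal form can be sketched in plain Ruby. This follows the Rahn-style “smallest span, then packed to the left” rule, which matches what the PCSet library reports for my set:

```ruby
# normal form: the rotation of the sorted set with the smallest total
# span, ties broken by packing small intervals to the left (Rahn-style)
def normal_form(pcs)
  sorted = pcs.map { |p| p % 12 }.uniq.sort
  rotations = sorted.length.times.map do |i|
    r = sorted.rotate(i)
    r.map { |p| (p - r.first) % 12 }
  end
  rotations.min_by { |r| [r.last, *r[1..-1]] }
end

normal_form([0, 1, 4, 5, 6, 7, 10])  # => [0, 1, 2, 3, 6, 8, 9]
```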

I think I get that. And I’d like to merge the two approaches.
Interestingly, scale.rb defines scales by sequences of intervals… and the PCSet uses intervals internally.

Ha! I think I can adapt that to my needs.

I’m still a long way from being able to identify subsets/supersets… but I do have example code here and there which helps.
I really like this one, as I find the code legible at my level.
https://www.mta.ca/pc-set/calculator/pc_calculate.html

Ha! I just solved my issue with NoMethodError!
It was, unsurprisingly, very simple. The actual array of notes is in .pitches.
So, if I want to use those convenient methods for working with arrays, I need to address those pitches directly.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
puts noodl.invert ## Inverts and keeps the sequence from high to low
#puts noodl.invert.sort ## Throws a NoMethodError
puts noodl.invert.pitches.sort ## Works as expected

Kinda like the .notes method from PCset. Wonder if I could call that method directly from an arbitrary array…
Sounds easy enough, to be honest…

  def notes(middle_c = 0)
    noteArray = ['C','D♭','D','E♭','E','F','G♭','G','A♭','A','B♭','B'] ## Converting to flats with the actual sign
    if @base != 12 then raise StandardError, "PCSet.notes only makes sense for mod 12 pcsets", caller end
    out_string = String.new
    transpose(-middle_c).pitches.each do |p|
      out_string += noteArray[p] + ", "
    end
    out_string.chop.chop
  end
  def transpose(interval)
    PCSet.new @pitches.map {|x| (x + interval) % @base}
  end
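The method quoted above is indeed easy to lift out; `note_names` below is a hypothetical standalone version that works on any plain array of pitch classes (same flat spellings as PCSet#notes):

```ruby
NOTE_NAMES = ['C', 'D♭', 'D', 'E♭', 'E', 'F', 'G♭', 'G', 'A♭', 'A', 'B♭', 'B']

# spell any array of pitch classes with flats, PCSet#notes-style
def note_names(pitches)
  pitches.map { |p| NOTE_NAMES[p % 12] }.join(', ')
end

note_names([7, 8, 11, 0, 1, 5])  # => "G, A♭, B, C, D♭, F"
```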

EDIT:
Ah, yes… Somewhat clunky. Rather useful, I find.
Reconvert an array to a PCSet and address .notes of that array.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
puts noodl.prime # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.invert.prime # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.normal_form.zero # Results: [0, 1, 2, 3, 6, 8, 9]
puts noodl.invert.normal_form.zero # Results: [0, 1, 2, 3, 6, 7, 9]
puts noodl.transpose(7).notes # Results: "G, A♭, B, C, D♭, D, F"
puts PCSet.new(noodl.invert.pitches.sort).transpose(7).notes # Results: "G, A, C, D♭, D, E♭, G♭"

Which really makes it obvious that the two scales are quite different. It’s the exact same prime form for both ([0, 1, 2, 3, 6, 7, 9]). The normal_form at zero for my input is slightly different ([0, 1, 2, 3, 6, 8, 9]). The resulting G scale is really different.


Sounds like you’re really making progress. It’s great working through these issues and coming out the other side!

Looking forward to more.


Of course, finding all subsets is easy… when you know where to look.

Was focusing on pitch-class sets. Doing a search for “ruby finding subsets in a set” I got there:

And, yes, the .combination method does work.

run_file "/Users/alex/Documents/GitHub/Ruby-PCSet/pcset.rb"
noodl = PCSet.new([0,1,4,5,6,7,10])
noodlsubs = []

for i in 0..(noodl.length) do
  noodlsubs = noodlsubs + noodl.pitches.combination(i).to_a
end

puts noodlsubs

Of course, I now need to cull the results. And/or categorize them. If multiple combinations are tightly related (like modes of symmetrical scales, say), I pretty much need to know that. There again, Set Theory should come to the rescue. Getting primes for each combination will be helpful, I guess. Then, grouping subsets sharing the same prime… and somehow comparing them.
Once I have a list of unique combinations, I can process each of them as a PCSet and, for instance, output the notes.
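One rough way to do that grouping, sketched in plain Ruby (`transposition_class` is a hypothetical helper: it keys each subset by its smallest zeroed rotation, so transpositions of the same shape collapse together):

```ruby
# canonical key for a subset under transposition: zero every rotation
# of the sorted set and keep the lexicographically smallest result
def transposition_class(pcs)
  sorted = pcs.sort
  sorted.length.times.map do |i|
    r = sorted.rotate(i)
    r.map { |p| (p - r.first) % 12 }.sort
  end.min
end

subsets = [[0, 4, 7], [2, 6, 9], [0, 1, 2]]
grouped = subsets.group_by { |s| transposition_class(s) }
# [0, 4, 7] and [2, 6, 9] fall into the same group: both are major triads
```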

Better yet, maybe there’s a way to output a MIDI file with all of them? Unlike the rest of my journey, this is one which is more about SPi itself than about Ruby.
I know I can output MIDI and there might be a command-line option to capture those MIDI messages. Or is there a SPi construct which actually saves this kind of output? That’s one which could have broader appeal.

After that, I’ll probably work with said MIDI file in a DAW. My dream, though, would be to have a way to embed MIDI files in learning material with some way to visualize the content… without staff notation. My thinking has focused on “pianoroll” notation, which would be eminently appropriate in this specific case. (I’d argue that it’d work for an overwhelming proportion of what is labelled “music theory”.)

…then if I can get all of that, I could create some Open Educational Resources based on the musical applications of Set Theory.

You seem to be on a roll!

Have you looked at midilib, by @amiika? It might help you out.
I know @robin.newman has done similar work. Perhaps have a look back through posts.

I’ve spent the last two-ish days going through Jupyter Notebooks. Sounds ideal for your learning resource.

Couldn’t find the link earlier, but this is an even better example using React - Let’s Learn About Waveforms


This is cool stuff. I had a little play and added sound output both via synth and via MIDI to my Vital synth. Sounded nice. My code could be refined; it was a quick lash-up.
NB: amend paths to PCSet.new and midi port: to suit your setup.
If anyone else wants to try: run pcset.rb alone FIRST before adding the other code, or you may get an error.

use_synth :tri
use_midi_defaults port: "iac_driver_sonicpi", channel: 1
use_debug false

define :mplay do |nl|
  if nl.is_a?(Array)
    nl.each do |x|
      midi x, sustain: 0.2
    end
  else
    midi nl, sustain: 0.4
  end
end

run_file "~/Documents/SPfromXML/pcset.rb"

noodl = PCSet.new([0, 1, 4, 5, 6, 7, 10])
noodlsubs = []
base_note = :c4
use_transpose note(:c4)

for i in 0..(noodl.length) do
  noodlsubs = noodlsubs + noodl.pitches.combination(i).to_a
end

noodlsubs.each.with_index do |x, i|
  puts x if i > 0
  x.each do |y|
    play y, release: 0.2
    mplay y
    sleep 0.2
  end
  play x, release: 0.4
  mplay x
  sleep 0.5
end

EDIT: maybe add use_midi_logging false


Yeah, I came to the same conclusion after work, today. Especially because pandas works so well in Jupyter Notebooks.
I’ve used Jupyter Notebooks for diverse things over the years, including a knowledge transfer document. Plus, Google Research has Colab, an online version which can use Github as a sharing platform. For a noncoder like me, it’s close to ideal. Plus, I noticed some attempts by Pressbooks.org to integrate Jupyter.
A few days ago, I did check on some Python/Jupyter resources having to do with music. I did work with sound in Python when I first got into Raspberry Pi. What’s neat, though, is that it’s also possible to do so in Jupyter Notebooks. Haven’t checked on support for MIDI files but I’d be surprised if it were complicated.

Really nice! That’s exactly the logic I’m applying here. I want to explore the possibilities of that scale, especially in terms of hearing things together. So, what would be called chord/scale relationships in jazz improvisation methods.

Something I notice is that common forms are really striking. Like [0,4,7] (the major triad).

One thing I’d probably do, in my case, is try different voicings, including those which are really widely spread out (especially in the lower register). Those have a large impact on perceived dissonance, for harmonic intervals. Probably through a random +12 or -12 here and there. Or maybe something more structured.
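The random octave-displacement idea could be as simple as this (hypothetical helper, plain Ruby rather than Sonic Pi; a seedable rng keeps it repeatable):

```ruby
# nudge each MIDI note down an octave, up an octave, or leave it as is,
# to open up the voicing
def spread_voicing(notes, rng: Random.new)
  notes.map { |n| n + [-12, 0, 12].sample(random: rng) }
end

voiced = spread_voicing([60, 64, 67])
```

A more structured variant would replace the uniform sample with a rule based on register.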

And, yes, setting a base note is essential for playback (I was just thinking about that name as I was walking; because it’s not the root or tonic, it’s really the base).
For MIDI files, I was thinking that having them start at 0 (C-2) might actually make sense as they remain outside the audible range (on most synths) yet you can easily bring them out by shifting octaves.

Speaking of MIDI files and use_midi_logging, would you happen to know a way to produce MIDI files from SPi?

Thanks for this as well!
Yes, it’s the kind of example I find inspiring. There was one, recently, explaining the electronics behind some Roland synths. I found the pedagogical value to be rather high, in part because the author was providing guidance in the learning process. Comeau’s example has some of that as well, including in questions asked here and there.
Also reminds me of Bret Victor’s work on learnable programming. I find it really applies to music. Because, well, there’s a lot of coding in music.

(For the record, I’ve often described the pedagogical value of @samaaron’s tutorials with a special emphasis on those moments when he asks learners to think about what might happen before trying it. As Veritasium’s Derek Muller contends, this is how we avoid the common problem of “science videos” and such reinforcing preconceptions instead of helping people gain new knowledge.)

Also useful to note that music21 works within Jupyter Notebooks…
http://web.mit.edu/music21/doc/developerReference/installJupyter.html


Yes. Had not tried music21 before. Have installed it and playing around in the notebook. As I have MuseScore on this computer, it’s also displaying notation in the notebook. Truly powerful stuff.

I think a voicing option for SPi would be a really good addition. Voicing plus inversion leads to really interesting combinations. You could probably set a midpoint where anything below this is voiced with a larger spread and, as notes go over the midpoint, they become close-voiced. I deploy a similar thing in modular, where the higher the pitch/frequency, the more the VCA is dampened, and the lower the frequency, the more the VCA opens up. The same principle can be applied to filters etc.
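That midpoint rule could be sketched like so (hypothetical helper; the midpoint value and the one-octave drop are arbitrary choices):

```ruby
# below the midpoint, push notes an octave down for a wider spread;
# at or above it, leave the voicing close
def voice_by_midpoint(notes, midpoint: 60)
  notes.map { |n| n < midpoint ? n - 12 : n }
end

voice_by_midpoint([55, 58, 62, 65])  # => [43, 46, 62, 65]
```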

Agree about @samaaron vids and documentation. Both convinced me to dip my toe in the water, as it were. Active in the process.

Have you read or heard of Hal Galper’s book Forward Motion? His website has some info and articles, but I think it’s worth getting. I have both hard copy and electronic! He’s one of those last links to the first-gen bop players. The first chapter is a great intro to the practice, where he takes a Bach violin piece and then transforms it into a dissonant jazz improv. Yet it still maintains Bach’s note leading.


Giving it a whirl as well. Powerful indeed.
Of course, there are things I wish I could change, from the practical to the “philosophical”.

On a practical level, I really wish there were some form of embedded viewer for MusicXML. On my Mac, the most recent version of MuseScore throws lengthy errors every time I do .show(). I know there are other options, but they’re not explained really well. An embedded viewer would be really swell. Especially if it were something like this:

Also practical: I have an issue with the implementation decision to use ‘-’ as ‘♭’. I get the argument that ‘b’ would make it confusing in terms of the note ‘B’. Yet there are plenty of ways to make things confusing in music21, and it’s common to use case to distinguish the note from the alteration. In fact, there’s an actual confusion with the note C-1. If I type it as such, it’s MIDI note 23, “C♭, octave 1”. If I display MIDI note 0, it’s also displayed as C-1.
Besides, I wish ‘♯’ and ‘♭’ were used for inline display instead of ‘#’ and ‘-’.
So, sure, I could file an issue and explain things in a way which would be convincing to people who are open to suggestions. Better yet, I could do a PR. Yet I’m sure I’d be told it’d break everything since music21 has been going on for years and getting into these lower octaves is too uncommon a case. Problem with that is, people who use MIDI do conceive of C-1 as a useful message. Indeed, we typically have MIDI note 0 as C-2 and use that for, say, keyswitches.

My deeper wishes are with the overall approach. Which gets us to these practical decisions (including the use of MusicXML). As the library is designed (and implemented) for computer-assisted musicology and since musicology has been dominated by “Western Art Music”, music21 focuses on “standard notation”. Sure, it has ways to do all sorts of crazy things with notation. And that gets really involved, really quickly. (Heard of half-flats but… triple flats? Suuuure.)
They make comments about MIDI not being able to distinguish enharmonic notes from one another. Yes, it’s true that G♭ is a different note from F♯… in a given context. There are other ways to get that through. Especially now that we have wider support for diverse tuning systems (including in Sonic Pi!).

And the documentation could do with a bit of a remake, if it were up to me. Let’s quickly get to some of the most interesting things and backtrack to the knowledge needed to make those work. While I don’t drive, I feel like it’s the instruction manual to a powerful car which spends more time explaining how you can use the trunk than telling you how to start the engine.

I mean, I get it. It’s a powerful library for someone who’s steeped in a certain tradition. It’s not meant to be used for the kind of case I have in mind which involves people who might have had experiences with DAWs or other tools for electronic musicking and haven’t necessarily “learnt to read music”.

One solution would be to fork it. That’s really not practical, for me as a noncoder.
Since it’s a library and it does work with Jupyter, I might as well create some of my own interactive documentation and only use the library as needed.

Oh, and the thing about alterations. Isn’t it a contradiction that I want to have ‘♯’ and ‘♭’ when I want to avoid “standard notation”? Thing is, these alterations are common in musicking practice, including among people who never used “sheet music”. And those symbols are quite distinctive. So I find value in them in the learning process. Even more so than with the Anglocentric letter notation itself (which I wouldn’t avoid; just want to nudge people towards numbered notations).

That’s a lot.

I think for your distinctions, could you not create a wrapper object to manage this? This is one of the reasons why Brinkman’s CBR (Continuous Binomial Representation) avoids the issues that you’re highlighting.

I have never believed that MIDI and notation are parallels. MIDI is really a representation of the piano, not of everything available. To a machine, MIDI note 23 is precisely that. To a human it can be different things in different systems. Best to separate them, in my view.

Definitely agree about ‘♯’ and ‘♭’. That has struck me as odd. Seems like a bit of ‘lazy’ programming from an earlier period to resolve the differences. The use of ‘-’ forces the user to adopt an approach that breaks with its symbolic representation, especially as ‘-’ can also mean minor in other symbolic systems.

I think for your other notation systems, you might need to go to IRCAM and look at what they have. I’ve always wondered why notation systems do not allow you to define an object or shape (as a macro structure) which you then populate with any number of events.

Could you not generate the midi for what you want to explore and then import these into music21? One of the reasons I like SPi is quick access to functions to rotate, reverse, mirror, reflect, scale, invert, transpose etc. Essentially make many permutations and combos, as @samaaron has done the heavy lifting.

Most notation conventions have been derived from the Germans! The US adopted Germanic conventions such as 1/4, 1/2, 1/8 notes. The UK adopted a very bizarre approach that ‘breaks’ the more straightforward connection employed by the German system. Symbolic shorthand ‘♯’ and ‘♭’ is a standard; it’s just that institutions created a ‘false’ division that was not present in practice. This is why classical instrument players lost the art of improvisation.

In this sense I like the Italian school of Partimento. The sketches are roughly analogous to a chord/lead sheet, where the trainee composer had to flesh out the skeleton into a full piece. The advantage of the sketch is that it is an organised schema, much like a tune in the real book etc., just requiring realising. Partimento also endured a hard time from the newly formed conservatoires and fell out of fashion until the 1940s, when there was a resurgence in interest. I have some copies of the sketchbooks and they put a completely different perspective on musical training in respect of the practical arts, and what things could have been like if it were not for the conservatoires (French and English).

In the end, I think you’ll need to decide whether the advantages outweigh the issues and how these issues, at least the known ones, can be managed as you do your work. Despite how much these programs have going for them, it is always surprising how quickly individual cases expose flaws/shortcomings etc. Even in mature DAWs, I still run into stuff that’s head-scratching. Ableton 11 has only just got enharmonic note naming. Previously it made it really difficult, when teaching, to say A# is really Bb etc., and to make excuses for the shortcomings.

BTW: did you look at Pyknon? I can’t get it to install via conda or pip, but it looks useful.

Sure is. And I don’t even feel bad about it. :slight_smile:

Yes, there are solutions. I’m generally using Sonic Pi to “think out loud”.

And I’ll get back to the MIDI-based grid notation (“piano roll” et al.). It’s become a de facto standard. Which is indeed pianocentric. My approach to learning does entail using that as leverage.
Much easier to learn than staff notation, I find.

(I’ll go back to the rest, including Galper. Slightly late for work. :wink: )
