Voice leading algorithm

More sharing from my Sonic Pi archive of experiments and ideas…

This one concerns “voice leading” which is a musical technique for making chord sequences sound nice(er).

To give a brief explanation, it helps to think about some notes on a piano. If you’re changing chords between C major and F major, there are a couple of ways to do it (imagine each note is a “voice”, like a singer in a choir):

C -> F
E -> A
G -> C

This is ok and gets used in a lot of rock and pop (think the first two chords of “Louie Louie”).

But if you want your music to sound like butter melting on a stack of freshly cooked pancakes, then this is better:

C -> C
E -> F
G -> A

Same set of notes on the left and the right, but the right-hand chord is now reordered slightly so that each voice moves as little as possible. This is voice leading in a nutshell, and it’s a great thing to bear in mind when working with chords.

Now, is there an algorithm that can help us with this? Yes! It’s called the taxicab metric and I stole it from the book “A Geometry of Music” by Dmitri Tymoczko. It’s in this gist here with an example: https://gist.github.com/xavriley/1ea12a3d319dfcf86152
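To make the idea concrete, here’s a minimal sketch (plain Ruby, not the gist’s actual code) of the taxicab distance between two voicings, assuming the voices are simply paired up in order:

```ruby
# Taxicab (L1) distance between two voicings, pairing voices in order.
# This is just the cost function - the full algorithm also searches
# over the possible pairings of voices.
def taxicab(from, to)
  from.zip(to).sum { |a, b| (a - b).abs }
end

taxicab([60, 64, 67], [65, 69, 72])  # C E G -> F A C: every voice moves 5, cost 15
taxicab([60, 64, 67], [60, 65, 69])  # C E G -> C F A: moves of 0, 1, 2, cost 3
```

The second transition is the “butter melting on pancakes” one: same pitch classes on the right, much lower total movement.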

It looks pretty complicated but it’s not too bad really. That code is pretty scrappy though and I’m sure it could be more elegant. The reason I haven’t pushed for this to be in Sonic Pi proper yet is because I don’t know what it should look like.

Essentially it needs two arguments - a starting chord and a sequence (array) of chords to move to. It should probably yield an enumerator of nicely voice-led chords as an output. If anyone has more ideas about how they’d like this to work then let’s discuss…
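For what it’s worth, one hypothetical shape for that interface (all names invented here, and the voice-leading step stubbed out) might be:

```ruby
# Sketch of a possible interface: takes a starting chord and an array of
# target chords, lazily yields one voicing per chord. The voice-leading
# step itself is a placeholder - a real version would minimise movement.
def voice_led(start, targets)
  Enumerator.new do |y|
    current = start
    y << current
    targets.each do |t|
      current = t.sort  # placeholder, not actual voice leading
      y << current
    end
  end
end

voice_led([60, 64, 67], [[65, 69, 72], [62, 67, 71]]).to_a
# => [[60, 64, 67], [65, 69, 72], [62, 67, 71]]
```

An enumerator keeps it lazy, so it could feed a live loop one chord per tick.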


Recently @JoeMac mentioned Michael New’s excellent series of music tutorial videos, and in particular one where he looks at getting smooth transitions in progressions, which is similar to Xav’s ideas here. It’s useful to look at the video and the example Joe gives in Using Chord Inversion to Smooth a Chord Progression.


Thanks Robin - it looks like that covers the same idea but the difference is that each inversion in @JoeMac’s code is being chosen manually. In theory, the algorithm here should be able to work these out automatically but trying those same chords I realise I have a bug in my implementation! I’ll see if I can fix it up and re-post.

For a really good example of the C -> F voice leading that I was talking about, Michael New covers the exact same thing here: https://youtu.be/Nr2XBoanNJY?t=6m23s

Here’s a possible solution

#ChordProgression1.rb
# 26 Oct 2017
##############
# helper: convert a MIDI note number to its note name
def midi2note(n)
  nn = note(n)
  return nil if nn.nil?
  # pull the note name out of note_info's string representation
  note_info(nn).to_s.split(":")[3].chop
end

# print the name of each note in a chord
def listnotes(n)
  n.each { |x| puts midi2note(x) }
end

#Define tempo and note lengths and release fraction
#####
tempo=1.0  ### try changing tempo
#define note timings
whole=1.0
half=whole/2.0
dothalf=half*1.5
quart=half/2.0
dotquart=quart*1.5
eighth=quart/2.0
doteighth=eighth*1.5
sixteenth=eighth/2
#########
# function to normalize the notes of chord n into octave oct
def norm(n, oct)
  # reduce each note to its pitch class, then move it into the target octave
  n.map { |x| (x % 12) + (oct + 1) * 12 }
end

### try different keys
key= note(:e4)
puts midi2note(key)
puts " "
### try major
mode=:minor
oct=4 # define octave where adjusted chord notes will play
# define a chord progression using chord degree
# and normalize the chords to fit in the target octave
a=chord_degree :i, key, mode,3
listnotes(a)
a=norm(a,oct)
listnotes(a)
puts " "
b=chord_degree :vi, key, mode,3
#puts b
listnotes(b)
b=norm(b,oct)
#puts b
listnotes(b)
puts " "
c=chord_degree :ii, key, mode,3
listnotes(c)
c=norm(c,oct)
listnotes(c)
puts " "
d=chord_degree :v, key, mode,3
listnotes(d)
d=norm(d,oct)
listnotes(d)
puts " "
##########
use_synth :fm
with_fx :level, amp: 0.3 do
  
  5.times do
    play a
    sleep quart*tempo
    play b
    sleep quart*tempo
    play c
    sleep quart*tempo
    play d
    sleep quart*tempo
  end
end

Within a single octave all the chords of any key can be played, although not always in root position, where the lowest note gives the chord its name.

Starting from an initial chord, and regardless of whether the lowest note is the first, third or fifth of the chord, a single octave span can be generated in which we can use all the chords.

There are only three possible dispositions of the initial chord, and I’ll explain the process for each.

Initial chord 1 3 5 covers 8 notes: you just have to add the two notes below the first note of the chord, and the two above the third note of the chord.

E.g. D F A -> C Db D Eb E F Gb G Ab A Bb B

Initial chord 3 5 1 covers 10 notes: you just have to add the one note below the first note of the chord, and the one above the third note of the chord.

E.g. F A D -> E F Gb G Ab A Bb B C Db D Eb

Initial chord 5 1 3 covers 9 notes: you only have to add the two notes below the first note of the chord, and the one above the third note of the chord.

E.g. A D F -> G Ab A Bb B C Db D Eb E F Gb

In this case we could also add the one note below the first note of the chord and the two above the third note of the chord.

E.g. A D F -> Ab A Bb B C Db D Eb E F Gb G

A simple algorithm that does this would work.
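A minimal Ruby sketch of that first case (the note-name list and helper are my own, hypothetical):

```ruby
# Build the 12-note span for an initial chord: start `below` semitones
# under the chord's first note and walk up a full chromatic octave.
NOTES = %w[C Db D Eb E F Gb G Ab A Bb B]

def octave_span(first_note, below)
  start = NOTES.index(first_note) - below
  (0...12).map { |i| NOTES[(start + i) % 12] }
end

octave_span("D", 2)  # 1 3 5 disposition (D F A): start two notes below D
# => ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
```

The other dispositions just change the offset, e.g. `octave_span("F", 1)` for the 3 5 1 case.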

Hi

I revive this post because it is touching an area that I’m very interested in.

I used to create harmonic progressions with OpenMusic, but I used non-tonal progressions.
That said, nowadays I have gone back to functional harmony.
Voice leading is a vast subject and works by sets of rules.

One of them is the avoidance of parallel fifths and octaves, and the resolution of the tritone.

My question is regarding your knowledge of functional harmony and its syntax. I think that if you get into that you’ll find more solutions.
That said, when I compose I use predefined chords that I previously wrote on a music sheet. I have limited knowledge of Sonic Pi.
Very curious to see where this idea has gone since 2017.


Hi @Lecavalier - thanks for bringing this up again

You’re right - there are a vast number of considerations to harmonic writing, many of which I haven’t addressed. The algorithm I linked above only solves for moving to the closest chord in the minimum number of steps. It doesn’t take account of other “rules” like parallel 5ths/octaves etc.

There are a few computational approaches to those rules but the ones I’ve seen tend to focus on harmonizing a given melody (cantus firmus) rather than the moves between two chords. Focusing on each chord in turn makes it easier to apply the rules computationally but harmonizing a whole melody might make more sense musically.

I think the thing that Sonic Pi is missing to tackle this is a good syntax to express chord progressions. If anyone has any ideas of what their ideal representation would be I’d love to hear them.

At the moment I’m leaning towards something like a parser that works on strings:

chord_progression("Cmaj7 Am7 | Dm9 G13b9")

which converts those to an array of arrays (or ring of rings) behind the scenes. Then in the internals of that method it’s a good place to start applying rules and other algorithmic tricks. What do you think?
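As a rough illustration of that string syntax, the tokenising part alone might look like this (the chord-name lookup that would turn each name into notes is left out):

```ruby
# Split bars on "|" and chord names on whitespace, giving an array of
# arrays - one inner array per bar. Chord-name parsing is not included.
def chord_progression(str)
  str.split("|").map { |bar| bar.split }
end

chord_progression("Cmaj7 Am7 | Dm9 G13b9")
# => [["Cmaj7", "Am7"], ["Dm9", "G13b9"]]
```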


Thanks so much for this “old code”, @xavierriley! Using inspiration from it, I was able to realize a long-term dream of playing chords with a bit of “automated voiceleading”. I now have a Plugdata patch and PdLua objects which implement a version of what you did in SPi. (My coding skills are really fragmentary. I’ve been using LLMs to get me over multiple humps.)

As much as I love Sonic Pi, I spend much of my musicking life with plugins, so I needed something which worked there. Recently purchased RapidComposer, an expensive app/plugin which is remarkably difficult to use. Still, it does most of the things I want to do and a whole lot more. Including sophisticated voiceleading (and voicing).

Was struck by the “Chord Rules” idea. Gave me the nudge I needed to “extract rules” from a corpus of Jazz Standards. As a result, I have scripts (in Python/music21, then in NodeJS with Tonal) which generate chord progressions based on the probabilities for chord transitions.
That was already a “gamechanger”, for me. Partly because the results are exactly what I needed. The progressions generated aren’t perfect, of course. And that’s part of the point. I can now tweak the dataset based on my own preferences.

To do so, I much prefer auditioning these transitions in voiceled mode. I don’t know how much I can enjoy certain transitions between chords when they’re in root position. A large part of what I appreciate comes from voiceleading potential, I’m finding out. And I’m a big fan of open voicings.
So… Based on the fact that I was able to generate chord progressions, I tried to apply some voiceleading to those chords using similar techniques. It didn’t work properly in NodeJS… and it worked right away in Lua, thanks to Claude AI.
That was something of a revelation, last Saturday. After years of trying, I had something I could use (and more or less understood) which took a chordname (including in functional notation through altered Roman numerals, e.g. bIIImaj7) and a reference voicing to output a voiceled voicing for that new named chord.
After a bit of work, I was able to implement that in PdLua which allows me to run the whole thing as a plugin patch in any DAW (on the desktop; PdLua support on iPadOS is running into issues with paths and such).

So, really, Xavier, your code was what allowed me to overcome an obstacle I had for years (meaning, long before your original post in this thread).
Had heard of Tymoczko and had tried to understand some of his work. And I realize (now) that his arca Python code contains everything I’d need.
Yet I don’t have the skills needed to get much of this.

When I asked LLMs to transcode from Sonic Pi to JS, I ran into issues I couldn’t solve on my own. Something was off and I’m not sure what it was.
I could notice that some of the code was about converting things to pitch classes and then doing some processing between pitch class sets. That made sense to me. The “taxicab norm” also made some sense, though I found the explanation… “cumbersome”.

I eventually used the following (verbose) prompt, which led me to a working solution:

I have a Pd-Lua project with several milestones and possible extensions. Pd-Lua is a special version of Lua which integrates with PureData, especially in Plugdata (a new flavour of PureData which can work as a plugin on different platforms). Once I have the Lua code, I should be able to integrate it in a PureData object.
The first milestone is to create a transition between chords by making a transition matrix from a list of four MIDI note numbers (‘voicings[0]’) and an ordered pitch class set (‘chords[1]’).
To break that down in steps to do in Lua:

  • I first need to convert each MIDI note number into its pitch class (which will become ‘chords[0]’). Should be easy. The pitch class of a MIDI note number is its modulo12.
  • Then, I need to calculate the relative distance between two pitch classes, such as the first element of chords[0] and the first element of chords[1]. If it’s going from 0 to 11, for instance, the value would be -1. Going from 7 to 0 is 5.
  • After that, I need to create a matrix with all of these relative distances (first element of the first ordered pitch class set with each element of the second, then the second element of the first pitch class set with each element of the second).
  • I then sum the absolute values of each possible transition from one ordered pitch class set to the other.
  • I can then pick the transition with the lowest sum of distances in absolute values (so, the “smoothest” path between chords[0] and chord[1]).
  • If two transitions have the same sum, I can pick at random, for now.
  • Once all of this is done, I can add the relative distances to voicings[0] to create voicings[1]. So, if the distance between the first element of chords[0] and chords[1] is -1 and the first element of voicings[0] has a MIDI note of 60, the first element of voicings[1] should have 59.
  • Once voicings[1] has been fully created (as an ordered list of four MIDI note numbers), I should send the full ordered list to the first outlet and each MIDI note number to a separate outlet.
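The steps above can be sketched in Ruby (Sonic Pi’s language, rather than Lua - this is my own reconstruction, not the actual Lua code, and it handles equal-sized chords only):

```ruby
# Signed pitch-class distance, as defined above: 0 -> 11 is -1, 7 -> 0 is 5.
def rel_dist(a, b)
  ((b - a + 6) % 12) - 6
end

def voice_lead(voicing, target_pcs)
  pcs = voicing.map { |n| n % 12 }  # step 1: MIDI notes -> pitch classes
  # Try every assignment of old voices to new pitch classes and keep the
  # one with the smallest sum of absolute distances (the smoothest path).
  # Tied sums just take the first found, rather than picking at random.
  best = target_pcs.permutation.min_by do |perm|
    pcs.zip(perm).sum { |a, b| rel_dist(a, b).abs }
  end
  # Add each relative distance to the old MIDI note to get the new voicing.
  voicing.zip(best).map { |n, pc| n + rel_dist(n % 12, pc) }
end

voice_lead([60, 64, 67], [5, 9, 0])  # C major voicing -> F major pitch classes
# => [60, 65, 69]  (C F A)
```

Doubling chord tones for differently-sized chords would need an extra step on top of this.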

There was some back-and-forth involved, obviously. Still, I quickly ended up with a working script in (commandline) Lua. In fact, it accounted for chords of different sizes by doubling some of the chord tones, which is something I wanted to implement later.

Prompting me for further steps, it got me to ask about voiceleading a whole progression in altered Roman numerals… which worked right away with some common chord qualities. I then converted the chord dictionary I had from TonalJS (in intervals, like 1P 3M 5P 7M) into ordered pitch class sets (0 4 7 11). And integrated the whole thing into two PdLua objects (one to convert chordnames into ordered pitch class sets, the other to do the voiceleading using an incoming voicing in MIDI note numbers as a reference).

Along the way, I’ve lost a couple of things (that I can retrieve, fairly easily). For instance, the current version isn’t as effective in dealing with differently-sized chords as one of the earliest ones. Shouldn’t be hard to integrate the old code into the new version.

Still… I have something that I can actually use. And many ideas for improvements.

One of the main things will be to convert my “prog gen” code to work in Plugdata. Shouldn’t be exceedingly hard once I figure out a format for transition counts that (Pd)Lua (or Plugdata itself) can process. Since LLMs typically don’t have the kind of data needed to work in a patching language, it might be easier to do in Lua, for now.
Once I have that, I’ll be able to have a continuously playing chord generator within a plugin patch.
The voiceleading algo is key to how satisfying this will be (and already is).

And that’s thanks to that Sonic Pi gist.

Cool!