Rhythmic consonance?

Haha, true, I’d probably be making use of the just tuning provided if I were to go all the way down that path.

For now however, I only plan on using polyrhythms for rhythm, going nowhere near a 20Hz tempo, and keeping my space-domain (i.e. frequencies) distinct from my time-domain (i.e. rhythms).


Recently I have been investigating (and experimenting with) the concept of “harmonic polyrhythms”, also called “Just Intonation polyrhythms”; that’s basically what is described in the video that @Martin posted.
You can watch a short introduction in this video by the same author: Harmonic PolyRhythms.

As far as I know, there is no deep, formalized theoretical study, but you can find some introductory information at XenRhythmic.



Me again. Interestingly, I first encountered these ideas, around the transition from rhythm to pitch, time to space, in Stockhausen’s Four Criteria of Electronic Music. There he takes a somewhat different route, but the central idea of a continuous universe, where timed events transform into rhythm and rhythm transforms into pitch, is already a central part of his thinking.

(So as not to arouse the suspicion of being ignorant of the historical dimension: of course the Stockhausen text is already around 50 years old, and Neely also refers to several historical sources, so the exact chronology of the appearance of these ideas remains in the dark for me.)


It’s a great talk, apart from the fact that he’s using Ableton :wink:

I was a physicist originally, so I can say that the idea of using oscillations to describe all manner of things is everywhere. A standard toolkit, but with all kinds of fascinating connections. Let me throw these out to expand on the harmony of the spheres thing…

What I always like is when people talk about the beauty of the maths of musical systems (that’s maths not math btw)… oooh, ah-ha, it all fits so beautifully… and then skip over the irritating business of the Pythagorean comma. Almost, but not quite, as he says. But let’s not bother about that. Why let a little detail spoil such a lovely idea? :smile:
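For anyone who hasn’t run into it: the comma falls out of a couple of lines of plain Ruby (not Sonic Pi code, just the standard Rational class). Twelve just fifths should equal seven octaves, but don’t, quite:

```ruby
# Stack twelve just fifths (3/2) and compare against seven octaves (2/1).
# The leftover ratio is the Pythagorean comma.
comma = Rational(3, 2)**12 / Rational(2)**7

p comma          # => (531441/524288)
puts comma.to_f  # ~1.0136, i.e. roughly a quarter of a semitone sharp
```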

I think you should build that detail into your system. How about a 2:3 rhythm where one of them is slightly off? There are all kinds of temperaments not just ‘just’ and ‘equal’ - a whole world to explore.
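To put a rough number on “slightly off”: here’s a quick plain-Ruby sketch (my own toy figures, assuming a 1-beat cycle and a 1% detune) of how long such a detuned pair takes to come back into phase:

```ruby
# Nominal 2:3 polyrhythm over a 1-beat cycle, with the triplet voice
# running 1% slow: it drifts against the duplet voice and only
# realigns after many cycles (Reich-style phasing).
triplet_period = (1.0 / 3) * 1.01
drift_per_cycle = 3 * triplet_period - 1.0       # ~0.01 beats per cycle
cycles_to_realign = (1.0 / drift_per_cycle).round

puts cycles_to_realign  # => 100
```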


As soon as I read slightly off I immediately thought of the works of Steve Reich.
Piano Phase Visualization. :smiley:


I did a project some time ago based on this polyrhythm generator.


I’m going to explore each of these resources you’ve all provided and have something thoughtful to say! (I am a fan of the Stockhausen I’ve heard, definitely a big fan of Steve Reich, and yes, I see that math and reality deviate significantly enough. (And yay physics! I was in undergrad mechatronics engineering for 2 years before I dropped out, and I’ve been interested in physics from since I was a child.))

An update:
To work within my program’s current constraints (it needs to coordinate the system every measure to maintain it as it evolves, and ticks are the finest unit resolvable in my performance implementation), I’ve struck a compromise with Euclidean rhythms. I’ll paste some notes here. I think it’s helpful to have the space-domain to compare to, and I’ll also define “system” for my program.

And for an overarching vision for my program, here are my opening thoughts (bear with the one new word in my entire report):

I really like where this discussion is at, because one can easily imagine a freer general system, of which this restricted system is a special case. Those beat counts in my time-domain for instance? They could be evenly spaced. (Which would yield perfect ratios however.)

Fundamentally, there’s a divide between a discrete digital approach (which I’ve chosen) and a continuous analogue approach. Keeping with the digital approach, if I expanded my space-domain to include every hertz from 20 to 20,000 and my time-domain to be the least common multiple of every number up to, say, 1024, I could approximate analogue phenomena like actual rhythmic consonance. (I would prefer to use integers because they’re exact.) At that point, the two domains would merge. Honestly, it would be another project :sweat_smile:
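Out of curiosity, here’s a quick back-of-the-envelope in plain Ruby for how fast that time-domain grid would grow (the lcm of every number up to n):

```ruby
# Least common multiple of 1..n: the tick count a fully general
# integer time-domain grid would need to resolve every division up to n.
def grid_size(n)
  (1..n).reduce(1) { |acc, k| acc.lcm(k) }
end

puts grid_size(10)  # => 2520
puts grid_size(20)  # => 232792560, already nine digits
```

Going all the way up to 1024 gives a number hundreds of digits long, so yes, definitely another project.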

Thanks for starting this thread. In terms of the research that’s out there I think it’s fair to say that a lot of these ideas are good starting points for exploration but there’s not much consensus on a theory of rhythm that’s totally rooted in how we perceive things. That’s a fancy way of saying - take everything with a pinch of salt! :slight_smile:

Some things to be aware of - Euclidean rhythms (from the work by Toussaint and others) are already implemented in Sonic Pi as the spread function. Lots of mileage in there for making cool rhythms.
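For anyone curious what that family of patterns looks like outside Sonic Pi: here’s one common construction in plain Ruby (a sketch only; Sonic Pi’s spread uses Bjorklund’s algorithm, and this modular version produces a rotation of the same necklace):

```ruby
# Euclidean rhythm: distribute k onsets as evenly as possible
# across n steps, returned as a list of booleans.
def euclid(k, n)
  (0...n).map { |i| (i * k) % n < k }
end

p euclid(3, 8)
# => [true, false, false, true, false, false, true, false]
# i.e. the classic tresillo: x..x..x.
```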

In terms of your original question about rhythmic “consonance”, I think that’s a nice way of framing it. My personal research is leaning towards the idea that it’s not necessarily about integer multiples (e.g. 3:2, 5:4 etc. like Adam Neely refers to) but more about the general idea of how long a pattern takes to repeat itself (called the fundamental period in signal processing). The latter definition is a bit looser and accommodates a wider variety of rhythms that sound appealing.

Another thing to think about with rhythm is to classify it into “high” and “low” states (e.g. bass drum, snare drum), which is described in this paper: ISMIR 2020: Bistate Reduction and Comparison of Drum Patterns. ISMIR is the conference for these kinds of ideas and questions, so I’m sure you’ll find stuff there to keep you busy for years.

Finally, there’s a probabilistic interpretation of rhythm by David Temperley in his book Music and Probability. Again, this is a nice model, but it’s not the final word on rhythm by any means.

Hope those are useful!


I had been aware on some level of the ‘fundamental period’ for a while now - nice to know that there’s a name for it :sweat_smile:


Thanks for joining the discussion! Ah yes, I’m aware of the spread function, which is how I plan to implement Euclidean rhythms.

My initial attempt at quantifying rhythmic “consonance” was to multiply the terms of the ratio, with lower products corresponding to simpler and thus more consonant rhythms. Your latter definition is interesting! The same product (for a ratio reduced to lowest terms) gives us the period if I’m not mistaken, which should be longer for a more interesting rhythm? I think there’s a sweet spot (and range) for this product, if my initial thinking here is correct.
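Here’s a tiny plain-Ruby sketch of that product idea (my own toy measure, not an established metric), which also shows why the product of a reduced ratio equals its period:

```ruby
# Consonance proxy: multiply the terms of the ratio after reducing it.
# For a reduced ratio a:b we have gcd(a, b) == 1, so a * b == lcm(a, b):
# the product is exactly the number of beats before the pattern repeats.
def rhythm_product(a, b)
  g = a.gcd(b)
  (a / g) * (b / g)
end

p rhythm_product(3, 2)  # => 6
p rhythm_product(6, 4)  # => 6   (reduces to 3:2)
p rhythm_product(5, 4)  # => 20  (longer period, "more interesting")
```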

Will explore these resources! Thanks! :smiley:

Another aspect of these questions is culture. The best example I can think of is the Gamelan scale. I am sure it is pleasant to play and listen to with the proper background, but rather discordant to others.


Gamelan music scales

Gamelan scales, known as Slendro and Pelog, are definitely fascinating.
They have a lot of regional tonal differences, but Slendro usually has 5 tones while Pelog has 7.
IIRC, they’re used in different settings/rituals/times of the day.


Beware any number larger than 5. The number of digits on one hand is a fundamental limit in our thinking (feeling).


The same product (for a ratio reduced to lowest terms) gives us the period if I’m not mistaken

Yes I think that’s right. I think the mathematical term is “least common multiple” - so for a 5:4 rhythm it will have a fundamental period of 20 before it repeats.
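Quick sanity check in plain Ruby: overlaying a pulse every 4 ticks with a pulse every 5 ticks, the smallest shift under which the composite onset pattern repeats is indeed 20:

```ruby
# Find the fundamental period of the composite of two pulse streams
# (one onset every a ticks, one every b ticks).
def composite_period(a, b)
  span = a.lcm(b)
  onset = ->(i) { (i % a).zero? || (i % b).zero? }
  # Smallest shift p under which the onset pattern maps onto itself.
  (1..span).find { |p| (0...span).all? { |i| onset.(i) == onset.(i + p) } }
end

p composite_period(4, 5)  # => 20
p composite_period(2, 3)  # => 6
```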

I should also add that a signal doesn’t have to be 100% periodic to still have a fundamental period - I think that’s where this becomes interesting for rhythm because you can accommodate some amount of randomness or variation while still referring to something with a definite cycle. The question is, how random can you make it before it changes the character of the rhythm? Also, does it make a difference where the randomness occurs e.g. off beats vs on beats? (intuition says it probably does)

If we jump back to the audio waveform domain for a sec, it’s fairly obvious from looking at the waveform of a guitar playing a single note that there’s a lot of chaos and variation in there, but in terms of what we hear (i.e. the pitch of the note) there’s something periodic and repetitive which our ears can latch onto. A simple way to extract that is to look at where the zero crossings of the waveform occur, since it should be easier to see a pattern there.
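Here’s a toy version of that in plain Ruby (an idealised signal standing in for the guitar, with a strong 3rd harmonic playing the role of the “chaos”):

```ruby
SR = 44_100  # sample rate in Hz

# A 440 Hz fundamental plus a 3rd harmonic: a messy-looking waveform
# that is nonetheless still periodic at 440 Hz.
samples = (0...SR).map do |n|
  t = n.to_f / SR
  Math.sin(2 * Math::PI * 440 * t) + 0.3 * Math.sin(2 * Math::PI * 1320 * t)
end

# Upward zero crossings: indices where the signal goes from
# negative to non-negative.
crossings = (1...SR).select { |n| samples[n - 1] < 0 && samples[n] >= 0 }

# Average spacing between crossings gives the pitch estimate.
freq = (crossings.size - 1) / ((crossings.last - crossings.first).to_f / SR)
puts freq.round  # => 440
```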

My thoughts on all this aren’t fully formed btw, I’m really just throwing stuff out there to stimulate a discussion :slightly_smiling_face:

Regarding Gamelan, culture etc. it is indeed a massive component of what we as individuals are drawn to for harmony and rhythm. That said, I’m of the belief that we are all “wired for music” on a basic level (there’s a decent amount of studies to support this idea) and so there are some fundamental or universal building blocks of music appreciation that we should try to explore. Trying to come up with computational models or algorithms is a good test of this hypothesis - if we can make something that produces widely appealing rhythms then there’s probably something there. Sonic Pi is as good a tool as any to explore these things with!


I’ve always gone by the terminology “irrational rhythm” as a lingua franca to communicate with other musicians regarding tuplets that don’t divide by 2. If “polyrhythm” emphasizes stacking different tuplets vertically as an analog to pitch harmony, then playing tuplets sequentially in different arrangements would be the analog of melody. For the sake of communication, I’ve always gone by the terminology “temporal modulation” here. Two rhythmic “melodies” happening at the same time then have both a harmonic (polyrhythmic) and a contrapuntal (melody vs. melody) relationship; it’s just a matter of how you want to color things, want to bring things out in your music, etc.
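To make the rhythmic “melody” idea concrete, here’s a toy plain-Ruby representation (my own, nothing standard): a list of per-beat tuplet divisions, expanded into exact onset times with Rational so nothing drifts:

```ruby
# A rhythmic "melody" as one tuplet division per beat, e.g. [2, 3] =
# one beat of duplets followed by one beat of triplets.
def tuplet_onsets(divisions)
  onsets = []
  divisions.each_with_index do |d, beat|
    d.times { |i| onsets << Rational(beat) + Rational(i, d) }
  end
  onsets
end

p tuplet_onsets([2, 3])
# => [(0/1), (1/2), (1/1), (4/3), (5/3)]
```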

For example, we call Bach contrapuntal (four simultaneous beefy melodic voices) even though his harmonic syntax always makes good sense, and we call Beethoven homophonic for greatly reducing the emphasis on melody to one lead voice and 2-3 “lighter,” accompanying ones for the sake of gestural elements that would obliterate too many melodies. Regardless, you have harmony and melody going on all the time (even with a solo instrument’s overtones), and it’s just a question of what you want to do with them, what sounds good to you.

There are lots of examples of particularly good music emphasizing temporal modulation, so I’ll just recommend these classics and this wonderful operatic piece. Also, a temporal modulation module recently came out for Eurorack, and it’s pretty cool to interface its CV with MIDI gear. This guy does some nice demos. I was feeding Flux’s rhythms in MIDI format into SPI, which works quite nicely. The Flux module excels in CV gates (rhythm), but is rather lacking in melodic capacity. What’s more, I think temporally modulated music really wants to go with microtunings other than 12 equal divisions of the octave, so I thought it would be nice to implement my own temporal modulation + microtuning tool in SPI that I can use to drive my beautiful-sounding MIDI synths in live performance. I’m just getting that together now, as my rustiness at programming slows down development.


Ah, rhythmic melody! Yes, I’ve been wondering about the sequential permutations of rhythmic units, and it may be the missing piece for a gap I’m mulling over. At the beginning of this discussion, I brought up how 16 ticks (each corresponding to a 16th note) could be grouped and how the groupings could be permuted, e.g. [6,6,2,2], [8,4,4]. It’s a constricted rhythmic melody which doesn’t delve into “irrational rhythms”, but I think it’s the same idea. Most of the resources shared up to this point have been about steady rhythms. I want to explore “melodic” rhythms next.
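For the record, the distinct orderings of a grouping like [6,6,2,2] are easy to enumerate in plain Ruby:

```ruby
# All distinct permutations of the grouping [6, 6, 2, 2] of 16 ticks:
# 4! / (2! * 2!) = 6 of them, since both values repeat.
orderings = [6, 6, 2, 2].permutation.to_a.uniq

p orderings.size                      # => 6
p orderings.all? { |o| o.sum == 16 }  # => true
```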

I think the “counterpoint” of two rhythmic melodies would involve their syncopation. The mathematical procedure I’ve stumbled upon for determining syncopation is to compare the partial sum series of the two rhythms. E.g. [6,6,2,2] has a partial sum series of (6,12,14,16) while [8,4,4] has a series of (8,12,16). Because the smaller series is not a subset of the larger series, the two rhythms can be said to syncopate. (Also, their composite rhythm has partial sums of (6,8,12,14,16), corresponding to a rhythm of [6,2,4,2,2].) I think the independence of two rhythms depends on the degree to which they syncopate.
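That partial-sum procedure is only a few lines of plain Ruby:

```ruby
# Running totals of a grouping, e.g. [6, 6, 2, 2] -> [6, 12, 14, 16].
def partial_sums(rhythm)
  sum = 0
  rhythm.map { |d| sum += d }
end

# Two groupings of the same measure syncopate when the shorter
# partial-sum series is not a subset of the longer one.
def syncopates?(a, b)
  small, big = [partial_sums(a), partial_sums(b)].minmax_by(&:size)
  !(small - big).empty?
end

# Composite rhythm: merge both partial-sum series, then take differences.
def composite(a, b)
  sums = (partial_sums(a) | partial_sums(b)).sort
  sums.each_with_index.map { |s, i| i.zero? ? s : s - sums[i - 1] }
end

p partial_sums([6, 6, 2, 2])            # => [6, 12, 14, 16]
p syncopates?([6, 6, 2, 2], [8, 4, 4])  # => true
p composite([6, 6, 2, 2], [8, 4, 4])    # => [6, 2, 4, 2, 2]
```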

I think the bridge between melodic and harmonic rhythm has to do with periodicity. That’s what I’m mulling over at the moment.

An update by the way (addressing the thread):
After reading Stockhausen and exploring these resources, my horizons have broadened. What I previously called a “note” I’ve generalised as a “phoneme”, and what I called an abstract “motif” is now an abstract “chronomorph”, literally a shape in time. I think the theory suggests something far more ambitious than the code. After all, the implementation is merely a 4/4 tonal music generator :grimacing:


This thread has gone way beyond my understanding, but I’m enjoying the music links. I’ll just say that playing polyrhythms on a drum kit is a lot of fun, especially when you’re chugging away and your own sense flips between one frame and the other. Like one of those optical illusions where you flip between seeing a vase and two faces.

In that Stockhausen Samstag aus Licht piece, at least he thoughtfully puts in an anvil (or clanging bit of metal, whatever it is) to give a clue to the time. Is that the fundamental period you are on about?


That piece sounded good to me. I paid attention to the composer’s manipulation of time at different scales to get form and rhythm, as he discussed in the first of his four criteria.

The real-life performance aspect of polyrhythm bewilders me. I think Neely mentioned a performer gets a feeling for the composite rhythm, because otherwise I have no clue how one gets to multi-task like that. I guess it’s easier for steady rhythms. Anyways, your perspective on things is much appreciated! :smiley:

With regards to the fundamental period, I think it refers to the period of a composite rhythm. Thus, a 3:2 composite rhythm would have a fundamental period of 6 beats. I didn’t really get a feeling for polyrhythm in that Stockhausen piece, except to notice the agitated passages.

It’s true, so actually you feel three things: the two individual rhythms and the composite. Try the first movement of this for size: https://youtu.be/wcMYxDMuM4c I couldn’t play it but did enjoy unpicking some of the phrases. If you thought that Neely clicking his fingers in polyrhythm was tricky…


For rhythms like Stockhausen’s and others, players do learn it by counting it out, then endlessly practice until they get a good enough feel for it so that it’s practically memorized, and they don’t have to depend on the score except for quick reminders. Remember, those orchestra folks essentially dedicate their entire lives to playing one instrument and don’t get time to write music with cool things like SPI . Then they rehearse endlessly in small groups before the whole ensemble combines for full rehearsals. Add in those tempo change markings, and some stuff simply can’t be played without the players having earphone click tracks. That takes some of the aesthetic impact out of it, I guess, but pop bands also depend on in-ear monitors to stay in sync. Here, we’re depending on SPI to help us play stuff that exceeds our skills or number of hands, but the more you find out how stuff works and sounds, the more easily you can get to or discover the things you want to hear and still have time left over to go to work and make money.

Without a score to look at, I don’t think it’s possible to identify just what’s going on temporally or otherwise in Lucifer’s Dance, let alone identify a fundamental period. Like any music, it either speaks to you to one degree or another, or not. You listen a couple of times, and then you’re listening for that part you particularly like (I’ve always loved from around 17:50 to 21:20—the drum kit is Lucifer’s nose), then more of it grows on you, or perhaps never does. Pop music consumption works pretty much the same if you ask me. With SPI, we can try stuff out quickly to see what we like without having to have concert pianists’ skills or players dedicated to performing for us.

Oh yeah, Stockhausen usually has a visual or theatrical element going on, and I think it would be cool to coordinate SPI with something like Resolume.