Recently there have been some discussions on the usage of Markov chains in SPI programs. However, the discussions have been on a technically high level from the beginning, and basically within a small group of experts. Therefore, I would like to make a basic educational contribution explaining what a Markov chain and a Markov matrix are and why they are useful for SPI programmers.

The motivation to use Markov chains is to reduce the randomness of fully random functions such as `rand`, `choose`, `shuffle`, `pick`, ..., which are basically based on equal probabilities for all the choices they make. It is possible to give the randomness a 'color' using the command `use_random_source`, but it is still not possible to really give the random process a purposeful direction.

This is what Markov processes do. The name stems from the Russian mathematician Andrey Markov, who was born in the 19th century. In a nutshell, Markov chains make it possible to introduce probabilities for a sequence of notes or chords, so that there is a progression in a certain direction while some randomness still remains.

I'll explain this concept using two examples. The first is a sequence of notes that is purely random, and most of you have used such randomness in your programs many times.

```
use_synth :tri
my_scale = (scale :C3, :ionian)
live_loop :pure_random do
  sca = my_scale.shuffle
  sca.each do |n|
    play n
    sleep 0.5
  end
end
```

The `shuffle` command is used to generate a random sequence of notes from a given scale. Listen to it and you will hear what pure randomness sounds like. Mathematicians would call that a 'white noise random process'.

Now let's give the random process an upward direction, not strictly, but with some randomness remaining. First we construct a matrix that contains the probabilities we would like to use:

```
markov_matrix = [
  [0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1],
  [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8],
  [0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1],
]
```

The interpretation is as follows. If we are playing the first note of the scale, `:C3`, then this note has index `0` in the scale: `:C3 = my_scale[0]`. The probabilities for the notes to follow are contained in the row of our `markov_matrix` with the same index `0`: `markov_matrix[0]`. If you read the first row, you will find the probability `0.1` as the first entry. This means: choose as the next note the one with index 0, with a probability of `0.1`. In other words: play the same note again with 10% probability. The next entry in the row has the value `0.8`. That means: play the note with index 1 (`my_scale[1] = :D3`) with a high probability of 0.8. So, if we are playing a `:C3`, then a `:D3` will follow with 80% probability. And so on.
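In code, the probability of moving from note index `i` to note index `j` is simply `markov_matrix[i][j]`. A few lookups against the first row illustrate this (plain Ruby; the remaining rows are omitted here for brevity):

```ruby
# First row of the Markov matrix: transition probabilities from :C3.
markov_matrix = [
  [0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
  # ... remaining seven rows as above ...
]

markov_matrix[0][0] # => 0.1, probability of repeating :C3
markov_matrix[0][1] # => 0.8, probability of :D3 following :C3
markov_matrix[0][2] # => 0.1, probability of :E3 following :C3
```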

If you compare the rows of the matrix, you will recognize a pattern. No matter which row number `n` we take, the probability of playing the note `my_scale[n+1]` next is always 0.8. There are no probabilities for stepping back a note, only for playing the same note again (0.1), playing the next note (0.8), and playing the second-next note (0.1). (The last two rows wrap around to the beginning of the scale, so the same pattern continues from the bottom.) This will produce a rising sequence of notes, though not a strict one, since some randomness remains.

The rest of the program uses a small technique with the built-in function `rand` to actually pick the next note with the probabilities specified in the Markov matrix. In case you have questions on how exactly the function `next_note_idx` works, just ask.
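To make that selection step concrete outside of Sonic Pi, here is a minimal plain-Ruby sketch of the same cumulative-probability idea. `pick_index` is my own name, not a Sonic Pi built-in, and the random draw is passed in as a parameter so the outcomes are reproducible:

```ruby
# Pick an index from a probability row using a draw r in [0, 1).
# Walk through the row, accumulating probabilities, and stop at the
# first index whose cumulative sum exceeds r.
def pick_index(row, r)
  cumulative = 0.0
  row.each_with_index do |p, i|
    cumulative += p
    return i if cumulative > r
  end
  row.length - 1 # guard against rounding: fall back to the last index
end

row = [0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0]
pick_index(row, 0.05) # => 0 (falls in the first 10% band: repeat the note)
pick_index(row, 0.5)  # => 1 (falls in the 80% band: step up one note)
pick_index(row, 0.95) # => 2 (falls in the last 10% band: skip one note)
```

Draws below 0.1 land on index 0, draws between 0.1 and 0.9 on index 1, and draws above 0.9 on index 2, which is exactly the 10% / 80% / 10% split described above.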

Here is the complete Markov chain program. Listen to it and compare it to the pure random program. What you will hear is what mathematicians call a Markov process.

```
# Markov chains for beginners
use_synth :tri
my_scale = (scale :C3, :ionian)

markov_matrix = [
  [0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1],
  [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8],
  [0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1],
]

define :next_note_idx do |current_note_idx|
  n = 0
  r = rand
  pp = 0
  markov_matrix[current_note_idx].each do |p|
    pp += p
    break if pp > r
    n += 1
  end
  return n % 8 # avoid indices > 7
end

n = 0 # initial index of note in scale
live_loop :markov do
  n = next_note_idx(n)
  play my_scale[n]
  sleep 0.5
end
```

Play with the probabilities to try chains that are also allowed to make backward steps or larger jumps. Just make sure that all numbers in a given row sum up to `1.0`. That's a mandatory requirement for Markov matrices, because there must always be a next note to play with probability `1.0` (otherwise the process may get caught in a never-ending loop without playing anything).
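A quick sanity check for your own matrices is to sum each row and compare against `1.0` with a small tolerance for floating-point rounding (plain Ruby, but the same check can run inside Sonic Pi):

```ruby
markov_matrix = [
  [0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1, 0.0],
  [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 0.1],
  [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8],
  [0.8, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1],
]

# Each row must sum to 1.0 (within tolerance), so that some
# next note is always chosen.
markov_matrix.each_with_index do |row, i|
  raise "row #{i} sums to #{row.sum}, not 1.0" if (row.sum - 1.0).abs > 1e-9
end
```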

If you put a single `1.0` probability in a row, then you can actually force a certain note to always follow a certain previous note.
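As a minimal sketch of that idea, here is a hypothetical two-note matrix where every row contains a single `1.0`, so each note deterministically forces the next and the chain strictly alternates:

```ruby
# After note 0 always play note 1; after note 1 always play note 0.
forced = [
  [0.0, 1.0],
  [1.0, 0.0],
]

# Same cumulative selection as in next_note_idx above.
def pick(row, r)
  cumulative = 0.0
  row.each_with_index do |p, i|
    cumulative += p
    return i if cumulative > r
  end
end

sequence = [0]
7.times { sequence << pick(forced[sequence.last], rand) }
# sequence == [0, 1, 0, 1, 0, 1, 0, 1], regardless of the random draws
```

Because each row concentrates all probability on one entry, the random draw no longer matters: the matrix has become a fixed note sequence rather than a random process.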