Sonification of Climate Data

#1

I'm a recent starter with Sonic Pi. I have little programming experience and I am not musical at all, but I love a challenge, and that's got to count for something.

I have daily data over 20 years for a location. It gives me maximum and minimum temperatures, solar radiation, and rainfall: four datapoints per day. I use Excel to convert each set of datapoints to a selected spread of MIDI notes, and I play each of them within each beat, with a bit of delay built in to make them a little more distinguishable. It has done the job, but it doesn't sound very musical, as is to be expected.

Things I am pondering at the moment:

  • I would like to make one of the temperature datasets play chords. I can see how that can be done with individual notes, but I haven’t yet worked out if it can be done with an array like the one already in the code. I know there are quite a few ways that don’t work.
  • It might be more interesting if the four notes were used to seed a couple of bars of something a bit more musical rather than just playing four notes each beat.
  • It would be good to be able to access the data directly through OSC, rather than trying to manipulate it and then inserting it into the code. That should give a great deal more flexibility.

Oh well. That is where I am up to. I thought it was going to take much longer to explain. Any comments or suggestions most gratefully received.

I’m going to go and have another look at OSC now.


use_bpm 30

# temperature max
in_thread do
  use_synth_defaults release: 0.5, amp: 1, amp_slide: 0.5, stereo_width: 1
  use_synth :piano
  # this line controls the lag of the note within each beat. TODO: try to achieve this through an envelope
  play_pattern_timed [58],[0.25]
  play_pattern_timed [58,59,55,57,55,53,52,53,61,57,52,54,58,57],[0.5]
  play_pattern_timed [55,54,54,57,51,52,58,52,54,53,54,60,59,66],[0.5]
  play_pattern_timed [59,59,53,56,65,56,50,51,55,58,65,55,53,51],[0.5]
  play_pattern_timed [55,55,59,56,57,56,50,52,49,52,51,53,55,56],[0.5]
  play_pattern_timed [55,55,55,54,62,61,54,51,52,54,57,58,55,55],[0.5]
  play_pattern_timed [54,58,57,57,62,61,58,58,54,57,63,55,54,61],[0.5]
  play_pattern_timed [61,55,59,57,58,55,57,60,56,57,57,59,59,54],[0.5]
end

# temperature min
in_thread do
  use_synth_defaults release: 0.5, amp: 0.6, amp_slide: 0.5, stereo_width: 1
  use_synth :piano
  play_pattern_timed [37],[0.0]
  play_pattern_timed [37,41,36,34,36,36,35,35,35,37,34,32,34,38],[0.5]
  play_pattern_timed [35,35,35,40,34,34,32,36,33,35,31,32,39,35],[0.5]
  play_pattern_timed [36,36,37,36,37,37,32,31,35,36,40,36,32,30],[0.5]
  play_pattern_timed [32,36,36,38,36,39,36,35,34,33,33,32,34,35],[0.5]
  play_pattern_timed [34,35,36,37,40,41,38,38,37,36,39,40,40,40],[0.5]
  play_pattern_timed [39,38,40,39,39,47,37,38,39,41,43,33,36,36],[0.5]
  play_pattern_timed [43,34,37,36,38,38,36,43,39,37,35,36,40,39],[0.5]
end

# solar radiation
in_thread do
  use_synth_defaults release: 0.7, amp: 0.15, amp_slide: 0.2, stereo_width: 1
  use_synth :zawa
  play_pattern_timed [58],[0.25]
  play_pattern_timed [50,45,53,52,43,50,51,57,53,56,52,61,60,51],[0.5]
  play_pattern_timed [56,53,59,43,44,52,61,43,54,50,57,55,62,49],[0.5]
  play_pattern_timed [53,55,48,63,61,49,43,46,63,63,60,49,55,58],[0.5]
  play_pattern_timed [61,58,60,53,62,52,45,50,47,54,52,59,62,61],[0.5]
  play_pattern_timed [57,53,59,46,55,52,45,45,48,52,56,54,49,53],[0.5]
  play_pattern_timed [53,59,58,57,63,55,59,58,49,46,53,64,55,63],[0.5]
  play_pattern_timed [53,58,63,62,58,48,61,57,44,58,58,61,55,45],[0.5]
end

# rainfall
in_thread do
  use_synth_defaults release: 2, amp: 1, amp_slide: 0.5, stereo_width: 1
  use_synth :subpulse
  play_pattern_timed [nil,nil,nil,nil,nil,nil,nil,nil,nil,nil,nil,nil,nil,nil],[0.5]
  play_pattern_timed [nil,nil,nil,30,35,nil,nil,nil,nil,35,40,38,nil,nil],[0.5]
  play_pattern_timed [30,32,34,32,32,nil,nil,nil,29,30,nil,30,32,30],[0.5]
  play_pattern_timed [nil,nil,nil,nil,nil,nil,nil,32,nil,32,31,30,30,nil],[0.5]
  play_pattern_timed [nil,32,34,36,34,nil,34,nil,nil,nil,nil,nil,32,32],[0.5]
  play_pattern_timed [30,28,nil,nil,28,30,nil,nil,30,35,nil,nil,30,nil],[0.5]
end

#2

I don’t have time for a very detailed answer, but it’s really an interesting challenge. From what I can see so far in your code, there are a few improvements you can make to translate your meteorological data into something more musical.

Looking at the lists of numbers you entered, I see that the values (temperatures, or whatever they represent) always sit in a fairly narrow range (between 36 and 40, between 40 and 60, etc.). One thing that could make it more musical would be to dramatically increase the range by scaling the values: for instance, taking data that runs from 40 to 60 and scaling it to something between MIDI note 20 and MIDI note 127. You can also force every value to match a particular note in a scale if you are afraid of randomness, or if you don’t want to leave a tonal / modal space.
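
A minimal sketch of both ideas in plain Ruby (the helper names `scale_value` and `snap_to_scale` are made up for illustration, as are the ranges and the C-major pitch classes):

```ruby
# Linearly rescale a value from its observed range into a wider MIDI range.
def scale_value(v, in_min, in_max, out_min, out_max)
  out_min + (v - in_min).to_f * (out_max - out_min) / (in_max - in_min)
end

# Snap a (possibly fractional) MIDI number to the nearest note of a scale,
# given as pitch classes 0..11. Default here is C major.
def snap_to_scale(midi, pitch_classes = [0, 2, 4, 5, 7, 9, 11])
  candidates = (midi.round - 6..midi.round + 6).select { |n| pitch_classes.include?(n % 12) }
  candidates.min_by { |n| (n - midi).abs }
end

note = scale_value(50, 40, 60, 20, 127) # 40..60 stretched to 20..127 => 73.5
snap_to_scale(note)                     # => 74 (the nearest C-major note, a D)
```

In Sonic Pi the snapped result can be handed straight to `play`; the same snapping effect can also be had with Sonic Pi's own `scale` and a nearest-note lookup.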

You can also add precision, if you have floating-point numbers, by converting each of your numbers into a frequency rather than a note. It will not be tonal music, but you can reach very interesting sonic results (big, evolving frequency landscapes). Most of your work would then be to create interesting synthesis designs that can manage the diversity of the frequencies encountered.
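
One way to sketch this, assuming an arbitrarily chosen band of 110 Hz to 880 Hz (the helper name `value_to_hz` is hypothetical): map each raw value onto the band, interpolating on a log scale so that equal data steps sound like equal musical intervals.

```ruby
# Map a raw data value straight onto a frequency band instead of a MIDI note.
def value_to_hz(v, v_min, v_max, f_min = 110.0, f_max = 880.0)
  ratio = (v - v_min).to_f / (v_max - v_min)
  # geometric (log-scale) interpolation between f_min and f_max
  f_min * (f_max / f_min)**ratio
end

value_to_hz(40, 40, 60)  # => 110.0 (bottom of the band)
value_to_hz(60, 40, 60)  # => 880.0 (top of the band)
```

In Sonic Pi the result could then be played with something like `play hz_to_midi(f)`, since `play` accepts fractional MIDI notes.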

I would also avoid using play_pattern and play_pattern_timed, because this structure is somewhat difficult to manipulate. I would prefer using rings containing variables, so you can modify the content of your ring on the fly. I imagine something like:

play (ring temp_year1_day1, temp_year2_day1).look, or something like that.

This way, if you script something that sends data using OSC, you could have a variable that updates each time a new value lands in your buffer, without your having to write anything. I think that when working on datasets like yours, lazy solutions are the best solutions.

PS: Try searching the internet for how to convert your Excel spreadsheets into something that a programming language can handle easily. Python may be one way to start, or Ruby if you don’t want to leave Sonic Pi’s underlying language. There must be a ton of different ways to do it.
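
For the Ruby route, a small sketch: once the spreadsheet is saved as CSV (File > Save As in Excel), Ruby's standard CSV library reads it straight into arrays. The column names below are invented for illustration.

```ruby
require "csv"

# Stand-in for the contents of an exported CSV file; with a real file
# you would use CSV.read("weather.csv", headers: true) instead.
csv_text = <<~CSV
  date,temp_max,temp_min,solar,rain
  2000-01-01,58,37,50,0
  2000-01-02,59,41,45,3
CSV

rows = CSV.parse(csv_text, headers: true)
temp_max = rows.map { |r| r["temp_max"].to_i }
temp_max  # => [58, 59]
```

Each column then becomes an ordinary array that can be scaled, quantised, or sent out over OSC.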

#3

This is a really fun idea to play around with. Thanks for sharing.

I’ve been interested in experimenting with data sonification, and while looking into it I came across this video called Sonification and the Problem with Making Music from Data.
There is a certain level of cynicism and sarcasm throughout, but the point being made is a valid one: the choices made when we try to sonify data are basically arbitrary, and any number of musical elements can be used to represent a given data set.

Keeping this in mind, I think there are several options for how you can use the values in your data sets to guide the music you want to make, beyond just representing each value as a MIDI note.

  • I would like to make one of the temperature datasets play chords. I can see how that can be done with individual notes, but I haven’t yet worked out if it can be done with an array such as is already in the code. I know there are quite a few ways that don’t work.

I found a way to make this work, although it might not be what you had in mind. Looking at the data set for the min temperatures, I noticed that there were a total of 14 different values:

minTemps = [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 43, 47]

So the total number of values became my focus, instead of what each value was specifically. Since 14 is divisible by 7, each value’s index in the array could represent a chord degree in a diatonic chord progression stretched over two octaves.

To do this, I used the chord_degree function and set the degree by converting each value in the data array to that value’s index in the minTemps array. For index values over 7, I included a conditional that brings the value back into the 1–7 range chord_degree expects and then adds an octave to differentiate it from the other 1–7 values. I got this idea from the beginning part of this post.

I then just ticked through all the values in each array.


data = [
  [37],
  [37,41,36,34,36,36,35,35,35,37,34,32,34,38],
  [35,35,35,40,34,34,32,36,33,35,31,32,39,35],
  [36,36,37,36,37,37,32,31,35,36,40,36,32,30],
  [32,36,36,38,36,39,36,35,34,33,33,32,34,35],
  [34,35,36,37,40,41,38,38,37,36,39,40,40,40],
  [39,38,40,39,39,47,37,38,39,41,43,33,36,36],
  [43,34,37,36,38,38,36,43,39,37,35,36,40,39],
]

minTemps = [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 43, 47]

  data.length.times do |d| # iterate through each array in the data array
    data[d].length.times do |i| # iterate through each value in each array
      root = minTemps.find_index(data[d][i]) + 1 # map each data value to its index in minTemps; the + 1 avoids 0, which will not work in the chord_degree function
      oct = 0 # Set variable to increase oct for index values above 7
      if root > 7 # conditional for index values above 7
        root = root - 7
        oct = 12
      end
      play (chord_degree root, :a2 + oct, :major), amp: 2 #chord_degree function with root variable returning values between 1 and 7 
      sleep 1
    end
  end

It works pretty well and since it is all diatonic, it sounds more “musical” than just using the values as is.

I then tried to add some melodic content with the max temp data. Keeping with the mindset that these data values could be used for any type of musical element, I decided to keep things diatonic by generating a 4-note melody from a corresponding mode (Dorian, in this case) for each data value. I used the data values with use_random_seed to determine which 4-note pattern would be generated. This adds some consistency, since we will hear the same pattern each time we get the same value.

data2 =[
  [58],
  [58,59,55,57,55,53,52,53,61,57,52,54,58,57],
  [55,54,54,57,51,52,58,52,54,53,54,60,59,66],
  [59,59,53,56,65,56,50,51,55,58,65,55,53,51],
  [55,55,59,56,57,56,50,52,49,52,51,53,55,56],
  [55,55,55,54,62,61,54,51,52,54,57,58,55,55],
  [54,58,57,57,62,61,58,58,54,57,63,55,54,61],
  [61,55,59,57,58,55,57,60,56,57,57,59,59,54]
]

with_fx :reverb, room: 0.75 do
  data2.length.times do |d|
    data2[d].length.times do |i|
      use_synth :tri
      use_random_seed data2[d][i]
      4.times do
        play scale(:b4, :dorian).choose, amp: 0.5
        sleep 0.25
      end
    end
  end
end

I then put it all together using an in_thread to run both data sets concurrently.

data = [
  [37],
  [37,41,36,34,36,36,35,35,35,37,34,32,34,38],
  [35,35,35,40,34,34,32,36,33,35,31,32,39,35],
  [36,36,37,36,37,37,32,31,35,36,40,36,32,30],
  [32,36,36,38,36,39,36,35,34,33,33,32,34,35],
  [34,35,36,37,40,41,38,38,37,36,39,40,40,40],
  [39,38,40,39,39,47,37,38,39,41,43,33,36,36],
  [43,34,37,36,38,38,36,43,39,37,35,36,40,39],
]

minTemps = [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 43, 47]

in_thread do
  data.length.times do |d|
    data[d].length.times do |i|
      root = minTemps.find_index(data[d][i]) + 1
      oct = 0
      if root > 7
        root = root - 7
        oct = 12
      end
      play (chord_degree root, :a2 + oct, :major), amp: 2
      sleep 1
    end
  end
end

data2 =[
  [58],
  [58,59,55,57,55,53,52,53,61,57,52,54,58,57],
  [55,54,54,57,51,52,58,52,54,53,54,60,59,66],
  [59,59,53,56,65,56,50,51,55,58,65,55,53,51],
  [55,55,59,56,57,56,50,52,49,52,51,53,55,56],
  [55,55,55,54,62,61,54,51,52,54,57,58,55,55],
  [54,58,57,57,62,61,58,58,54,57,63,55,54,61],
  [61,55,59,57,58,55,57,60,56,57,57,59,59,54]
]

with_fx :reverb, room: 0.75 do
  data2.length.times do |d|
    data2[d].length.times do |i|
      use_synth :tri
      use_random_seed data2[d][i]
      4.times do
        play scale(:b4, :dorian).choose, amp: 0.5
        sleep 0.25
      end
    end
  end
end

There are plenty of ways this could be altered. To bring it back to the point from that video, the data can be anything musically. I think the great thing about Sonic Pi, and coding music in general, is that pretty much every function takes a number, so a data set has endless possibilities for how it can affect the sonic qualities of the music. We should try to be imaginative and not feel that there is a “correct” way to represent the data through music. Since the data is not the real sound of the thing it represents, it ultimately is going to be influenced by our own human bias. As long as we are aware of that, we should just have fun with it and see what we can make. At least that’s how I’m approaching it.

#4

Thanks so much for that. That sounds gorgeous, and fits well with moving away from just a bare representation of the raw data. Almost brought tears to my eyes. I don’t understand the detail of a lot of what you wrote, having no musical background, but I think I understand the principles. I had already manipulated the raw data to confine the notes within a certain range, and to provide a bit of a distinction between the datapoints representing different things, but I can see the advantage of doing that programmatically within the code rather than “preprocessing” it in a spreadsheet.

Thanks again, so much to learn.

Phil

#5

Thanks for your comments and advice. I can see the issue with these datapoints not having a dramatically wide range; it is probably more constrained than it needs to be, as I was trying to create a distinction between each dataset. And because this data represents only a short period of the year, the full dataset has a wider range. Each set spans around thirty notes, which approximates a 30 degree C range in the data, and it could easily be made broader. I think if I can do all that manipulation in code, then it would be better to leave the original data as is.

I really like your idea of turning each datapoint into a frequency and experimenting with that. I think I read that even the MIDI note numbers will take decimals and generate “in-between” notes. Is that the same as, or similar to, using frequencies?
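
(For what it’s worth, the two are closely related: a fractional MIDI note is just a point on the same equal-temperament curve that maps note numbers to frequencies, where note 69 = A4 = 440 Hz and each semitone multiplies the frequency by 2**(1/12). A quick Ruby sketch of that standard conversion:)

```ruby
# Standard equal-temperament MIDI-to-frequency conversion.
# Fractional note numbers land between semitones, which is what
# happens when Sonic Pi is given something like `play 60.5`.
def midi_to_hz(n)
  440.0 * 2**((n - 69) / 12.0)
end

midi_to_hz(69)    # => 440.0 (A4)
midi_to_hz(69.5)  # a quarter tone above A4, roughly 452.9 Hz
```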

I used play_pattern_timed because it was the first thing in the tutorial that offered a way to play a set of notes, and I jumped in, so thanks for pointing me in the right direction. I’d seen ring in the Lang help but hadn’t twigged how I could use it.

I’m going to have a look at OSC now.

Thanks again
Phil