Creating a Pitch Tracking SynthDef


#1

Hi All,

I’ve been teaching an introductory computer music course for a couple of weeks now and some of the students are interested in pitch tracking audio input. It doesn’t appear that there is anything similar to the fiddle~ object from Pure Data / Max in Sonic Pi, so I looked at creating a SuperCollider SynthDef to do pitch tracking. I’ve gotten as far as making a basic SynthDef in SuperCollider that maps the detected pitch of audio input to a sine tone, and I’ve compiled the SynthDef and loaded it into Sonic Pi using load_synthdefs, but… there’s no output.

I think I’m hung up on how to get audio input piped into this SynthDef? I don’t have much experience in SuperCollider, so maybe I’m missing something else too, to be honest. Any help with this would be hugely appreciated!

Here’s the SC code:

(
SynthDef(\pitchTracker, {
	var env, in, freq, hasFreq;

	// the audio input
	in = AudioIn.ar(1);

	// the pitch and the hasFreq flag (Pitch.kr returns an array like [freq, hasFreq])
	# freq, hasFreq = Pitch.kr(in, ampThreshold: 0.2, median: 7);

	// when hasFreq is true (a pitch is found) we generate an ADSR envelope that stays open
	// until hasFreq is false again, i.e. the amplitude drops below the ampThreshold of Pitch
	env = EnvGen.ar(Env.adsr(0.51, 0.52, 1, 0.51, 1, -4), gate: hasFreq);

	// we plug the envelope into the volume argument of the sine
	SinOsc.ar(freq, 0, env * 0.5) ! 2
}).writeDefFile("/Users/username/Desktop/synthDefs");
)

Ideally, this pitch tracker would be able to output the pitch data that’s accessible inside the SynthDef to Sonic Pi and not just play audio. It would be useful to have a function like this that you could route an audio stream into and get the fundamental frequency as output data that could be plugged in elsewhere in Sonic Pi.

Thanks for any help or thoughts on this!


#2

I’d love to get something like this working. My first thoughts are that it might make sense to model this as an “effect” e.g.

with_fx :pitch_tracker do
  foo
end

Then inside the synthdef, pass the audio through (unaffected) but also send an OSC message to a specific address each time the pitch changes, using something like SendReply. I don’t know the specifics of how that would work yet, but that’s how I’d try to implement it.

Then to get the result within Sonic Pi you could use the osc method to receive the message. If it works well we could look at wrapping it in a more convenient method for inclusion in the main app if needed.

Let me know what you think!


#3

Doing it as an effect is a much better idea, thank you!

I’m going to spend some time with this soon and will circle back.

Until then ~


#4

I had a go at this and got something working:

Sonic Pi code is shown below. What you can hear is the pitch tracker following a sine wave (produced by play) and triggering a saw with the frequency rounded to the nearest MIDI note, 10 times a second. The rate is determined by the checksPerSecond arg in the synthdef.

I’d be interested if anyone has any feedback on how they’d like to use a pitch tracking synth - what are the priorities? e.g. accuracy? quantized tuning by default? How do you see it working in practice, etc.?

For anyone who wants to play with it, this is the SuperCollider synthdef:

(
SynthDef("sonic-pi-fx_pitch_tracker", {
	arg checksPerSecond = 10,
	    out_bus = 0,
	    in_bus = 0;

	var in, freq, confidence;

	in = In.ar(in_bus, 2);

	// Tartini (from sc3-plugins) returns [freq, confidence]
	# freq, confidence = Tartini.kr(in);
	// pitchclass = ((freq.cpsmidi.round(1.0)) % 12);

	SendReply.kr(Impulse.kr(checksPerSecond),
		'/scsynth/pitch',
		values: [freq, confidence]);

	// pass the audio through unaffected
	Out.ar(out_bus, in);
	// Out.ar(0, [SinOsc.ar(freq, 0.1), in]);
}).writeDefFile("/Users/xriley/Projects/sonic-pi/etc/synthdefs/compiled/");
)

This is the Sonic Pi buffer:

live_loop :pitches do
  pitch_info = sync "/scsynth/pitch"
  n = hz_to_midi(pitch_info[2]).round
  synth :dsaw, note: n, release: 0.1
end

with_fx :pitch_tracker do
  live_loop :tempo_test do
    play range(60,72).choose
    sleep 4
  end
end
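(As an aside, the hz_to_midi conversion used in the live_loop above follows the standard MIDI tuning formula - A4 = 440 Hz = MIDI note 69, with 12 semitones per octave. A plain-Ruby sketch, outside Sonic Pi:)

```ruby
# Standard MIDI tuning formula: A4 = 440 Hz corresponds to MIDI note 69,
# and each octave (doubling of frequency) spans 12 semitones.
def hz_to_midi(hz)
  69 + 12 * Math.log2(hz / 440.0)
end

hz_to_midi(440.0).round   # A4 -> 69
hz_to_midi(261.63).round  # middle C -> 60
```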

This is the update you’d need to make to app/server/ruby/lib/sonicpi/synths/synthinfo.rb in order to pick up the newly compiled synth:

diff --git a/app/server/ruby/lib/sonicpi/synths/synthinfo.rb b/app/server/ruby/lib/sonicpi/synths/synthinfo.rb
index d70c85199..139210bfb 100644
--- a/app/server/ruby/lib/sonicpi/synths/synthinfo.rb
+++ b/app/server/ruby/lib/sonicpi/synths/synthinfo.rb
@@ -4942,6 +4942,50 @@ A decent range of Q factors for naturally sounding boosts/cuts is 0.6 to 1.
       end
     end
 
+    class FXPitchTracker < FXInfo
+      def name
+        "PitchTracker"
+      end
+
+      def introduced
+        Version.new(3,2,0)
+      end
+
+      def synth_name
+        "fx_pitch_tracker"
+      end
+
+      def doc
+        ""
+      end
+
+      def arg_defaults
+        super.merge({})
+      end
+    end
+
     class FXMono < FXInfo
       def name
         "Mono"
@@ -7606,6 +7650,8 @@ Use FX `:band_eq` with a negative db for the opposite effect - to attenuate a gi
         :fx_replace_reverb => FXReverb.new,
         :fx_level => FXLevel.new,
         :fx_mono => FXMono.new,
+        :fx_pitch_tracker => FXPitchTracker.new,
         :fx_replace_level => FXLevel.new,
         :fx_echo => FXEcho.new,
         :fx_replace_echo => FXEcho.new,

#5

Some additional thoughts on this:

a) Polling for events every n seconds is how SuperCollider suggests this is used, but it would be nicer if the synthdef could “filter” out events where the note hasn’t changed. This would effectively give you a sync for each new pitch. (Detecting repeated notes - two Cs in a row, for example - is trickier because that would require onset detection - here be dragons!)

I think we could achieve this using the Lag UGen but I’d need to look into it more.
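As a rough sketch of the filtering idea (in plain Ruby on the receiving side, rather than with Lag inside the synthdef - the class name here is made up), you’d only forward an event when the quantized note differs from the last one sent:

```ruby
# Hypothetical sketch: drop pitch events whose rounded MIDI note
# hasn't changed since the last event that was forwarded.
class PitchChangeFilter
  def initialize
    @last_note = nil
  end

  # Returns the rounded MIDI note if it changed, nil otherwise.
  def filter(midi_note)
    note = midi_note.round
    return nil if note == @last_note
    @last_note = note
    note
  end
end

f = PitchChangeFilter.new
f.filter(60.1)  # -> 60  (first event, forwarded)
f.filter(59.9)  # -> nil (still note 60, dropped)
f.filter(62.2)  # -> 62  (new note, forwarded)
```

Note this deliberately punts on repeated notes: two identical Cs in a row are indistinguishable without onset detection.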

b) This only works if you have one :pitch_tracker effect at a time - otherwise they’ll get into a “pitch fight” where the incoming events pile in on top of each other. That might be a cool effect in itself, but not ideal if you didn’t know it would happen. In reality, will Sonic Pi users need to track more than one pitch? Is it a dealbreaker? etc.

c) To what extent do we want to quantize the return value? The nature of the tracking means the result is slightly off unless you round the frequency to the nearest note. Octave jumps, where it can’t decide between one octave and another, also appear to be common. It might be possible to smooth these out, but likely at the expense of some accuracy.
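One way to smooth out the octave jumps (a plain-Ruby sketch of the idea, not something in the synthdef - the class name is made up) is a small median filter over the last few note estimates, which discards one-off jumps at the cost of a little latency:

```ruby
# Hypothetical sketch: median-filter the last few MIDI note estimates
# so a single spurious octave jump doesn't make it through.
class MedianSmoother
  def initialize(window_size = 5)
    @window_size = window_size
    @history = []
  end

  # Push a new estimate and return the median of the recent window.
  def push(note)
    @history << note
    @history.shift if @history.size > @window_size
    sorted = @history.sort
    sorted[sorted.size / 2]  # upper median for even-sized windows
  end
end

s = MedianSmoother.new(5)
[60, 60, 72, 60, 60].map { |n| s.push(n) }  # -> [60, 60, 60, 60, 60]
```

The lone jump to 72 never appears in the output, but a sustained octave change would still get through after a couple of frames.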


#6

It would be cool to use pitch_tracker as the basis for a Sonic Pi autotune. At the very least it would also require a pitch_shift fx, for example so you could wrap both of them around a live_audio.

Aside from live_audio, it would also be cool to be able to wrap them around a sample of some sort and use note, velocity = sync "/midi/..." to control the pitch of the sampled sound with a keyboard.
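For the autotune idea, the core correction is just the semitone offset between the detected pitch and the target note, which a pitch-shifting fx could then apply. A rough plain-Ruby sketch (the function name is made up; it just uses the standard MIDI tuning formula):

```ruby
# Hypothetical sketch: how far (in semitones) a pitch shifter would need
# to move the detected pitch to land exactly on the target MIDI note.
def correction_semitones(detected_hz, target_note)
  detected_note = 69 + 12 * Math.log2(detected_hz / 440.0)
  target_note - detected_note
end

# A voice detected at 450 Hz, target A4 (note 69): shift down ~0.39 semitones.
correction_semitones(450.0, 69)
# Already in tune: no correction needed.
correction_semitones(440.0, 69)  # -> 0.0
```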


#7

All those things sound fun.

Please consider supporting me on Patreon, so I can be in a position to work on things like this going forward: https://patreon.com/samaaron


#8

All this sounds like it’s almost exactly what I’ve been looking for over the past few months. I’m trying to take a live_audio stream that might be absolutely anything and convert it into data that can then be manipulated in Sonic Pi in a number of ways. I’m especially interested in several aspects of the sound, including volume, pitch, and the frequency of sound at specific levels, which can then be taken and used to choose (or force a choice) in a variety of ways.

I mainly use Sonic Pi to control a series of external synths through MIDI, so an example might be to give an external synth a note to play, and when the live_audio stream reaches a specified volume level, that note changes to another pre-defined value. It wouldn’t necessarily be a pitch tracker, but it would have changes linked to the live_audio stream so that changes happen dynamically without manual manipulation by me.
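That volume-triggered note change could be sketched like this in plain Ruby (the notes and threshold are made up, and in Sonic Pi the amplitude value would have to come from some amplitude-tracking analysis of the live_audio stream, analogous to the pitch tracker above):

```ruby
# Hypothetical sketch: pick which MIDI note to send to an external synth
# based on whether the incoming amplitude has crossed a threshold.
def choose_note(amp, quiet_note: 60, loud_note: 67, threshold: 0.5)
  amp >= threshold ? loud_note : quiet_note
end

choose_note(0.2)  # -> 60 (below threshold, keep the original note)
choose_note(0.8)  # -> 67 (threshold crossed, switch to the pre-defined note)
```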

I’m not an expert at either Sonic Pi or Supercollider but what is being discussed here appears to be getting very close to what I’m after. If anyone can help out or nudge this idea a step closer to completion I would be very grateful and if I can get it all working I’ll invite you all to the first performance of what will hopefully be an interesting experiment in sound.

Thanks.