With_fx :record + live_audio looping - bare bones

I’m still a neophyte on the programming side of things, so it’s taken me some time to get with_fx :record working. I’ll post this example for my past self and anyone new looking for a quick start. It seemed like such a useful and simple tool, and maybe those fluent in SP don’t need such a simplistic example to play with. But I find these little blocks more instructive, so here’s as whittled down as I could get it. (Thanks to @robin.newman and @Martin for examples to learn from). Also some questions below the fold…

    bname=("moe") 
    with_fx :record,buffer: buffer(bname,8) do 
      live_audio :foo
    end

Then, at some later point in performance time, call up the .wav and live_loop it:

    live_loop :foo do
      sample "C:/Users/12082/.sonic-pi/store/default/cached_samples/moe.wav"
      sleep 8
    end

So I have a bunch of questions that might have been answered in the tutorials but I can’t find anywhere. I’d note that I’m hoping to incorporate SP into my teaching next year at the middle school level, so I’m trying to get a handle on both the coding itself and the ability to explain certain aspects of it, at least to 13-year-olds:

-Is the above code the most optimized syntax for recording snippets for a loop in a live coding performance? (setting aside sampling precision, syncing, multiple sample workflow, etc. for the moment)

-What is the functional/conceptual relationship between these little b buffers (as they spit out the .wav files) and the eight Big B Buffers under the code editor window?

-What is the logical/language reason for the second ‘buffer’ in the line? Are there non-buffer buffers?
…buffer: buffer(buffername)

-What was the rationale for treating recording as an ‘effect’?

-Is it possible to change the default folder for saved .wav files to something else?

I recognize some answers to these questions might be somewhat technical, but I’d be interested all the same.

Thanks for any thoughts!

Hi there,

sorry, but I’ve still not figured out how to make this stuff easier than it currently is. I’m really not happy with the amount of complexity necessary to record and manipulate audio buffers.

Is the above code the most optimized syntax for recording snippets for a loop in a live coding performance? (setting aside sampling precision, syncing, multiple sample workflow, etc. for the moment)

Looks good, although you don’t need to know the physical path of the audio buffer; you can simply pass the buffer itself to the sample function, e.g. sample buffer("moe") for the buffer created above.
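
For instance, here’s a minimal sketch of the whole round trip with no file paths at all (keeping the buffer name and 8-beat length from your example; the live_loop name :playback is just an arbitrary choice for illustration):

    # record the live audio input into a named 8-beat buffer
    with_fx :record, buffer: buffer("moe", 8) do
      live_audio :foo
    end

    # later in the performance, play the buffer back directly,
    # without referencing the cached .wav on disk
    live_loop :playback do
      sample buffer("moe")
      sleep 8
    end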

What is the functional/conceptual relationship between these little b buffers (as they spit out the .wav files) and the eight Big B Buffers under the code editor window?

No relationship other than that they are both data structures stored in memory. In the upcoming release I am removing the word ‘buffer’ from the code editor :slight_smile:

What is the logical/language reason for the second ‘buffer’ in the line? Are there non-buffer buffers?
…buffer: buffer(buffername)

Apologies, I'm not sure what you mean here.

What was the rationale for treating recording as an ‘effect’?

Synths are things which spit out a stereo feed. FX are things which take a stereo feed in and spit out a stereo feed. Given that recording requires an input feed, it maps more closely to Sonic Pi’s FX abstraction.
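
As a rough illustration of how :record slots into the same place in the signal chain as any other FX (using :reverb purely as a familiar comparison; the synth, note and buffer name here are arbitrary):

    # both blocks hand their audio to an FX: :reverb sends the feed
    # back out wet, while :record also writes it into the named buffer
    with_fx :reverb do
      synth :prophet, note: :e2, release: 4
    end

    with_fx :record, buffer: buffer("moe", 8) do
      synth :prophet, note: :e2, release: 4
    end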

Out of interest, what would you have liked to write in terms of code?

Is it possible to change the default folder for saved .wav files to something else?

Not at this stage, sorry. This stuff is very edge functionality and definitely needs more work and polish all round :slight_smile:

Thanks for taking the time to respond!

So if I understand right, there’s no need to be able to edit the default path for the .wav file, because that work can be done entirely within SP. I was adding an unnecessary step by trying to organize things on the Windows side of it.

Thinking about what I would have liked to write, I certainly don’t have any great alternatives, but I’d note that a lot of my initial confusion in learning this tool came from the same terms being used in different ways: “Record” is both a global function at the top of the screen to capture the whole circus and the name of a smaller operation within a block. “Buffer” is the name of the tabs, the name of the .wav output process, and evidently a bunch of other things. FX can be both the manipulation of a streaming (live_audio) signal and the capture of that signal. These are mainly challenges for entry-level users, I guess, since I understand better now how they are operationally similar.

So, could more distinctive terms be introduced, even if it was just ‘code masking tape’ over the original labels? E.g.:

    with_live :capture, buffer: newsample(coolname, 8) do
      live_audio :foo
    end

Also, right now synth :sound_in is used as a synth, with_fx :record is used as an FX, and live_audio is another thing; so they could be elevated to a distinct class of instructions:

    with_live do  # or with_audio
      # whatever you wanted to do
    end

Barring making up new language, it would also make sense to me to streamline the initiation of the buffer, which is still fuzzy to me (my vague question above). Given that with_fx :record will always demand the creation of the buffer, can’t its creation just happen by default, and simply lift the name from the FX parameters? Like:

    with_fx :record, bname="coolsample", 8 do
      live_audio :foo
    end

But overall, the ability to interact with and manipulate audio is awesome, so it’s worth the work.
Thanks again for the help! Cheers
