DAWs vs Coding Music

I am writing this post in response to this tweet I received from @rational_is_me.

[Firstly, the DAWs I have used are limited to Pro Tools, Cakewalk Sonar, and Audacity; most of this post refers to how I have used the first two.]

MUSICAL NOTATION

To me, Sonic Pi code is to music/production what sheet music is to music, but better. If you were a classical composer you would have pre-printed sheets of staves and you would compose a piece at the piano, transcribing your musical ideas onto the staff using lines to denote pitch, symbols to denote time and repeat bars to denote loops. With Sonic Pi, instead of symbols and lines representing time and pitch, loops are written as blocks of code, and numbers represent time, pitch and a whole host of other parameters such as EQ, phase, mix and room reverb. In this way, the code is like a more complex form of musical notation, but (ironically) a kind of musical notation that is actually easier to understand, and which you can execute and hear in real time. This allows for a lot of musical 'doodling' at a level I would have to bend over backwards to achieve in a DAW.
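To make that concrete, here is a minimal sketch of the idea (the synth, notes and durations are arbitrary choices of mine, not a prescription): a block repeats like a repeat bar, MIDI numbers stand in for pitch, and sleep values stand in for note lengths.

use_synth :piano          # choose a timbre, like naming an instrument on the stave
3.times do                # repeat the phrase, where sheet music would use a repeat bar
  play 60                 # middle C as a MIDI number
  sleep 0.5               # half a beat of time
  play 64, amp: 0.8       # E, slightly quieter
  sleep 0.5
  play 67, release: 1     # G, held a little longer
  sleep 1
end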

MUSIC AND MATH

So much of music is math (rhythms, chord intervals, scale intervals), and this kind of notation taps into my (limited) understanding of the arithmetic and ratios that relate to music. Being able to manipulate pitch and rhythm as MIDI numbers and sleep times (and even FX parameters as numbers) is very liberating. It becomes easier (and faster) to identify musical patterns and production techniques, and it also helps advance my own understanding of music theory.
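As a rough illustration (the root note and durations here are arbitrary), a major chord is just interval arithmetic on a MIDI number, and a dotted rhythm is just a ratio of sleep times:

root = 60            # middle C as a MIDI number
play root            # the root
play root + 4        # major third: +4 semitones
play root + 7        # perfect fifth: +7 semitones
sleep 0.75           # a dotted eighth...
play root + 12       # octave: +12 semitones (a 2:1 frequency ratio)
sleep 0.25           # ...answered by a sixteenth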

COMPOSING MUSIC

Learning music theory, learning to play an instrument or learning a DAW all have a pretty steep learning curve. Sonic Pi doesn't: you can create interesting musical arrangements within a week of picking up the program. It allows you to explore concepts of music theory and production that would seem impenetrable with any other DAW or instrument, and in this way encourages an appetite for advancing your musical knowledge. Learning to play guitar or piano involves some basic music theory and a lot of exercises training your motor skills; a combination of those motor skills and theory helps you compose and generate new music, with most of that music (or all of it!) never notated, just a riff that might exist in one's head as muscle memory. With a combination of coding skills and theory, using Sonic Pi is like generating music through the notation itself, and it encourages a level of experimentation with rhythms and harmonies where you would have to be fairly advanced to do the same with any other instrument, program or tool. With Sonic Pi, it is easier and faster to develop intuition as a composer/producer.
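As a small sketch of that kind of experimentation (the scale and timings are arbitrary choices), Sonic Pi will hand you a scale by name so you can play with it before you have internalised the theory behind it:

notes = scale(:e3, :minor_pentatonic)  # the theory is looked up for you
8.times do
  play notes.choose                    # pick a random note from the scale
  sleep [0.25, 0.5].choose             # and a random duration
end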

LIVE LOOPS

I got into music production because of the album 'Endtroducing...' by DJ Shadow, which is constructed solely from samples. The idea of manipulating samples and loops is what a lot of hip-hop/electronic/pop production is based on. In DAWs (for me), looping forms part of the editing process and is treated as a fairly static element in the track: I copy-paste/duplicate a region to loop it. The same goes for automating certain parameters (e.g., sliding the cutoff, or bypassing, i.e., switching the mix from 1 to 0). In a DAW, automating parameters usually eats a lot of processing power (this is true of Pro Tools, at least). With Sonic Pi this is not the case. Using live_loop not only simplifies the manipulation of loops, it allows for far more fluid, complex and interesting manipulation (e.g., randomisation, or Sam Aaron's probabilistic sequencer), where the amount you can morph a loop seems to extend to infinity.
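A minimal sketch of that kind of fluid looping (the samples, synth and probabilities here are placeholder choices): each live_loop keeps running while you edit and re-run the code, and calls like one_in and rrand give the probabilistic, ever-morphing behaviour described above.

live_loop :drums do
  sample :bd_haus                                       # kick on every half-beat step
  sample :drum_cymbal_closed, amp: rand if one_in(2)    # hat fires about half the time, at a random level
  sleep 0.5
end

live_loop :bass, sync: :drums do
  use_synth :tb303
  play :e1, cutoff: rrand(60, 110), release: 0.4        # the cutoff wanders on every pass
  sleep 1
end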

So why not just use a drum machine/step sequencer? (I was a big user of Hydrogen, once upon a time...) They do allow you to create looped patterns where each pattern can be modified, even on the fly. What I find interesting is that, because Sonic Pi reads the code in a buffer line by line, the music tends not to sound too machine-like. If you have two loops (each containing a beat sequence) playing simultaneously in Hydrogen (or Pro Tools), it sounds very much like a machine, because everything lands exactly on the beat. As far as I understand, Sonic Pi has a self-correcting mechanism for timing exceptions, which makes it sound a little more organic than a normal drum machine (I could be wrong on this point, as I have gauged it purely by ear).
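Whatever the scheduler actually does internally, you can also push a loop off the grid on purpose. This is only a sketch of one way to humanise the timing, not a description of Sonic Pi's own timing mechanism:

live_loop :snare do
  jitter = rrand(0, 0.02)   # a few milliseconds of human drift
  sleep 1 + jitter          # land slightly behind the beat
  sample :sn_dolf
  sleep 1 - jitter          # give the time back so the loop stays exactly two beats long
end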

WHAT ELSE?

It's portable and shareable!

This is as much as I can think of for now. If anyone has any questions, would like to add to (or subtract from) this, or has a different take on it, I would love to hear it!

9 Likes

When I look at a DAW (even the simplest one), all I can see is a huge number of buttons and sliders and meters, etc... I can't relate to them... I can't say 'if I turn that one, the sound will change like this'... They make me shy away from making music...

When I look at Sonic Pi, I see a blank page that I can write sounds on... If I want a sound to change, I can do that, and see the results. I've got to the stage now where I can think... 'Hmm... if I use_transpose +12 on that sound... it will change like this'... without having to do it...

And that draws me onwards, towards making music.

Eli...

3 Likes

DAWs serve a lot of different parts of the music-making process, and many of them assume a fair amount of background in music theory, sound production, or signal processing in order to make sense of the interface. (If you do have the requisite background, you'll look around and find all the familiar things conveniently placed where you would expect them; you expect it because a lot of DAWs build off the standard analog hardware that people have been using for a long time.)

Sonic Pi, I think, fills a role closer to the DAWs that can be used as an instrument/composition tool, like Ableton or Renoise (its timing mechanism is more like Ableton; its structural way of making sounds happen is perhaps a little closer to Renoise). Part of the reason it's fun and appeals to people without a musical background is that it doesn't assume as much, so you can discover things for yourself; but it can be used differently if you do have that background. Eventually, people without the background converge on roughly the same principles that come with musical training, but through their own process of discovery rather than by learning a lot of accumulated rules, some of which may not be as applicable in a contemporary setting. One outcome of using Sonic Pi or other live-coding environments is that you might come to appreciate why DAWs are set up the way they are: rather than looking confusing, they turn out to incorporate a huge amount of accumulated knowledge and experience, compacted into a very dense format. An airplane cockpit probably looks confusing too, but once you learn why everything is there, you realize it's not a confusing mess but a masterclass in concision, information density and packing in only what's necessary (when a lot of stuff is necessary).

Hardware is expensive and time-consuming, so tools that were built in the era of hardware, that emulate it, or that build off lessons learned from it tend to carry a lot of hard-won built-in knowledge. But they may also incorporate constraints that no longer exist once you're working in software, so some of the music live-coding environments serve to strip away the assumptions, letting you rediscover the ones that make sense and discard the ones that don't.

3 Likes

Great thread!

The answer I've given to this before is that in a DAW you are limited to navigating menus and buttons. This means that all the stuff you use frequently (play, stop, record, etc.) is right at hand, but if you need something more specific ("I need a softer knee on the compression for the floor tom mic") then you need to know which menus to navigate to.

With a code-based interface, those menus exist in your mind. If you know the name of a param then you can type it without having to navigate through a path. This introduces a trade-off between discoverability (which you get with menus and buttons) and raw power (I can access as many functions/params as I can remember, as fast as I can type them).
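As a rough sketch of that trade-off (the sample, FX and values are arbitrary examples, not a recipe), the "menu" is whatever parameter names you can recall, typed straight into the call:

with_fx :compressor, threshold: 0.2, slope_above: 0.5 do   # roughly the "softer compression" case above, no menu diving
  sample :loop_amen, rate: 0.5, cutoff: 90, amp: 0.8       # every opt typed directly by name
end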

What is "raw power"? As an example, think about sampling a sound from an input on your sound card (say you are working with a good guitarist or something). Let's say you hit on a really good idea, some cool looping effect or something, and you want to extend that effect to the other 31 sound inputs on your fancy sound card, because you have a whole orchestra of players, each with their own microphone. In Sonic Pi you'd wrap the code in something like:

32.times do |input_number|
   ... your live loop code here ...
   synth :sound_in, input: input_number
   ...
end

Obviously that's a trivial example that you could probably do in most DAWs by clicking "copy channel settings" etc., but it's a lot of clicks! And it doesn't scale: what happens if people love your piece and you have to perform it with 256 microphones? And so on... This example is adapted from one that my lecturer at university (Scott Wilson) gave me when discussing the difference between MaxMSP (a GUI) and SuperCollider (code). His use case was a sound system called BEAST, which had 6 subwoofers and 72 speakers (think 5.1 surround sound on steroids). Having tested your code on a normal surround sound system, it was easy to tweak a couple of numbers and use the big system for a gig. MaxMSP users (like me) had to do a lot of clicking around! It took me another 10 years to realize what he was trying to say...

2 Likes

I think the potential of a portable, shareable, executable notation system that's appropriate to contemporary music (but flexible enough to account for more classical styles) is one of the more interesting aspects of live-coding systems. A lot of recent music is essentially tied to the DAW on which it was made: you wind up with "Ableton-style" music or "Renoise-style" music, which is fine, since they are essentially instruments, but it might be nice to have an "FBX for music," so that people can move around a little more easily. (In 3D, people used to be pretty much stuck with their choice of Maya, Max, Softimage or Houdini before standardization efforts made things work a lot more smoothly.)

abcjs (https://github.com/paulrosen/abcjs) is pretty interesting, particularly when you have the lightweight text format combined with the rendered music notation and the audio output, all at once, like this: https://ds604.neocities.org/abcjs_goldbergVariations_02062018.html. I could imagine something like that, but incorporating OSC and backed by an alternate audio engine.



As to the relation between node-based navigation like Max/MSP's and text navigation: I had some minimal experience with Max/MSP while in school, but my main experience with the node-based way of working is from VFX programs like Nuke and Houdini. I'm not exactly sure how similar they are... but FWIW, here's my experience working in the two paradigms:

I initially used Maya a lot, writing MEL scripts, before realizing that what I was trying to do could be accomplished more flexibly in Houdini by wiring nodes together, combined with some, but much less, scripting. What I've come to appreciate more recently is that the node graphs in Houdini essentially amount to Lisp programs, just presented in a format other than text, with the wiring and rewiring of things amounting to the structural editing ("barfing out parentheses," or whatever it is) in parinfer. Scripting still arises when an imperative structure makes things easier or more straightforward (loops can be added within the graph, but they are a little cumbersome). But the important part, as far as usage is concerned, is that any given node, or grouping of nodes, can be viewed either as a packaged-up interface with parameters (the GUI representation) or as the piece of code that produces it, which you can modify, with the changes reflected in the interface. The nodes can be dragged around on the canvas. The experience then becomes something like: you build yourself an instrument, either by writing code or by wiring nodes, and then you play the instrument. You build an FX rig that gives you the effect you want, and then you modify parameters and animate them to give the specific outcome called for in the shots you're working on.

What you gain from being able to use the spatial channel is domain-modeling capability. Rather than the GUI forcing a preconceived idea of how things are related on you, or leaving you to imagine the relations between things which are similarly named but spatially separated, you build up your program and then place related things next to each other by dragging them together on the canvas. This reduces, for example, the need to carefully name things in a way that shows they're related, because you can see that they're related by their proximity. Also, if something is less important, or you don't need to be concerned with it, you just make it smaller or drag it off to the side, just as you might in Illustrator. It still executes just the same, but the interface lets you express additional information by size, positioning, or changing the color or background of what are presented as nodes but could just as well be chunks of text (as it is now, the text is usually in a parameter pane rather than directly on the canvas). In working with text files, I sometimes find it annoying that everything is the same size, because what I want to focus on is scattered in a few different places, buried in less important setup code that is not yet stable enough for me to want to commit to abstracting it out and naming it.

The way this might make sense for Sonic Pi is that, when loops are executing simultaneously, it would seem natural to put them side by side (like the side-by-side Renoise blocks). Draggable pieces of text that can be spatially positioned to reflect their musical outcome would reduce the need to think about whether something written after something else in the document actually occurs after it, or is just there because that's how text works. In the example with the speakers, it might make sense to spatially position pieces of code so that they essentially "look like" some aspect of the speaker setup. (This is what often happens when you rig a character: the different portions of the rig are placed on the canvas so that they reflect which part of the rig they're operating on. The outcome is that it's obvious what's going on, so there's a lot less documentation needed when you hand it off to someone else.)

I'm not quite sure how this part corresponds to Max/MSP, but since Houdini is designed for FX shots, with changing geometry and topology and with assets that get updated all the time, a change in the physical arrangement would be accomplished by piping a different piece of geometry through the same rig. So I guess that equates to wrapping the block of code in the repeat block. While changing physical geometry tends to be quite challenging in CAD packages built around the notion of static entities (duplication and copy-paste enter the picture, making it difficult to keep track and make further changes), the requirements of animation, and FX work in particular, mean that the setups for Nuke and Houdini are pretty different from the setup for, say, Photoshop and maybe SolidWorks or Rhino. The focus is on the transformations rather than the outcome, so mapping over a different domain (an artist added a bunch more vertices to add detail to a model; add a few more mountains; add more swirlies to the explosion; more lens flare, more cowbell) may bring up new special cases that have to be dealt with, but otherwise doesn't change the content of the computation.

I might have gone flying off topic here... but I guess the point is that the setup in some of the VFX programs maintains a lot of the straightforwardness of text (for all the fanciness of the program, the scene files are pretty much just chunks of text, with some additional positional and color information), while the way it is presented in the interface gives the level of interpretability and informativeness you usually get from a GUI. In my experience it is effective, so it may present a model for minimally bridging the traditional divide between text and GUI ways of working and getting a best-of-both-worlds solution.

1 Like

Why limit yourself to one or the other? I still use my DAW for a few things:

  1. Preparing samples (adding nice effects to them).
  2. Mixing (because I like using surround sound effects and binaural stuff).
  3. Mastering my songs to get a good end product.
  4. Recording my hardware synths: in a DAW I can do multiple takes very quickly to get things sounding just right.

What I've found is that even though I'm very proficient with my DAW, I still have a hard time arranging tracks in it, even after seven or so years of use. On the other hand, what little coding knowledge I have is already enough for me to arrange songs 10x faster in Sonic Pi.

So the way I see it, my DAW has become useless to me for arranging but is still useful for the other things I mentioned. You should definitely keep a DAW around, at the very least if you plan on mastering your own tracks. Even something free like GarageBand is more than capable of creating a great-sounding mastered track.

5 Likes