Machine learning musical patterns (research from Google)


Thought this might be of interest to some people here:

As far as I can tell, it covers various approaches for generating musical material from a small collection of simpler sources. For example, taking two drum beats and “mixing” them in such a way that they keep the “spirit” of both while introducing new variations.
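For context, this kind of “mixing” is usually done by interpolating in a learned latent space (the approach used by Magenta’s MusicVAE) rather than by blending the note data directly. Here’s a minimal sketch of just the interpolation step, with made-up latent vectors standing in for a trained encoder/decoder:

```python
# Sketch of latent-space interpolation between two beats.
# A real model (e.g. MusicVAE) would encode each drum pattern into a
# latent vector and decode interpolated vectors back into patterns;
# the vectors below are hypothetical stand-ins for that encoder.

def lerp(a, b, t):
    """Linearly interpolate between latent vectors a and b at position t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# Pretend latent codes for two drum beats.
beat_a = [0.9, -0.2, 0.1]
beat_b = [-0.4, 0.8, 0.5]

# Points along the path between them: each would decode to a pattern
# that blends traits of both beats, with novel variations in between.
blends = [lerp(beat_a, beat_b, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

The endpoints reproduce the original beats exactly, and the midpoints are where the interesting hybrids appear.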

It’s also slightly encouraging that the ensemble version (the trio videos), where the models are trained across a group of instruments, isn’t quite at the level that a human (with good taste) could produce (in my opinion). Bedroom DJs have nothing to fear, for now at least! It’s still very impressive, though, when you consider that they are effectively teaching models to jam with each other.

With regard to Sonic Pi, it’s entirely possible that the models themselves could be ported into a Ruby object, but it would be a lot of work. If someone wants to fund Sam to lead a PhD group on this for a few years, please make yourself known :slight_smile: