Open source beat matching crossfader

Hopefully someone out there will find this useful or cool.

I hacked together a quick application that plays a couple of MP3s with a crossfader. It downloads the beat information for each song from Echonest (the position of each and every beat … booyah!), then beat-matches them using granular resynthesis to time-stretch the audio (no pitch shifting).
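The core of beat matching is just deriving a stretch ratio from the two beat grids. Here's a hedged sketch of that idea (illustrative names, not the actual repo code): average the beat-to-beat intervals that an Echonest-style analysis hands you, and the ratio between the two tracks' intervals is the tempo factor you'd feed to a time-stretcher such as SoundTouch's setTempo().

```cpp
#include <cstddef>
#include <vector>

// Average gap between consecutive beat timestamps (seconds).
// Assumes beats.size() >= 2.
double averageBeatInterval(const std::vector<double>& beats)
{
    double total = 0.0;
    for (std::size_t i = 1; i < beats.size(); ++i)
        total += beats[i] - beats[i - 1];
    return total / static_cast<double>(beats.size() - 1);
}

// Stretch factor that makes trackB's beat grid line up with trackA's.
// > 1.0 means B must be sped up; < 1.0 means slowed down.
double tempoRatio(const std::vector<double>& beatsA,
                  const std::vector<double>& beatsB)
{
    return averageBeatInterval(beatsB) / averageBeatInterval(beatsA);
}
```

A real implementation would also phase-align the grids (offset the start so beats land together), not just match tempo; this only shows the ratio half of the job.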

The best explanation and demo is a YouTube video I put together, here:

A quick list of the libraries used in this project:
Echonest – to retrieve beat information
jsonCpp – for reading JSON data from Echonest
SoundTouch – for time-stretching the audio
mpg123 – for decoding MP3s
Juce – for everything else - duh!

  • all libraries are GPL- or Apache-licensed; all are fully included except Juce
  • all libraries have the implementation included and can be used without separate compilation, except mpg123

Git source is here:

A more in-depth explanation is here:

Excellent stuff, well done!

The video demo is quite impressive! Good job!

Awesome work bro!

Wowza, hotness. Very impressed.

Tiny technical quibble with the code: you seem to mix spaces and tabs in your indentation, which renders confusingly on GitHub (and probably locally, unless people use the same tab stops). (You also have a huge number of public variables in your classes… scary, kids! :smiley: But you can’t argue with the results…)

I should add that my juce mpg123 interface there isn’t actually GPL or Apache but my own personal IDC license (Summary: “I Don’t Care what you do with it as long as you don’t bother me!” :-D)

Very nice stuff!

I note you made a comment in the code about interleaved vs. parallel audio streams (implying you preferred interleaved). My experiments in this direction agree with you: interleaved streams seem 10-15% faster to read, roughly half due to better use of cache, and half because the compiler can unroll the inner loop when the number of channels is a constant. I ensure that by templating the function on the number of channels, so I get a separate implementation for each possible channel count (that being 1 and 2 right now :-D).
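For anyone curious what that templating trick looks like, here is a minimal sketch (my own illustrative names, applying a simple gain so the example stays self-contained): the channel count is a template parameter, so the inner loop has a compile-time-constant bound and the compiler is free to unroll it.

```cpp
#include <cstddef>

// Channel count is a template parameter, so the inner loop bound is a
// compile-time constant: the compiler can fully unroll it per instantiation.
template <int NumChannels>
void applyGainInterleaved(float* data, std::size_t numFrames, float gain)
{
    for (std::size_t frame = 0; frame < numFrames; ++frame)
        for (int ch = 0; ch < NumChannels; ++ch)   // constant bound: unrollable
            data[frame * NumChannels + ch] *= gain;
}

// Dispatch once at the boundary; separate implementations exist for each
// supported channel count (1 and 2, as in the post above).
void applyGain(float* data, std::size_t numFrames, int numChannels, float gain)
{
    if (numChannels == 1)      applyGainInterleaved<1>(data, numFrames, gain);
    else if (numChannels == 2) applyGainInterleaved<2>(data, numFrames, gain);
}
```

The one-time branch on channel count is paid per buffer, not per sample, which is why the unrolled inner loop wins.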

Now, if you do any serious processing, this 10-15% becomes fairly insubstantial, but I still wish Juce were interleaved; it would work better with pretty much all of my code…

Thanks, all - great to get such a nice reception.

Regarding tabs/spaces - good feedback, I hadn’t really thought of that. Though I guess I did say it’s a hack … other quick improvements I had thought of were: moving the Echonest read onto a non-blocking background thread, swapping out the home-grown JSON reader for the new Juce one, and removing the public variables … yeah.
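The background-thread idea could be sketched like this (hedged, portable std::async version rather than Juce's own thread classes; fetchBeatData and the track id are stand-ins for the real blocking Echonest call): kick the fetch off asynchronously and let the UI thread poll the future with a zero timeout instead of stalling on the network.

```cpp
#include <chrono>
#include <future>
#include <string>

// Stand-in for the blocking Echonest HTTP request + JSON parse.
std::string fetchBeatData(const std::string& trackId)
{
    // a real version would do the blocking network call here
    return "beats-for-" + trackId;
}

// Launch the fetch on a background thread; the caller gets a future it can
// poll (wait_for with a zero timeout) while it keeps painting the UI.
std::future<std::string> startBeatFetch(const std::string& trackId)
{
    return std::async(std::launch::async, fetchBeatData, trackId);
}
```

Usage: call startBeatFetch() when the track loads, then each UI tick check `pending.wait_for(std::chrono::milliseconds(0)) == std::future_status::ready` before calling `get()`.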

But hey - sometimes expedience trumps thoroughness for me. This was started (though not finished) for a music hackday (a really cool event that I recommend to anyone - they happen yearly in SF, NY, and Berlin, and maybe other places). Basically a “you have 24 hours to construct a hack using one of the new techs presented here for cash prizes” event.

With regards to interleaved data … interesting to note that it’s that much faster, which makes sense, as most of the heavy DSP libs out there (FFTW, Accelerate, and SoundTouch - which does the granular resynthesis here) use interleaved data. I don’t so much wish that all Juce buffers were interleaved, but I have sometimes thought it would be useful to have the option, since translating them back and forth each time (as I do now) is certainly not the fastest. I could cache the converted data, I suppose, but that has its own problems.
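The back-and-forth translation mentioned above is just this pair of loops (an illustrative sketch, not lifted from the repo): Juce hands you one float array per channel (planar), while SoundTouch wants frames interleaved, so you shuffle on the way in and unshuffle on the way out.

```cpp
#include <cstddef>

// planar:      one array per channel, e.g. planar[ch][frame]  (Juce-style)
// interleaved: frames packed together, e.g. L R L R …        (SoundTouch-style)

void interleave(const float* const* planar, float* interleaved,
                int numChannels, std::size_t numFrames)
{
    for (std::size_t frame = 0; frame < numFrames; ++frame)
        for (int ch = 0; ch < numChannels; ++ch)
            interleaved[frame * numChannels + ch] = planar[ch][frame];
}

void deinterleave(const float* interleaved, float* const* planar,
                  int numChannels, std::size_t numFrames)
{
    for (std::size_t frame = 0; frame < numFrames; ++frame)
        for (int ch = 0; ch < numChannels; ++ch)
            planar[ch][frame] = interleaved[frame * numChannels + ch];
}
```

Doing this per processing block is the cost being discussed: two full passes over the buffer that pure-interleaved code would never pay.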

With regards to licensing … I’m glad you chimed in. I had realized that I still needed to drop you a line separately and make sure it was open-ish code that I swiped … but had kind of assumed it was. I personally use the “do whatever with my code because I clearly don’t care a whit” style of license as well.

Hmmm… I’ve noticed a couple of spam messages on the forum recently. Hopefully the new forum will help sort this out…