Using ResamplingAudioSource


I’m trying to play back an audio file using a time-stretching algorithm that I have, and there’s something I’m just not getting. I’m able to play the file fine without the stretching, based on the approach in the demos: set up an AudioFormatReaderSource for the file, plug that source into an AudioTransportSource, plug that into an AudioSourcePlayer, which is the callback recipient for an AudioDeviceManager. Then call start() on the AudioTransportSource (which, it seems, just sets the state of the transport to playing rather than stopped, letting the player know it can go ahead and ask for data in response to the endless nagging of the AudioDeviceManager for more food, er… sample blocks).
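For reference, here's a condensed sketch of that working pass-through chain (names like `audioFile` are illustrative, member lifetimes and error handling are omitted, and exact signatures vary between JUCE versions):

```cpp
// Working pass-through chain: reader -> transport -> player -> device manager.
AudioFormatManager formatManager;
formatManager.registerBasicFormats();

// assumes audioFile is a valid File pointing at a readable audio file
auto* reader = formatManager.createReaderFor (audioFile);
AudioFormatReaderSource readerSource (reader, true); // true: delete reader when done

AudioTransportSource transport;
transport.setSource (&readerSource);

AudioSourcePlayer player;
player.setSource (&transport);

AudioDeviceManager deviceManager;
deviceManager.initialise (0, 2, nullptr, true);   // 0 inputs, 2 outputs
deviceManager.addAudioCallback (&player);

transport.start();   // flips the transport state to "playing" so blocks flow
```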

So all that is basically working. Now I actually want to do something with the data instead of just passing it through.

Since I’m time-stretching, it looks to me like I need to use a ResamplingAudioSource - after all, my process will generally produce output buffers that are larger than the input buffers. It’s just not clear to me how to hook all this together. If I missed something illustrating this in the samples, please forgive me and point me to the right place.

Common sense says that I should make the AudioFormatReaderSource the source for the ResamplingAudioSource, make the resampling source the source for the transport source, and all will be well. The first step is easy - just pass the AudioFormatReaderSource to the constructor of the resampling source. But the next step is not so easy - the transport source needs a PositionableAudioSource, and the ResamplingAudioSource is not one of those.
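In other words, the "common sense" wiring would look something like this sketch, and the last line is where it falls down (the types here are JUCE's, the variable names are mine):

```cpp
// The obvious wiring attempt: reader -> resampler -> transport.
ResamplingAudioSource resampler (&readerSource, false); // false: don't take ownership

// This is the sticking point: setSource() wants a PositionableAudioSource*,
// but ResamplingAudioSource derives only from AudioSource, so this won't compile.
transport.setSource (&resampler);
```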

So, the first question: am I barking up the right tree, at least? Should I be approaching this from the point of view of creating AudioProcessors instead of using the ResamplingSource? I am not looking to make (at this point) a plug-in of any kind.

And the second question - if using the ResamplingAudioSource is a good approach, it looks like what I need to do is inherit from it and override getNextAudioBlock() in order to do my processing. Should I also inherit from PositionableAudioSource and figure out how to handle the somewhat nebulous meaning of repositioning when you’re resampling (are you repositioning in source samples or in target samples)? That would let me plug the resampling source into the AudioTransportSource smoothly.
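On the source-vs-target-samples question, one plausible convention (the names and the struct below are mine, not JUCE's) is to let the transport position in output samples and have the wrapper map back to input samples using the stretch ratio, where ratio = input samples consumed per output sample produced, as in ResamplingAudioSource:

```cpp
#include <cstdint>

// Sketch of a position-mapping convention for a time-stretching source:
// the outside world (e.g. an AudioTransportSource) thinks in output
// (stretched) samples; the wrapper converts to/from input samples.
struct StretchPositionMap
{
    double ratio; // input samples consumed per output sample (>1.0 = faster)

    std::int64_t outputToInput (std::int64_t outputPos) const
    {
        return static_cast<std::int64_t> (outputPos * ratio);
    }

    std::int64_t inputToOutput (std::int64_t inputPos) const
    {
        return static_cast<std::int64_t> (inputPos / ratio);
    }
};
```

So with a ratio of 0.5 (playing at half speed), seeking to output sample 44100 means reading from input sample 22050.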

I’d greatly appreciate any feedback on this approach. Thanks.



Hmm, yes, that is a bit tricky. The transport source does its own resampling, so you don’t normally have to do this.

But if you’re writing your own class, it’s not a problem - just start with a copy of ResamplingAudioSource, change it to use your time-stretch algorithm, and also make it implement the PositionableAudioSource interface, so it can slot straight into an AudioTransportSource.
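A rough skeleton of what that suggestion might look like (not compiled against any particular JUCE version - method signatures have changed over the years, and the stretch-processing body is left as a placeholder for your own algorithm):

```cpp
// A time-stretching source that also implements PositionableAudioSource,
// so an AudioTransportSource will accept it directly.
// Positions reported to the outside are in *output* (stretched) samples.
class TimeStretchAudioSource  : public PositionableAudioSource
{
public:
    TimeStretchAudioSource (PositionableAudioSource* inputSource, double stretchRatio)
        : input (inputSource), ratio (stretchRatio) {}

    void prepareToPlay (int samplesPerBlock, double sampleRate) override
    {
        input->prepareToPlay (samplesPerBlock, sampleRate);
        // ...allocate whatever intermediate buffers your stretcher needs
    }

    void releaseResources() override       { input->releaseResources(); }

    void getNextAudioBlock (const AudioSourceChannelInfo& info) override
    {
        // Pull roughly info.numSamples * ratio samples from the input,
        // run them through your time-stretch algorithm, and write
        // info.numSamples stretched samples into info.buffer.
    }

    // PositionableAudioSource: convert between output and input positions.
    void setNextReadPosition (int64 newPosition) override
    {
        input->setNextReadPosition ((int64) (newPosition * ratio));
    }

    int64 getNextReadPosition() const override
    {
        return (int64) (input->getNextReadPosition() / ratio);
    }

    int64 getTotalLength() const override
    {
        return (int64) (input->getTotalLength() / ratio);
    }

    bool isLooping() const override        { return input->isLooping(); }

private:
    PositionableAudioSource* input;
    double ratio; // input samples consumed per output sample
};
```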