Tips on how to handle playback of a large number of samples?

I’m working on an audio plugin that plays back a large number of samples. I’m including about 800 WAV files using BinaryData.

I’m playing back about 48 samples at the same time (overlapping voices, different mic positions being played back in parallel, etc.), and for every “cluster” there’s always a new set of samples (due to velocity layers, round robins, etc.). I’ve got it working, but I’m hitting the DSP meter in the host DAW big-time…

My current approach is to use MemoryInputStream -> AudioFormatReader -> AudioFormatReaderSource -> AudioTransportSource. Additionally, I’m using the resampling in the AudioTransportSource.

This means that every time I “load” a sample (i.e. reconfigure a voice to play a different sample) I have to allocate three new stream objects, since you can’t, for example, point an existing MemoryInputStream at a different memory block… So the only objects I can reuse are the AudioTransportSources in each voice.
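Roughly what each trigger looks like right now (a simplified sketch, not my actual code; Voice, its members and loadIntoVoice() are just illustrative names):

```cpp
#include <JuceHeader.h>

// Illustrative voice struct: the transport is reused, the reader source is replaced per trigger.
struct Voice
{
    juce::AudioTransportSource transport;
    std::unique_ptr<juce::AudioFormatReaderSource> readerSource;
};

void loadIntoVoice (Voice& voice, const char* resourceName)
{
    int dataSize = 0;
    const char* data = BinaryData::getNamedResource (resourceName, dataSize);

    // Three heap allocations on every trigger:
    auto stream = std::make_unique<juce::MemoryInputStream> (data, (size_t) dataSize, false);

    juce::WavAudioFormat wav;
    std::unique_ptr<juce::AudioFormatReader> reader (wav.createReaderFor (stream.release(), true));
    const double sourceRate = reader->sampleRate;

    voice.readerSource = std::make_unique<juce::AudioFormatReaderSource> (reader.release(), true);

    // Only the AudioTransportSource survives between triggers; it just gets a new source
    // and resamples from sourceRate to the output rate.
    voice.transport.setSource (voice.readerSource.get(), 0, nullptr, sourceRate);
    // ...then setPosition (0.0) and start() as before.
}
```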

I’m wondering if there’s a more efficient approach I could take to accomplish all this, preferably one that doesn’t allocate any heap memory during processing. Maybe there’s an API or technique that I’ve overlooked?

I think I’ll have to stream all the data into “pure” audio buffers instead, then build my own playback code using the LagrangeInterpolator to resample…
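Roughly what I have in mind (an untested sketch; BufferVoice and its members are just illustrative names, it’s mono for brevity, and end-of-buffer handling is glossed over):

```cpp
#include <JuceHeader.h>

// Per-voice playback from a preloaded AudioBuffer<float>, resampled with LagrangeInterpolator.
struct BufferVoice
{
    const juce::AudioBuffer<float>* sample = nullptr;  // owned elsewhere, filled at startup
    juce::LagrangeInterpolator interpolator;
    int readPos = 0;
    double ratio = 1.0;   // sourceSampleRate / outputSampleRate

    void start (const juce::AudioBuffer<float>& s, double sourceRate, double outputRate)
    {
        sample = &s;
        ratio = sourceRate / outputRate;
        readPos = 0;
        interpolator.reset();
    }

    void render (juce::AudioBuffer<float>& out, int startSample, int numSamples)
    {
        if (sample == nullptr || readPos >= sample->getNumSamples())
            return;

        // No allocation here: both buffers already exist. Real code would also have to
        // clamp so the interpolator never reads past the end of the sample buffer.
        const int used = interpolator.process (ratio,
                                               sample->getReadPointer (0, readPos),
                                               out.getWritePointer (0, startSample),
                                               numSamples);
        readPos += used;
    }
};
```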

But it feels stupid to read all that data (around 300MB!) from memory just to store it somewhere else in memory. I think it would be better if I could store the pure audio buffers (as float[]?) in “BinaryData” to begin with instead of the WAV file data.

Any tips on how I can do this in a simple, efficient, and safe (i.e. platform-independent) manner?

If it’s WAV or AIFF, you might want to look at the MemoryMappedAudioFormatReader stuff; that’s about the most efficient possible way to stream from disk.
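Something along these lines (just a quick sketch; the file path is up to you):

```cpp
#include <JuceHeader.h>

// Memory-map a WAV file; once mapped, it can be read like any other AudioFormatReader.
std::unique_ptr<juce::MemoryMappedAudioFormatReader> openMappedWav (const juce::File& file)
{
    juce::WavAudioFormat wav;
    std::unique_ptr<juce::MemoryMappedAudioFormatReader> reader (wav.createMemoryMappedReader (file));

    if (reader != nullptr)
        reader->mapEntireFile();   // or mapSectionOfFile() for just the region you need

    return reader;
}
```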

Thanks, Jules! Actually I’m not that interested in streaming from disk; I’d be happy enough having them all in memory. Each sample is very short, about 1 second. I just want to be able to play them back in the most CPU-efficient way possible.

I ran some profiling with Instruments, but the results were inconclusive. I suspect it’s all the heap allocations (the three stream/reader objects that have to be created every time I trigger a sample instance) that are slowing things down, so I’d like to try getting rid of those somehow.

A quick follow-up on the solution I’ve implemented:

On startup, I stream all samples from BinaryData into an unordered_map of SamplerSound objects, which I later look up (using the resource name as key) when I want to play them back. I adapted the playback code from SamplerVoice to work outside the context of a Synthesiser, since I wanted more control over the playback.
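Roughly, the startup loading looks like this (a simplified sketch of the idea, not my exact code; the root note, envelope times, max length and map type are placeholder choices):

```cpp
#include <JuceHeader.h>
#include <string>
#include <unordered_map>

// Decode every BinaryData resource once into a SamplerSound, keyed by resource name.
std::unordered_map<std::string, juce::ReferenceCountedObjectPtr<juce::SamplerSound>> loadAllSamples()
{
    std::unordered_map<std::string, juce::ReferenceCountedObjectPtr<juce::SamplerSound>> sounds;
    juce::WavAudioFormat wav;

    for (int i = 0; i < BinaryData::namedResourceListSize; ++i)
    {
        const char* name = BinaryData::namedResourceList[i];
        int dataSize = 0;
        const char* data = BinaryData::getNamedResource (name, dataSize);

        auto stream = std::make_unique<juce::MemoryInputStream> (data, (size_t) dataSize, false);
        std::unique_ptr<juce::AudioFormatReader> reader (wav.createReaderFor (stream.release(), true));

        if (reader != nullptr)
        {
            juce::BigInteger allNotes;
            allNotes.setRange (0, 128, true);

            // SamplerSound copies the audio into its own buffer, so the reader
            // can be discarded after this call.
            sounds[name] = new juce::SamplerSound (name, *reader, allNotes,
                                                   60,        // root note (placeholder)
                                                   0.0, 0.0,  // attack / release (secs)
                                                   2.0);      // max sample length (secs)
        }
    }

    return sounds;
}
```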

The performance is great compared to the previous solution; the DSP meter is hardly breaking a sweat now. The drawback is that I now use about twice the memory (since the original raw WAV files are still stored as BinaryData), so I guess the next step is to move the WAVs out to an external file and stream from that instead.