Some questions on BufferingAudioSource


#1

Hi jules,
how do you/others (to your knowledge) use BufferingAudioSource? Do you use or recommend it for, say, replaying sliced material - REX drum loops, for example?

I have a slice player engine that used to load the whole loop (say a REX or Apple loop) into RAM, but I’d now like to try a streamed approach. What I do is use two instances of BufferingAudioSource, alternating between them - so slice 1 uses BufferingAudioSource 1, slice 2 uses BufferingAudioSource 2, slice 3 uses BufferingAudioSource 1, slice 4 uses BufferingAudioSource 2, and so on.
To give you an idea of how I use this class in my slice player: when playback starts I call prepareToPlay() on the FIRST slice in the loop, and then, the first time getNextAudioBlock() gets called for that slice, I spin up a thread to kick off prepareToPlay() on the next slice in the list, and so on.
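
Roughly speaking, it looks something like this (just a sketch - SlicePlayer, sliceStartSample() and prefetchOnThread() are my own illustrative names, not real code from my engine or from JUCE):

struct SlicePlayer   // illustrative only
{
    BufferingAudioSource* sources[2];   // the ping-pong pair
    int current = 0;

    void startSlice (int sliceIndex, int blockSize, double sampleRate)
    {
        BufferingAudioSource* s = sources[current];
        s->setNextReadPosition (sliceStartSample (sliceIndex));  // hypothetical helper
        s->prepareToPlay (blockSize, sampleRate);                // blocks while the buffer fills
    }

    void onFirstBlockOfSlice (int sliceIndex, int blockSize, double sampleRate)
    {
        // on the first getNextAudioBlock() of the current slice, a background
        // thread calls prepareToPlay() on the *other* source for the next slice
        prefetchOnThread (sources[1 - current], sliceIndex + 1, blockSize, sampleRate);  // hypothetical helper
    }
};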

I have been playing around with a hacked version of this class, because (for obvious reasons) there’s a heavy delay incurred whenever prepareToPlay() needs to be called, which I’d like to reduce as much as possible.

It seems to be the case that whenever one calls setNextReadPosition() to change the playback point, one needs to call prepareToPlay() again straight afterwards, so that the buffer gets loaded with the “leading edge” of the audio. Is this correct? I find that whenever I call setNextReadPosition() without the prepare, I don’t get correct playback.
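
In other words, the only pattern that works for me right now is something like this (sketch - source is one of my BufferingAudioSource instances, and the other variable names are just placeholders):

source->setNextReadPosition (newSliceStartSample);       // move the read head
source->prepareToPlay (blockSizeSamples, sampleRate);    // re-fills the buffer, and blocks while doing so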

All this makes sense, but I’ve been exploring different scenarios with my “hacked” version. For example, in some situations, to avoid the delay when calling prepareToPlay(), I’ve tried removing the “while” loop as well as reducing the proportion of the buffer to check for. This works in some situations, but isn’t enough.

For example, I’m now working on multi-channel sliced data, where the kick is on one channel, the hi-hat on another, and so on. This means I need to call prepareToPlay() at the start on not just one slice but, say, four, if I have four tracks (say hi-hat, kick, snare, ride), and this incurs quite a delay up front. I’ve experimented with reducing the amount of buffer “fill” that is checked for, i.e.

while (bufferValidEnd - bufferValidStart < jmin (((int) sampleRate_) / 4, buffer.getNumSamples() / 4)) // NOTICE THE /4 here instead of /2

However, I’m now wondering if I can save time by (for this special case) bypassing the thread stuff altogether and calling readNextBufferChunk() directly until I have enough data in the buffers. For example, I’d like to be able to do this:

while (bufferValidEnd - bufferValidStart < jmin (((int) sampleRate_) / 4,
                                                 buffer.getNumSamples() / 4))
{
    readNextBufferChunk();
} 

which MAYBE might reduce the delays I get.

Trouble is, readNextBufferChunk() is currently private (except to its friend SharedBufferingAudioSourceThread, of course).

Any thoughts ?

What would you think about making readNextBufferChunk() protected instead, and also adding a parameter to prepareToPlay() to specify how much of the buffer needs to load before we leave the “blocking” wait loop?
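
Something like this is what I have in mind (purely hypothetical - this is not what’s in the JUCE headers today):

class BufferingAudioSource  : public PositionableAudioSource
{
public:
    // hypothetical extra parameter: what proportion of the buffer must be
    // valid before the wait loop in prepareToPlay() stops blocking
    void prepareToPlay (int samplesPerBlockExpected, double sampleRate,
                        double minimumFillProportion);

protected:
    bool readNextBufferChunk();   // moved from private, so callers/subclasses could pre-fill directly
};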

Do you think I’m mad to use BufferingAudioSource for this scenario? I’m just trying to keep the amount of loop data I load into RAM to a minimum, for obvious reasons.


#2

I wouldn’t really recommend it for non-continuous reads…

A better approach for something like that would be similar to the way I did the buffering in tracktion. Instead of a single circular buffer, you have a set of blocks, each with a timestamp. The buffering thread then looks at what sections of the file might be needed next - which isn’t necessarily just the next section of the audio, it might jump around - and it then takes the least-recently-used block and refills it. The readers just look through the blocks to find ones that contain the data they need. This is a bit more complicated, but it lets you give the buffering thread much smarter logic, and it works well for looped sections, etc.
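
Very roughly, the shape of it is something like this (just a sketch of the idea, not the actual tracktion code - findLeastRecentlyUsedBlock() and predictNextRegionNeeded() are made-up helpers):

struct CachedBlock
{
    int64 startSampleInFile;     // the timestamp: where in the file this block’s data begins
    int numSamples;
    uint32 lastUsedTime;         // bumped whenever a reader touches the block
    bool isValid;
    AudioSampleBuffer data;
};

// buffering thread: work out which region is likely to be wanted next
// (not necessarily contiguous with the last one), then recycle the
// least-recently-used block with that data
void refillOneBlock (OwnedArray<CachedBlock>& blocks, AudioFormatReader& reader)
{
    CachedBlock* lru = findLeastRecentlyUsedBlock (blocks);    // hypothetical helper
    int64 wantedStart = predictNextRegionNeeded();             // hypothetical helper

    reader.read (&lru->data, 0, lru->numSamples, wantedStart, true, true);

    lru->startSampleInFile = wantedStart;
    lru->lastUsedTime = Time::getMillisecondCounter();
    lru->isValid = true;
}

// readers simply scan the blocks for one whose range covers the samples
// they need, and mark it as recently used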


#3

Would each block in this scheme contain the audio for the WHOLE “slice”, or just the leading edge (with streaming then getting the rest)?


#4

It’d be the whole thing - I just mean that rather than using a circular buffer, which can only contain data from a single contiguous section, it’d be able to buffer data that’s scattered around the file.


#5

Was this technique used for your Tracktion loop/slice player, or for actual playback of your audio tracks (or both)?

Also, if it was for track playback (presumably blocks of “regions”, as in Pro Tools etc…), were these blocks a fixed size? And what sort of frame size - around 512 frames, or 8188?


#6

Can’t remember the details, but I think it used one cache for everything. As for block size, you’d want it to be fairly big so you don’t have too many of them to search through - maybe a quarter or half a second per block.
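
For example, at 44.1KHz:

const double sampleRate = 44100.0;
const int blockSize = (int) (sampleRate * 0.25);   // 0.25s -> 11025 samples per block (0.5s -> 22050)
// compared with a typical 512-sample audio callback, that keeps the number
// of blocks to search per second of buffered audio very small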


#7