AudioSources and playing wave files at a specific offset


I'm starting to use the AudioSource and Transport classes, because I'd like to build a sort of basic multitrack player for wave files.
From what I've understood, all I need is a series of AudioFormatReaders (feeding the wave buffers) attached to a mixer source, which is then attached to a transport source.
Now, given that I need to play a subsection of each wave, and not necessarily from the start of the transport:
say I have wave1 that plays 1000 samples at a 0-sample offset, and wave2 that plays 3000 samples at a 2000-sample offset (from the start of the transport).
How would I tell a positionable audio source (or the mixer, or the transport) not to play from the beginning, but to wait until the wave's start offset is reached?
I've tried looking at the AudioSources code and the AudioDemo code, but it isn't as thoroughly commented as other parts of the framework (maybe because people could reverse-engineer Tracktion?)

Can anybody help here?


Is it a good idea to have an AudioTransportSource for every track, and change its AudioFormatReaderSource in getNextAudioBlock every time I need to play a different wave?
How do I deal with offset positioning of the source player?

Do I need to write a PositionableAudioSourceWithOffset class?


No, definitely not! The transport/positionable sources are there to provide a single stage that you use to set the overall playback position. There should only be one of these.

If you’re trying to do a sequencer you need an extra type of source that would play sequential sections of files - you’d have one of those per track, feed each one through filters to perform that track’s effects, send all the outputs into a mixer, then put the mixer through a transport source.


ok Jules, thanks... I have 1234134 ideas on how to do this, but the only correct one is the one you mention: creating a new AudioSource that takes a list of AudioSubsectionReaders, each with its own offset, so that when it's asked to play, it mixes in each audio buffer at the right time... thanks!
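The core of such a source is the overlap arithmetic inside getNextAudioBlock: for each clip, work out which part of it (if any) falls inside the block the transport is asking for. Here's a minimal JUCE-free sketch of that calculation (the Clip/Overlap structs and clipOverlap function are invented for illustration):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical clip: 'length' samples of audio placed at 'offset'
// samples from the start of the timeline.
struct Clip { int64_t offset; int64_t length; };

// Where a clip intersects the block [blockStart, blockStart + blockLen):
// destStart  - write position inside the output buffer
// srcStart   - read position inside the clip's audio
// numSamples - how many samples to mix (0 if the clip is silent here)
struct Overlap { int64_t destStart; int64_t srcStart; int64_t numSamples; };

Overlap clipOverlap (const Clip& c, int64_t blockStart, int64_t blockLen)
{
    const int64_t begin = std::max (c.offset, blockStart);
    const int64_t end   = std::min (c.offset + c.length, blockStart + blockLen);

    if (end <= begin)
        return { 0, 0, 0 };              // clip doesn't play in this block

    return { begin - blockStart,         // offset into the output buffer
             begin - c.offset,           // offset into the clip's samples
             end - begin };              // number of samples to copy/mix
}
```

With wave2 from the example above (3000 samples at offset 2000), a 512-sample block starting at sample 1920 would mix 432 samples of the clip, starting 80 samples into the output buffer.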

Another question: the audio demo does something like this:

IODevice <- SourcePlayer <- Mixer <- Transport <- FormatReaderSource
                             |    <- SynthSource

What if I do something like this?

IODevice <- SourcePlayer <- Transport <- Mixer <- FormatReaderSource
                                          |    <- FormatReaderSource
                                          |    <- FormatReaderSource
                                          |    <- FormatReaderSource


Yeah, that’s fine, of course. In the demo the mixer is used to mix the transport player with the live synth, but if you don’t need to do that, you can get rid of it.


OK, I understand that. But will anything change if I plug the mixer after the transport, rather than the transport after the mixer? Or does the mixer just not handle positioning? (I mean, will a PositionableAudioSource that isn't attached directly to the transport behave normally?)


Hmm, it seems that I can't attach a mixer to a transport... the opposite works, though... so I shouldn't have my tracks attached to the transport, but to the mixer directly? I don't get the point here.


Ah yes, it’d need to be a positionable mixer… That’d be an easy extension to the mixer class.
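The extension Jules suggests amounts to a mixer that is itself positionable: it forwards seeks to every input and sums their output. A plain-C++ sketch of the idea (the interface and class names here are invented stand-ins, not the real JUCE API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Minimal stand-in for a positionable source: it reports and accepts
// a read position in samples, and adds its audio into a buffer.
struct PositionableSource
{
    virtual ~PositionableSource() = default;
    virtual void setNextReadPosition (int64_t pos) = 0;
    virtual int64_t getNextReadPosition() const = 0;
    virtual int64_t getTotalLength() const = 0;
    virtual void getNextBlock (float* dest, int numSamples) = 0;
};

// A mixer that is itself positionable: seeking it seeks every input,
// so a transport-like stage can reposition the whole mix at once.
class PositionableMixer : public PositionableSource
{
public:
    void addInput (PositionableSource* s)  { inputs.push_back (s); }

    void setNextReadPosition (int64_t pos) override
    {
        position = pos;
        for (auto* s : inputs)
            s->setNextReadPosition (pos);   // keep all tracks in sync
    }

    int64_t getNextReadPosition() const override  { return position; }

    int64_t getTotalLength() const override       // longest input wins
    {
        int64_t longest = 0;
        for (auto* s : inputs)
            longest = std::max (longest, s->getTotalLength());
        return longest;
    }

    void getNextBlock (float* dest, int numSamples) override
    {
        for (auto* s : inputs)
            s->getNextBlock (dest, numSamples);   // each input adds in
        position += numSamples;
    }

private:
    std::vector<PositionableSource*> inputs;
    int64_t position = 0;
};

// Trivial input used only for illustration: emits a constant value.
struct ConstSource : PositionableSource
{
    ConstSource (float v, int64_t len) : value (v), length (len) {}
    void setNextReadPosition (int64_t p) override { pos = p; }
    int64_t getNextReadPosition() const override  { return pos; }
    int64_t getTotalLength() const override       { return length; }
    void getNextBlock (float* dest, int n) override
    {
        for (int i = 0; i < n; ++i)
            dest[i] += value;
        pos += n;
    }
    float value; int64_t length; int64_t pos = 0;
};
```

In JUCE terms you'd inherit from PositionableAudioSource instead, so the transport can sit on top of the mixer.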


ok thanks man, you rock!


Yeah, I've done a PositionableMixerAudioSource, pretty easy to get working. But it doesn't take care of resampling like the transport source does...

So what I need is a PositionableTransportWithMixerAudioSource class!

wow!!! (one class to rule them all)



I think that PositionableAudioSource should have a "setLooping" method as well as isLooping. That way, if you chain sources

PositionableMixerSource <- PositionableResamplingSource <- AudioFormatReaderSource

you'd still be able to tell the resampling source to loop its input reader, without needing to get the pointer out and dynamic_cast it to some class that implements setLooping... imho
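The suggestion could look roughly like this (plain C++ sketch with invented names, not the real JUCE API): if setLooping were virtual on the base class, each intermediate stage could simply forward it to its input, so the whole chain stays controllable from the top without any dynamic_cast.

```cpp
// Hypothetical base class: setLooping/isLooping are virtual, with a
// default implementation that just stores the flag.
struct LoopableSource
{
    virtual ~LoopableSource() = default;
    virtual void setLooping (bool shouldLoop) { looping = shouldLoop; }
    virtual bool isLooping() const            { return looping; }
private:
    bool looping = false;
};

// A pass-through stage (think: a resampling source) that delegates
// looping to whatever it wraps, so toggling looping at the top of the
// chain reaches the reader at the bottom.
struct PassThroughSource : LoopableSource
{
    explicit PassThroughSource (LoopableSource* in) : input (in) {}
    void setLooping (bool b) override { input->setLooping (b); }
    bool isLooping() const override   { return input->isLooping(); }
    LoopableSource* input;
};
```

Calling setLooping on the outermost source then flips the flag on the underlying reader, however many stages sit in between.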

ps. anyway, I'm producing a set of cool classes that extend the audio_sources and make them all positionable, so you can basically plug them together without any restrictions and have them controlled by the transport itself! Could be a good addition to the framework!


Excellent! Did you finish your classes?
I'm looking for a PositionableAudioSource that can play in reverse :slight_smile:


kraken rocks!


I've done 3 audio sources in total since then, the ones I needed most: the PositionableMixerAudioSource (needed if you want to plug it into a transport class), the PositionableResamplingAudioSource (using the internal JUCE IIR, which may not be perfect when changing playback position, but it works), and the SequenceAudioSource, which acts like MidiMessageSequence but uses audio events instead of MIDI messages (a sort of basic single-track sequencer), built on top of the PositionableResamplingAudioSource...

If you want to take a look at them, they're available in the JOST source tree, alongside the other juce audio_sources...

Anyway, when I come back from holidays, I'll give a major push to Jost, implementing the audio sequencer (but also improving the mixer and the MIDI matrix editor), and then I'll code every sort of positionable audio source I can think of...


Cool, I didn't know about your Jost project.
I'll try to code a PositionableResamplingAudioSource that can also play a file in reverse and share my code.