Sending audio at an exact time to an audio device

I generated tones and stored them in an array of AudioSampleBuffer pointers. I send these to an ASIO audio device at a regular interval. When I record the generated tones in a DAW, I can see that they are usually sent at the expected time. However, now and then there is a drift equal to the audio device's buffer size (512 samples, i.e. 10.667 ms at 48 kHz).

Is there a way to transmit the tones at the exact expected time in JUCE? Or, failing that, a way of compensating for or correcting the drift?

If you reveal a bit more about how you do this today, your question might be easier to answer.

I use a do/while loop. Inside it, I call the deviceManager's playSound() with a particular tone created earlier, and then wait a specific time.

do {
    deviceManager.playSound (sounds[0], false);
    // other code
    wait (192);    // wait 192 ms before the next tone
} while (true);

So I expect the tone to be played every 192 ms, but now and then it's played 202.667 ms (192 + 10.667) after the previous tone.

If you want to send the tones with sample accuracy you’ll have to send them from inside the audio thread. Something like this (also see the noise plugin tutorial).

In short, you stuff the output buffer with zeroes before it's time to play, then fill the buffer with the tone (or part of it, if it's too long to fit in one buffer, which it probably is), and then start all over again with the next tone (or the same one).

// member variables, to be initialised at start
int numLeftOfMySample = 0;              // samples of the tone still left to play
AudioSampleBuffer myTone;
int intervalBetweenNotes = 44100 * 2;   // 2 second interval at a 44100 Hz sample rate
int startOfNextNote = intervalBetweenNotes;
int numSamplesSinceStart = 0;

// assumes intervalBetweenNotes > myTone.getNumSamples() + the block size
void getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    const int blockSize = bufferToFill.numSamples;

    // Not time for the sound yet, and no tail left over: output silence only.
    if (numLeftOfMySample <= 0 && numSamplesSinceStart + blockSize < startOfNextNote)
    {
        bufferToFill.clearActiveBufferRegion();
        numSamplesSinceStart += blockSize;
        return;
    }

    int numZeroSamples = 0;
    int numSamplesUsedOfMySound = 0;

    if (numLeftOfMySample > 0)
    {
        // Continue the part of the tone left over from the previous block.
        numSamplesUsedOfMySound = jmin (blockSize, numLeftOfMySample);
    }
    else
    {
        // The tone starts somewhere inside this block: pad with silence up to its start.
        numZeroSamples = startOfNextNote - numSamplesSinceStart;
        numSamplesUsedOfMySound = jmin (blockSize - numZeroSamples, myTone.getNumSamples());
        numLeftOfMySample = myTone.getNumSamples();
    }

    const int toneOffset = myTone.getNumSamples() - numLeftOfMySample;
    const int samplesUsed = numZeroSamples + numSamplesUsedOfMySound;

    for (int channel = 0; channel < bufferToFill.buffer->getNumChannels(); ++channel)
    {
        float* const buffer = bufferToFill.buffer->getWritePointer (channel, bufferToFill.startSample);
        const float* mySoundBuffer = myTone.getReadPointer (0);   // mono tone, copied to every channel

        FloatVectorOperations::clear (buffer, numZeroSamples);
        FloatVectorOperations::copy (buffer + numZeroSamples, mySoundBuffer + toneOffset, numSamplesUsedOfMySound);

        // Clear the end of the buffer if the tone ends before the block does.
        if (blockSize - samplesUsed > 0)
            FloatVectorOperations::clear (buffer + samplesUsed, blockSize - samplesUsed);
    }

    // Update the shared state once per block (not once per channel!).
    numLeftOfMySample -= numSamplesUsedOfMySound;

    if (numLeftOfMySample <= 0)
        startOfNextNote += intervalBetweenNotes;   // schedule the next note

    numSamplesSinceStart += blockSize;
}

Thanks. I'll implement it this way and see what happens.