How much time can we spend in AudioIODeviceCallback::audioDeviceIOCallbackWithContext()?

How much time can we spend in the callback-with-context function? Is it (number of samples in each block / sample rate) seconds? And a second question: what are the maximum and minimum values a sample in outputChannelData can take so that it does not clip during playback?

bump.

As little as possible! The duration of one audio block (buffer size / sample rate) is a hard upper limit, but your system will need to do other work besides calling your audio callback.
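
To put a number on that upper limit, here is a minimal sketch (plain C++, the helper name `blockBudgetMs` is made up for illustration) that computes the theoretical per-block budget and shows how you could time a block of processing against it:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical helper: theoretical time budget for one audio block, in milliseconds.
static double blockBudgetMs (int bufferSize, double sampleRate)
{
    return 1000.0 * bufferSize / sampleRate;
}

int main()
{
    const int bufferSize    = 512;
    const double sampleRate = 44100.0;

    std::printf ("Budget per block: %.2f ms\n", blockBudgetMs (bufferSize, sampleRate)); // ~11.61 ms

    // In a real callback you would time the processing like this, but store the
    // result in an atomic for another thread to read rather than calling printf:
    const auto start = std::chrono::steady_clock::now();
    // ... per-block processing would go here ...
    const auto elapsedMs = std::chrono::duration<double, std::milli> (std::chrono::steady_clock::now() - start).count();

    std::printf ("This block took %.3f ms of its %.2f ms budget\n", elapsedMs, blockBudgetMs (bufferSize, sampleRate));
}
```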

Generally, between -1.0f and 1.0f.
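
If you want to be certain the output never leaves that range, a minimal sketch (assuming the usual layout the device callback gives you: one float array per output channel, each holding numSamples values) could hard-clip every sample on the way out:

```cpp
#include <algorithm>

// Hard-clip every output sample into [-1.0f, 1.0f].
// Assumes one float array per channel, each holding numSamples values.
static void clampOutput (float* const* outputChannelData, int numOutputChannels, int numSamples)
{
    for (int ch = 0; ch < numOutputChannels; ++ch)
        if (auto* channel = outputChannelData[ch])
            for (int i = 0; i < numSamples; ++i)
                channel[i] = std::clamp (channel[i], -1.0f, 1.0f);
}
```

Bear in mind that hard clipping is itself an audible distortion, so in practice it is better to keep your gain staging such that the signal never needs clamping in the first place.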

Thanks for the reply. So if we want to process an AudioProcessorGraph for this block, should we do it outside the callback and just copy the results into outputChannelData when this method is called? (My graph does not require any inputChannelData.) And after the callback, should we start processing the graph immediately for the next callback? I thought that (for 512 samples per block and a 44100 sample rate) we have about 11.6 ms for this method, and that the block is then played back over the following 11.6 ms before the callback is called for the next block. Is this wrong? (This may seem like the same question, but I am asking because every answer is "as little as possible", yet relative to what is it 'little'? If you need to process inputChannelData you cannot do anything before you even know what it is. I would really appreciate an answer to this; I couldn't find one in the docs.)

I think in the case of the AudioProcessorGraph it would be best to call it directly.
Offloading the processing only makes sense if you can parallelise it, which the APG is not designed to do.

Computing the APG on a separate thread doesn't speed it up. On the contrary, you run the risk that this thread doesn't get the same priority as the audio thread, and you add waiting time for synchronising those two threads.
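
For what it's worth, here is a hedged sketch of what calling the graph directly from the device callback could look like. It assumes the JUCE 7 AudioIODeviceCallback interface, the AudioBuffer constructor that refers to externally owned channel pointers, and a graph that needs no live input; in a real app you would more likely hand the graph to a juce::AudioProcessorPlayer, which does this bookkeeping for you:

```cpp
#include <JuceHeader.h> // or the individual juce_audio_devices / juce_audio_processors headers

// Sketch: process a juce::AudioProcessorGraph directly inside the device callback
// by wrapping the output channel pointers in an AudioBuffer that refers to
// (rather than copies) that memory.
class GraphCallback : public juce::AudioIODeviceCallback
{
public:
    explicit GraphCallback (juce::AudioProcessorGraph& g) : graph (g) {}

    void audioDeviceAboutToStart (juce::AudioIODevice* device) override
    {
        graph.setPlayConfigDetails (0, device->getActiveOutputChannels().countNumberOfSetBits(),
                                    device->getCurrentSampleRate(),
                                    device->getCurrentBufferSizeSamples());
        graph.prepareToPlay (device->getCurrentSampleRate(),
                             device->getCurrentBufferSizeSamples());
    }

    void audioDeviceStopped() override { graph.releaseResources(); }

    void audioDeviceIOCallbackWithContext (const float* const*, int,
                                           float* const* outputChannelData, int numOutputChannels,
                                           int numSamples,
                                           const juce::AudioIODeviceCallbackContext&) override
    {
        juce::AudioBuffer<float> buffer (outputChannelData, numOutputChannels, numSamples);
        buffer.clear();                     // this graph takes no live input
        juce::MidiBuffer midi;
        graph.processBlock (buffer, midi);  // results land directly in outputChannelData
    }

private:
    juce::AudioProcessorGraph& graph;
};
```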

In theory yes, in practice that’s not quite how it works. However, that is at least a good absolute upper limit.

The actual amount of time you have to process at any one point can vary more significantly than I think most people realise. Some of the factors that play into determining this include:

  • The audio device
  • The audio device settings
  • The driver and driver type (CoreAudio / ASIO / WASAPI / etc.)
  • The OS
  • The application delivering the audio
  • Other applications running on the OS
  • Other hardware associated with the device running the application

Even if your audio device is calling back asking for 512 samples every time, if you were to look at the time stamp that those callbacks arrive you may well find that they are not regular. This means sometimes, on one particular callback, you may have more or less time to process than you might otherwise expect. So unfortunately there is no clear one answer that anyone can give.
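
If you want to observe that irregularity yourself, a hedged sketch (plain C++, the CallbackIntervalMonitor type is invented here) could record the gap between successive callbacks without locking or allocating on the audio thread, and let another thread read the extremes later:

```cpp
#include <atomic>
#include <chrono>

// Records the interval between successive audio callbacks.
// Call onCallback() from the audio thread; read minGapMs / maxGapMs elsewhere.
struct CallbackIntervalMonitor
{
    using Clock = std::chrono::steady_clock;

    void onCallback()
    {
        const auto now = Clock::now();

        if (hasPrevious)
        {
            const double gapMs = std::chrono::duration<double, std::milli> (now - previous).count();

            // Only the audio thread writes these, so relaxed loads/stores are enough.
            if (gapMs < minGapMs.load (std::memory_order_relaxed)) minGapMs.store (gapMs, std::memory_order_relaxed);
            if (gapMs > maxGapMs.load (std::memory_order_relaxed)) maxGapMs.store (gapMs, std::memory_order_relaxed);
        }

        previous = now;
        hasPrevious = true;
    }

    Clock::time_point previous {};
    bool hasPrevious = false;
    std::atomic<double> minGapMs { 1.0e9 };
    std::atomic<double> maxGapMs { 0.0 };
};
```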

For a simple example of this, imagine a driver delivers audio to a device in chunks of 1024 samples at a time, but your app has requested a buffer size of 512 samples. What might happen is that your app will first be asked for the first 512 samples, then, as soon as you've delivered those, regardless of how long it took, your app will be asked for the next 512 samples. If the combination of those two callbacks takes longer than 23.2 ms (assuming a 44.1 kHz sample rate) then you've missed your deadline and the driver will have to deliver something to the audio device, resulting in an audible drop out. However, if one of the callbacks took 15 ms (longer than the 11.6 ms you would expect for 512 samples) and the other 5 ms, then it may well pass without an audible drop out.

Now imagine the same scenario but the driver is delivering audio in chunks of 768 samples to the device. Again we will get two callbacks requesting 512 samples on each call, but now we only have 17.4 ms for both callbacks to complete rather than 23.2 ms!
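
Those figures are just the driver's chunk size divided by the sample rate; a quick check:

```cpp
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0;

    for (int chunkSize : { 512, 768, 1024 })
        std::printf ("%4d samples at %.0f Hz -> %.1f ms\n",
                     chunkSize, sampleRate, 1000.0 * chunkSize / sampleRate);

    // Prints roughly: 512 -> 11.6 ms, 768 -> 17.4 ms, 1024 -> 23.2 ms
}
```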

These examples aren’t entirely realistic, but I hope they demonstrate the basic problem in trying to answer the question. In general the answer is: it depends. The less time you take, the less risk there is. If you exceed the upper limit mentioned (11.6 ms in your example) you’ll very likely cause an audible drop out at some point. If your average time exceeds that upper limit then you’ll definitely cause a drop out at some point.

If you’re interested in the nitty-gritty details, I recall the talk linked below being very informative, but it will only help you understand the layers of complexity between audioDeviceIOCallback and the actual audio device. I don’t think it will help you get any closer to a more meaningful answer than has already been given.

Some other sources I can think of that are worth watching / reading
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

Hope that helps.

Thank you very much for the reply. I was under the blind assumption that the audio device would always get adjusted to our sample rate, but it's the exact opposite: we need to follow what it can support. Will check out the resources!