Dropout and weird delay caused by AudioIODeviceCombiner?

Hi there,

I’m trying to track down a hard-to-reproduce issue: I have an audio application that processes live audio. I have two different audio devices selected for input and output - their clocks are not synced - but they run at the “same” sample rate.

After quite some time (1.5-2 h) I hear a crackling sound, and after that the audio output is noticeably delayed (by 0.25 s or so).

I had a quick glance at the implementation of AudioIODeviceCombiner. I can imagine that it was - let’s say - interesting to write. From my understanding, the implementation uses FIFOs of bufferSize*3+1 samples. I could not spot anything about resampling (in case the clocks drift). So I would expect that a clock drift causes a dropout, after which processing continues without a dramatic delay.
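For what it’s worth, here’s a toy model of my mental picture - this is not JUCE’s code, and the fill-level bookkeeping plus the 100 ppm drift figure are just assumptions: with a fixed FIFO and no resampling, drift can only push the fill level against one of the FIFO’s bounds, so the added latency should stay bounded by the FIFO size.

```cpp
// Toy model (not JUCE's implementation): two free-running clocks around a
// fixed-size FIFO, no resampling. The 100 ppm drift is an assumed figure.
#include <cstdio>

int main()
{
    const double fIn  = 44100.0;            // input device clock
    const double fOut = 44100.0 * 1.0001;   // output device clock, 100 ppm fast
    const int bufferSize   = 256;
    const int fifoCapacity = bufferSize * 3 + 1;   // 769, as in my case

    double fill = fifoCapacity / 2.0;       // start roughly half full
    int glitches = 0;

    for (int second = 0; second < 2 * 60 * 60; ++second)   // ~2 hours
    {
        fill += fIn - fOut;                 // net samples gained/lost per second

        if (fill < 0.0 || fill > fifoCapacity)   // under-/overrun -> audible glitch
        {
            ++glitches;
            fill = fifoCapacity / 2.0;      // FIFO re-centres after the dropout
        }
    }

    std::printf ("glitches: %d, final fill: %.0f samples\n", glitches, fill);
    // The fill level (i.e. the extra latency) never exceeds fifoCapacity,
    // which is well under 20 ms at 44.1 kHz - nowhere near 0.25 s.
}
```

So if the FIFO were the whole story, I’d expect periodic crackles but no quarter-second delay.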

To cause the delay I’m hearing, one would need a buffer of at least ~10k samples. In the case I’m observing, the bufferSize is 256 and the FIFO size is 769 - that’s not even close. (And I also don’t have any delay lines in my processing.)
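Quick sanity check on those numbers (assuming a 44.1 kHz sample rate, which I haven’t stated above):

```cpp
// Rough numbers behind the claim above (the 44.1 kHz rate is an assumption).
#include <cstdio>

int main()
{
    const double sampleRate   = 44100.0;
    const double delaySeconds = 0.25;
    const int    bufferSize   = 256;

    std::printf ("observed delay: ~%.0f samples\n", delaySeconds * sampleRate); // ~11025
    std::printf ("FIFO size:       %d samples\n",   bufferSize * 3 + 1);        // 769
}
```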

And now to my questions:
1.) Has anyone noticed a similar issue before?
2.) Is it intentional that clock drifts are simply ignored? Shouldn’t there be some kind of resampling? (See the sketch after this list for what I have in mind.)
3.) Any ideas on how to investigate this further?
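To make question 2 concrete, here is the kind of correction I have in mind - purely my own sketch, not anything from JUCE: estimate the drift from the FIFO fill level and nudge a resampling ratio so the fill stays near its target. The proportional gain and the naive linear interpolator are illustrative assumptions only.

```cpp
// My own sketch of FIFO-driven drift compensation (not JUCE's API).
#include <cstddef>
#include <vector>

class DriftCompensator
{
public:
    explicit DriftCompensator (int targetFifoFill) : target (targetFifoFill) {}

    // Call once per block with the current FIFO fill level. Returns the ratio
    // to resample the incoming block by before pushing it into the FIFO
    // (1.0 = no correction).
    double updateRatio (int currentFifoFill)
    {
        const double error = double (currentFifoFill - target);
        ratio = 1.0 - gain * error;   // FIFO filling up -> shrink the input a little
        return ratio;
    }

    // Naive linear-interpolation resampler, only to make the sketch complete.
    std::vector<float> resample (const float* in, size_t numIn) const
    {
        std::vector<float> out;
        out.reserve (size_t (double (numIn) * ratio) + 1);

        for (double pos = 0.0; pos + 1.0 < double (numIn); pos += 1.0 / ratio)
        {
            const auto i    = size_t (pos);
            const auto frac = float (pos - double (i));
            out.push_back (in[i] * (1.0f - frac) + in[i + 1] * frac);
        }

        return out;
    }

private:
    int target;
    double ratio = 1.0;
    double gain  = 1.0e-6;   // tiny, so the correction stays inaudible
};
```

A real implementation would want a proper interpolator and some smoothing of the fill-level estimate, but that’s the general idea.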

Thanks for the input,
Ben


The question could be: why is the AudioIODeviceCombiner class needed in the first place? The CoreAudio layer allows you to aggregate separate devices, so this could be avoided entirely (see this thread: Estimating JUCE behavior to develop a generic architecture file for Faust).
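For reference, here is a rough sketch of creating such an aggregate device with the public CoreAudio C API (callable from C++). The name/UID strings are placeholders, error handling is omitted, and the dictionary keys should be double-checked against AudioHardware.h - but note the per-sub-device drift compensation flag, which addresses exactly the clock-drift issue discussed above:

```cpp
// Rough sketch: build an aggregate device from two existing device UIDs.
#include <CoreAudio/CoreAudio.h>

static AudioDeviceID createAggregate (CFStringRef inputUID, CFStringRef outputUID)
{
    // Each sub-device is described by a small dictionary holding its UID.
    auto makeSubDevice = [] (CFStringRef uid)
    {
        CFMutableDictionaryRef d = CFDictionaryCreateMutable (nullptr, 0,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFDictionarySetValue (d, CFSTR (kAudioSubDeviceUIDKey), uid);

        // Ask CoreAudio to resample the non-clock-master device to hide drift.
        int enabled = 1;
        CFNumberRef n = CFNumberCreate (nullptr, kCFNumberIntType, &enabled);
        CFDictionarySetValue (d, CFSTR (kAudioSubDeviceDriftCompensationKey), n);
        CFRelease (n);
        return d;
    };

    const void* subDevices[] = { makeSubDevice (inputUID), makeSubDevice (outputUID) };
    CFArrayRef subDeviceList = CFArrayCreate (nullptr, subDevices, 2, &kCFTypeArrayCallBacks);

    CFMutableDictionaryRef description = CFDictionaryCreateMutable (nullptr, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue (description, CFSTR (kAudioAggregateDeviceNameKey), CFSTR ("My Aggregate"));
    CFDictionarySetValue (description, CFSTR (kAudioAggregateDeviceUIDKey),  CFSTR ("com.example.my-aggregate"));
    CFDictionarySetValue (description, CFSTR (kAudioAggregateDeviceSubDeviceListKey), subDeviceList);
    CFDictionarySetValue (description, CFSTR (kAudioAggregateDeviceMasterSubDeviceKey), outputUID);

    AudioDeviceID aggregate = kAudioObjectUnknown;
    const OSStatus err = AudioHardwareCreateAggregateDevice (description, &aggregate);
    (void) err;   // error handling omitted in this sketch

    CFRelease (description);
    CFRelease (subDeviceList);
    CFRelease (subDevices[0]);
    CFRelease (subDevices[1]);
    return aggregate;
}
```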

Good point.
I think the reason might be that the aggregated devices suffered from high latency. But I think that was improved some time ago. I’ll need to check how bad this really is nowadays.

You can use Apple’s HALLab test application to display all input/output latencies.
