I’m working on an application that takes a microphone input and sends output to headphones, as well as to a virtual audio cable feeding Skype/Zoom etc. On macOS this works at the system level via multi-output devices. On Windows I’ve been using ASIO4ALL to get this behavior, treating devices as “channels” under the ASIO4ALL driver. Is it possible to get this behavior using the WASAPI driver somehow? I am able to compromise on latency for the audio cable part, so one solution I’ve considered is to treat the headphones as the primary device, then save that output into a circular buffer for the audio cable to read from in its own callback. Has anyone done something like this and had any luck?
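Roughly, the buffering scheme I have in mind is a single-producer/single-consumer ring buffer shared between the two callbacks. A minimal sketch in plain C++ (names are my own, not from any library; real code would need underrun/overrun handling):

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer/single-consumer ring buffer: the headphone (primary)
// callback writes, the virtual-cable callback reads. Capacity must be a
// power of two so indices can wrap with a simple mask.
class SpscRingBuffer {
public:
    explicit SpscRingBuffer(size_t capacityPow2)
        : buffer(capacityPow2, 0.0f), mask(capacityPow2 - 1) {}

    // Called from the headphone callback (producer). Returns samples written.
    size_t write(const float* src, size_t n) {
        const size_t w = writePos.load(std::memory_order_relaxed);
        const size_t r = readPos.load(std::memory_order_acquire);
        const size_t freeSpace = buffer.size() - (w - r);
        const size_t toWrite = n < freeSpace ? n : freeSpace;
        for (size_t i = 0; i < toWrite; ++i)
            buffer[(w + i) & mask] = src[i];
        writePos.store(w + toWrite, std::memory_order_release);
        return toWrite;
    }

    // Called from the virtual-cable callback (consumer). Returns samples read;
    // a real implementation would zero-fill the remainder on underrun.
    size_t read(float* dst, size_t n) {
        const size_t r = readPos.load(std::memory_order_relaxed);
        const size_t w = writePos.load(std::memory_order_acquire);
        const size_t avail = w - r;
        const size_t toRead = n < avail ? n : avail;
        for (size_t i = 0; i < toRead; ++i)
            dst[i] = buffer[(r + i) & mask];
        readPos.store(r + toRead, std::memory_order_release);
        return toRead;
    }

private:
    std::vector<float> buffer;
    const size_t mask;
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};
};
```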
Yes, you can. You’ll need to give careful consideration to how you handle clock drift; each audio device runs on its own clock, so their actual sample rates will differ slightly.
Thanks for the tip! I wouldn’t have thought of that off the top of my head, but it makes sense. Are there common strategies for this, or does everyone just do it a little differently?
You would need to use fractional sample rate conversion as part of your buffering scheme.
I’ve found that the difficulty lies in measuring the actual sample rate ratio between physical devices; the callback timing can be very jittery.
For anyone who hits this same problem later: the best solution I found was to use/abuse the dsp::DelayLine class to save samples and send them to the second device, where latency is not as important. For my application the clocks were close enough that I didn’t end up needing fractional resampling or clock estimation, but DelayLine also appears to support fractional-sample interpolation, so something like that could be added if needed.
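For illustration, the fractional read boils down to something like this hand-rolled linear-interpolation reader (this is a standalone sketch of the concept, not the actual DelayLine API):

```cpp
#include <cstddef>
#include <vector>

// Sketch of a fractional-rate reader over a buffer of recorded samples:
// the read head advances by a ratio close to 1.0 per output sample and
// linearly interpolates between the two neighbouring input samples.
class FractionalReader {
public:
    explicit FractionalReader(std::vector<float> samples)
        : buf(std::move(samples)) {}

    // Produce up to n output samples, advancing the read head by `ratio`
    // per sample. Returns how many samples were actually produced before
    // the buffer ran out of lookahead.
    size_t read(float* dst, size_t n, double ratio) {
        size_t produced = 0;
        while (produced < n) {
            const size_t i = static_cast<size_t>(pos);
            if (i + 1 >= buf.size())
                break; // need buf[i] and buf[i + 1] to interpolate
            const double frac = pos - static_cast<double>(i);
            dst[produced++] =
                static_cast<float>((1.0 - frac) * buf[i] + frac * buf[i + 1]);
            pos += ratio;
        }
        return produced;
    }

private:
    std::vector<float> buf;
    double pos = 0.0;
};
```

Feeding the ratio from a drift estimator into `read` would close the loop: a ratio slightly above 1.0 consumes the buffer faster, slightly below 1.0 consumes it slower.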