Hi all, I would like to get recommendations on how to proceed in my application:
I am making a desktop application that needs to handle two different audio devices: one device captures the audio (in this case it is a virtual cable) and the other plays back what the first one is capturing.
I created two AudioDeviceManagers, but when I try to process the audio with getNextAudioBlock() it only lets me use one device.
Please ignore all the sync stuff for now, I will fix that later…
Thank you very much.
Thank you very much for your reply!
Yes, in fact that’s what I’m trying to do. I also thought of using AudioIODeviceCallback, but I assumed that approach only works for playing previously recorded audio, not audio in real time. You have made it much clearer for me, thank you very much.
Could you please tell me how to use the AudioProcessorPlayer ?
If you are on macOS: have you heard of aggregate audio devices? They’re built into the OS, and if you are not building a commercial product but just a tool for yourself or your team, that might be the best option. It is a pain to set up (compared to Apple’s usual eye for detail) and doesn’t offer a whole lot of configuration. But since you are writing the application that actually uses the audio device, you should be able to make good use of it.
Thank you very much for the recommendation! But unfortunately I am using Windows.
Anyway, I hope someone else can benefit from your advice.
Then I agree with the poster above: the AudioSource classes are probably your best shot. You can implement a class that is both an AudioSource and an AudioIODeviceCallback.
You should consider your playback device (and its AudioIODeviceCallback) to be the main audio thread. Your custom input AudioSource should be treated like any other thread pushing audio data into the pipeline. If your application is small and your buffer size is greater than roughly 128 samples, you might even get away with a CriticalSection as the thread-safety tool, so this could actually be very easy.
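To make the "input callback pushes, playback callback pulls" idea concrete, here is a minimal standalone sketch. It is not the JUCE API: `std::mutex` stands in for a `CriticalSection`, and the class and method names (`PassThroughSource`, `pushCaptured`, `getNextBlock`) are made up for illustration; in a real JUCE app the producer side would live in the capture device's `audioDeviceIOCallback` and the consumer side in your `AudioSource::getNextAudioBlock`.

```cpp
#include <algorithm>
#include <cstddef>
#include <mutex>
#include <vector>

// Sketch of the pattern: the capture device's callback pushes samples,
// the playback device's callback pulls them, with a lock in between.
class PassThroughSource
{
public:
    // Called from the *capture* device's callback with fresh samples.
    void pushCaptured (const float* data, std::size_t numSamples)
    {
        std::lock_guard<std::mutex> lock (mutex);
        buffer.insert (buffer.end(), data, data + numSamples);
    }

    // Called from the *playback* device's callback; fills `out`,
    // padding with silence if capture hasn't delivered enough yet.
    void getNextBlock (float* out, std::size_t numSamples)
    {
        std::lock_guard<std::mutex> lock (mutex);
        std::size_t available = std::min (numSamples, buffer.size());
        std::copy_n (buffer.begin(), available, out);
        std::fill (out + available, out + numSamples, 0.0f);
        buffer.erase (buffer.begin(), buffer.begin() + available);
    }

private:
    std::mutex mutex;
    std::vector<float> buffer; // grows under the lock; fine for a sketch
};
```

Holding a lock and allocating in an audio callback is exactly what you'd replace with the lock-free FIFO approach in a serious build, but at small buffer sizes this gets a prototype running.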
I’d implement your AudioSource to rotate audio data through a fixed-size AbstractFifo (e.g. 8192 samples) and overwrite everything that didn’t get picked up by your audio thread. With the AbstractFifo you could get rid of the CriticalSection altogether. If numReadyForRead becomes zero, just flush out zero values, so your playback eventually outputs silence if e.g. the capture device is disabled.
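The FIFO behaviour described above can be sketched without JUCE as a hand-rolled ring buffer (`juce::AbstractFifo` manages the read/write indices for you; the class below only illustrates the two behaviours named in the post: overwriting what the reader didn't pick up, and handing out silence when `numReadyForRead` hits zero):

```cpp
#include <cstddef>
#include <vector>

// Fixed-size sample FIFO sketch: writes that outrun the reader drop the
// oldest samples, and an empty FIFO yields zeros instead of blocking.
class SampleFifo
{
public:
    explicit SampleFifo (std::size_t capacity) : data (capacity, 0.0f) {}

    void write (const float* src, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            data[writePos] = src[i];
            writePos = (writePos + 1) % data.size();
            if (numReady == data.size())            // full: drop oldest
                readPos = (readPos + 1) % data.size();
            else
                ++numReady;
        }
    }

    void read (float* dst, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            if (numReady == 0)                      // empty: flush zeros
            {
                dst[i] = 0.0f;
                continue;
            }
            dst[i] = data[readPos];
            readPos = (readPos + 1) % data.size();
            --numReady;
        }
    }

    std::size_t numReadyForRead() const { return numReady; }

private:
    std::vector<float> data;
    std::size_t readPos = 0, writePos = 0, numReady = 0;
};
```

Unlike this sketch, AbstractFifo's prepareToWrite/finishedWrite bookkeeping is safe for one reader and one writer thread without a lock, which is what lets you drop the CriticalSection.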
You can use a ResamplingAudioSource if the two devices run at different sample rates. Note that your AbstractFifo size should be based on your source’s sample rate, since that determines the number of samples you need to consume per callback on the audio thread.
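The sizing point is just rate arithmetic; a quick helper (the function name and the example rates below are my own, not anything from JUCE) shows how many source-rate samples one playback callback consumes:

```cpp
#include <cmath>

// How many frames at the *source* sample rate does one playback callback
// of `deviceBlockSize` frames at `deviceRate` consume? Round up so the
// FIFO is sized generously.
int sourceFramesPerCallback (int deviceBlockSize, double deviceRate, double sourceRate)
{
    return (int) std::ceil (deviceBlockSize * sourceRate / deviceRate);
}
```

For example, a 512-frame callback on a 48 kHz playback device pulling from a 44.1 kHz source consumes ceil(512 × 44100 / 48000) = 471 source frames per callback, so an 8192-sample FIFO holds a comfortable number of callbacks' worth of audio.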
Sorry to bump this thread. How did it go?
Hi! I’m sorry to disappoint you, but I had to stop the project for a while, so I didn’t make much progress.
Anyway, if you manage to do it or something similar, I would be very grateful if you could share it with me; I will resume the project as soon as I can.