I’ve written code to do non-realtime rendering of an audio processor graph containing audio processors that include an AudioFormatReader buffered by a BufferedAudioSource. Realtime playback works fine, and so did non-realtime rendering while I had logging code in the loop. Without the logging code, there are dropouts in the rendered audio. Adding a Thread::sleep with a non-zero argument seems to cure the problem, and decreasing the buffer size on the BufferedAudioSource instances inside the previously mentioned audio processors improves but does not entirely eliminate it. I’m loath to just fix it by adding the Thread::sleep, as it may not work on slower machines. Any ideas? Thanks in advance!
The BufferedAudioSource class is designed for real-time use, so it’ll deliberately drop audio if it’s being asked for data more quickly than its background thread can read it.
If you’re rendering at non-realtime speed, just don’t use a BufferedAudioSource. Even if I gave it an option to make it handle non-realtime situations, it’d still be more efficient just not to use one.
Mrblasto, did you manage to solve the problem?
I’m currently looking for a correct way to do non-realtime rendering of an audio processing graph.
I saw something in the AudioProcessor class, like the setNonRealtime function.
But I don’t have any other clues…
Julian, if we don’t have to use the BufferedAudioSource, what’s the best solution?
And… by the way (I just came back!)…
Happy new year!!!
Happy new year!
The solution is to simply not use BufferedAudioSource. Its only purpose is to smooth realtime playback, so if you’re building a playback chain for non-realtime rendering, just don’t put one in.
Excellent! Thank you!
Could you give me some hints?
I’m a bit confused:
I have this audioProcessingGraph. I’d like to add a classic bounce function.
So, I set it to non-realtime by calling setNonRealtime(true).
The AudioRecorder class in the juce demo works with a thread, which takes the samples coming out of the circular buffer and packs them into a wave file.
But this is done in real time.
How can I achieve it at different speeds (faster)?
Can you help me with some pseudo-code or some schematics?
Just call the graph’s processBlock method in a loop and stick the results into an AudioFormatWriter.