Offline rendering host for AudioProcessorGraph?


#1

Does one already exist in JUCE? I’m using an AudioProcessorPlayer for realtime rendering at the moment, but I don’t see any way to use it in a non-realtime mode. Calling setNonRealtime on the graph doesn’t seem to do anything in this context.

Assuming I have to roll my own, I guess looking at the AudioProcessorPlayer code is a good place to start. Apart from appropriate calls to ‘releaseResources’ and ‘prepareToPlay’, should allocating an AudioBuffer with the greatest number of channels used by any processor in the graph and then calling ‘processBlock(…)’ on the graph do the trick for the main chunk of work?
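
Something like this sketch is what I’m picturing, for reference (renderOffline isn’t an existing JUCE helper, and the writer setup is assumed to happen elsewhere):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>
#include <juce_audio_formats/juce_audio_formats.h>

// Hypothetical offline render loop: pull blocks from an already-connected
// graph and hand them to an AudioFormatWriter opened elsewhere.
static void renderOffline (juce::AudioProcessorGraph& graph,
                           juce::AudioFormatWriter& writer,
                           juce::int64 totalSamples,
                           double sampleRate, int blockSize, int numChannels)
{
    graph.setNonRealtime (true);
    graph.setPlayConfigDetails (numChannels, numChannels, sampleRate, blockSize);
    graph.prepareToPlay (sampleRate, blockSize);

    juce::AudioBuffer<float> buffer (numChannels, blockSize);
    juce::MidiBuffer midi;

    for (juce::int64 pos = 0; pos < totalSamples; pos += blockSize)
    {
        buffer.clear();                     // or fill with input if the graph consumes audio
        graph.processBlock (buffer, midi);  // render one block through the whole graph
        midi.clear();

        auto numThisTime = (int) juce::jmin ((juce::int64) blockSize, totalSamples - pos);
        writer.writeFromAudioSampleBuffer (buffer, 0, numThisTime);
    }

    graph.releaseResources();
}
```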


#2

I think the graph itself shouldn’t behave differently in realtime or non-realtime. It is the individual AudioProcessors that sometimes need to change behaviour.

The biggest difference: in realtime, if worst comes to worst, a processor would rather send zeros than block, whereas in non-realtime it has to block, otherwise you will get only a few samples with a lot of zeros in between.

In a bespoke mini-host I used BufferingAudioSource to spread the reading and processing load between threads. For that I had to add BufferingAudioSource::waitForNextAudioBlockReady(), which luckily made it into JUCE.
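
Roughly what that looked like (simplified from my actual host; readerSource, blockSize etc. are placeholders for your own setup):

```cpp
// Read-ahead happens on a background thread; the render loop waits until
// samples are actually ready instead of accepting silence.
juce::TimeSliceThread readAheadThread ("read-ahead");
readAheadThread.startThread();

juce::BufferingAudioSource buffered (readerSource.get(),    // e.g. an AudioFormatReaderSource
                                     readAheadThread,
                                     false,                 // we keep ownership of the source
                                     (int) sampleRate * 2); // ~2 seconds of read-ahead

buffered.prepareToPlay (blockSize, sampleRate);

juce::AudioBuffer<float> block (2, blockSize);
juce::AudioSourceChannelInfo info (&block, 0, blockSize);

// In non-realtime we can afford to block until the background thread
// has actually filled the buffer:
if (buffered.waitForNextAudioBlockReady (info, 500))  // timeout in milliseconds
    buffered.getNextAudioBlock (info);
```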

Another thing to mention, if you have automation: since JUCE updates processor parameters only at processBlock boundaries, it is advisable to stick to small buffer sizes, no matter how tempting it is to save CPU with longer buffers.
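
For illustration, the kind of thing I mean (gainParam and automationValueAt() stand in for however you store and look up your automation data):

```cpp
const int blockSize = 64;  // small block, so automation is sampled often

for (juce::int64 pos = 0; pos < totalSamples; pos += blockSize)
{
    // A value set here only takes effect at the next processBlock call,
    // so 64 samples gives roughly 1.5 ms automation resolution at 44.1 kHz.
    gainParam->setValue ((float) automationValueAt (pos / sampleRate));

    buffer.clear();
    processor.processBlock (buffer, midi);
    // ... write the rendered block out ...
}
```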


#3

Super useful info. Thank you very much!

Would be nice if that waitForNextAudioBlockReady was exposed on the TransportSource 🙂


#4

Is there any equivalent blocking write call for ThreadedWriters?

I wonder how much of a performance hit I will take if I just do my file IO on the same thread as my processing… will it be massive? (I’m writing an iOS/Android app)


#5

Not sure if I understand, but the writer only writes if data is available.

There are two types of pipelines: pushing and pulling. For playback JUCE uses a pulling pipeline, i.e. the audio device (or your renderer) pulls samples from the graph, AudioSources and AudioProcessors.

The ThreadedWriter is meant for pushing pipelines: e.g. when audio is coming in that you want to write out, you can feed it into the ThreadedWriter.

That is also why it wouldn’t make sense for the ThreadedWriter to block. If you want to block, use a normal AudioFormatWriter.
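
A minimal push-style sketch, assuming you have already created the underlying writer (e.g. via WavAudioFormat::createWriterFor) and hold it in a unique_ptr:

```cpp
// The TimeSliceThread does the actual disk work; the audio side just
// pushes blocks into the writer's FIFO.
juce::TimeSliceThread writeThread ("disk writer");
writeThread.startThread();

juce::AudioFormatWriter::ThreadedWriter threadedWriter (writer.release(),  // takes ownership
                                                        writeThread,
                                                        32768);            // FIFO size in samples

// ... then, whenever audio occurs (e.g. in the audio callback):
threadedWriter.write (inputChannelData, numSamples);  // returns false if the FIFO is full
```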


#6

I know writing will only happen when there is data available. My problem is that my calls to write are happening too quickly: I think the buffers are getting overwritten before they get written to disk, and I get dropped data as a result. My 1:45 min of audio is now 1:30 min. I was hoping for a blocking write call that would block if the buffer was full and more time was needed for disk output.

The problem doesn’t exist if I don’t use multi-threading (i.e. just use a straight AudioFormatWriter, which is what I went with as a result). I was just wondering whether that approach takes a performance hit, or whether it doesn’t really make much difference. I.e. would a blocking multi-threaded approach be pretty much as efficient as just doing it synchronously on the same thread, or would the multi-threaded approach be significantly faster? Thinking about multiple cores and all that jazz… sorry, I’m not a super guru on the topic for sure.
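
For what it’s worth, this is the kind of blocking wrapper I was hoping existed, assuming write() returns false without consuming anything when its FIFO is full (worth verifying against the JUCE source). Obviously only safe off the realtime audio thread, e.g. in an offline render loop:

```cpp
// Hypothetical blocking wrapper: retry until the background thread has
// drained enough of the FIFO to accept the whole block.
static void writeBlocking (juce::AudioFormatWriter::ThreadedWriter& threadedWriter,
                           const float* const* data, int numSamples)
{
    while (! threadedWriter.write (data, numSamples))
        juce::Thread::sleep (1);  // give the disk thread time to catch up
}
```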