Offline Rendering?

Hey everyone,

Currently, I have a JUCE project set up that uses a juce::AudioProcessorGraph and contains VST instances… and I was wondering if there is a way to do offline rendering. If not, are there any examples of this available? Or if someone could point me in the right direction, I’d greatly appreciate it!

I’m running a graph manually in a background thread.

You’ll need an appropriately sized AudioSampleBuffer and a MidiBuffer (I don’t actually use the MidiBuffer) to supply to the graph. You need to prepare the graph with something like this:

//call this once: 2 ins, 2 outs, plus the sample rate and block size you'll render with
graph->setPlayConfigDetails (2, 2, this->sampleRate, this->numberOfSamples);

//call these each time rendering starts
this->graph->releaseResources();
this->graph->prepareToPlay (this->sampleRate, this->numberOfSamples);

Then you can loop over processBlock in your thread.

 graph->processBlock(this->buff, this->midiBuff);

You might want to do some extra steps in the processing loop, like keeping track of time and clearing the buffer, but this is the gist of how I got a graph to run offline.
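
For illustration, a minimal render loop might look something like this (totalLengthInSamples is a placeholder for however long you want the render to run; adapt it to your own setup):

this->buff.setSize (2, this->numberOfSamples);

for (juce::int64 position = 0; position < totalLengthInSamples; position += this->numberOfSamples)
{
    this->buff.clear();      // clear so stale samples from the previous block aren't reprocessed
    this->midiBuff.clear();

    graph->processBlock (this->buff, this->midiBuff);

    // ...hand this->buff to whatever consumes the rendered audio,
    // e.g. an AudioFormatWriter (discussed later in this thread).
}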

Thanks Graeme!

Just one question on the note of having an appropriately sized buffer: how would I determine what size is needed (since there’s no audio device to base it on when rendering offline)?

Try bufferSize = pow (2, N) where N = [8…12], i.e. anywhere from 256 to 4096 samples.

Thanks, fellas! I finally got around to implementing an offline rendering scheme based on the information given here.

Graeme, I have a question for you: I’m currently rendering a graph offline which contains an AudioProcessor that wraps an AudioTransportSource [based on your suggestion here], and so far the only way I can get samples from this processor is by “starting” it. The final result (the written audio file) is a nasty, short-run, distorted version of the original AudioFormatReaderSource (audio file) associated with this “AudioTransportSource processor”.

I haven’t had time to test other things in the graph to see whether I get the same result with this offline renderer, but have you ever faced this issue before? I’m not sure what I should be looking for to get this “AudioTransportSource processor” working offline properly…

Edit: It seems the buffer size affects how “short-run” my result is… and using too small or too large a value for the buffer size kills my app!

Just some thoughts, but two spring to mind. Are you using a background thread to read the audio file in AudioTransportSource::setSource? If so, the thread might not be fetching samples quickly enough to keep up with the rendering thread, which would lead to periods of silence. Try a read-ahead buffer size of 0 and no background thread, so the file’s contents are fetched on the caller’s thread.
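
For reference, that call would look something like this (readerSource and reader stand in for your existing AudioFormatReaderSource and its AudioFormatReader):

// 0 read-ahead and no TimeSliceThread: samples are fetched synchronously
// on whichever thread calls getNextAudioBlock.
transportSource.setSource (readerSource, 0, nullptr, reader->sampleRate);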

The other thing is: are you passing the buffer size on correctly all the way down the chain, i.e. does your AudioProcessor call prepareToPlay on the transport source from its own prepareToPlay method? That needs to be done so memory is correctly allocated all the way down to the source reader.

Just some things worth checking.

Dave’s suggestions are right. You definitely don’t want to be buffering the transport source for offline processing. You do need to call start on the transport, but that only flips some booleans so that getNextAudioBlock will actually attempt to get your data. My setup has changed so I can’t post exact code, but it’s pretty straightforward with these things in mind.
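
In other words, something like this before entering the render loop (transport standing in for your AudioTransportSource):

transport.setPosition (0.0);   // rewind to the start of the source
transport.start();             // only sets the playing flag; no device is involved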

Also note that the prepareToPlay methods for AudioProcessor and AudioSource are backwards! That could explain the garbled sound.

Ah, I wasn’t aware that I had to remove the background thread in its entirety before doing offline rendering (I was simply stopping the background thread); my system definitely works better now. 8)

That is indeed what I am doing.

I’m not sure I understand exactly, but my wrapping processor’s prepareToPlay method simply calls my AudioTransportSource’s prepareToPlay method (perhaps it was assumed I was calling the transport::prepareToPlay() elsewhere?).

And so, even with the thread-related fixes, I’m not quite certain why the rendered sound file isn’t identical to the original file, even when the same specs are used for both (i.e. bit depth and sampling rate). I’ve attempted using various buffer sizes, but still to no avail.

I can’t really post code, but maybe someone can enlighten me by listening to the generated sound file: [original] and [rendered offline] (warning: loud, undesirably and unintentionally distorted, and looped to create a 5-second-long file).

The two methods are backwards in their parameter listings: AudioProcessor::prepareToPlay is (sampleRate, bufferSize) and AudioSource::prepareToPlay is (bufferSize, sampleRate).
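
So a wrapping processor would forward the call with the arguments swapped, roughly like this (MyTransportProcessor is a placeholder name):

// AudioProcessor::prepareToPlay receives (sampleRate, samplesPerBlock)...
void MyTransportProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // ...but AudioSource::prepareToPlay expects (samplesPerBlock, sampleRate).
    transportSource.prepareToPlay (samplesPerBlock, sampleRate);
}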

Perhaps I should note that the only way I was able to get connections to happen was by attaching the graph I pass to my offline rendering class to an AudioProcessorPlayer (which has to be passed to an AudioDeviceManager). I’m not sure what else I can do to get the connections to work without needing an AudioProcessorPlayer and an AudioDeviceManager, so that I can rule these out of my list of possible issues…

Edit: Ah right, currently my setup is using the “AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode” as the means of getting audio data…

Ah, I see. In fact, I have these parameters set up and used correctly.

That could cause problems; IIRC the device manager will update the graph with its own buffer size and sample rate settings. You shouldn’t have to add them. How are you setting up the graph?

That makes sense…

The graph instance I have essentially connects my AudioTransport processor to an AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode… That connection, it seems, can only happen if the graph instance is attached to an AudioProcessorPlayer, which is attached to a device manager.

Ah. Yeah, don’t connect it to the audio output. Instead, create a processor that uses an AudioFormatWriter to write to the file, and place it at the end of the chain, or wherever’s appropriate for your needs.
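
As a rough sketch, the processBlock of such a writer processor can be as simple as this (WriterProcessor is a hypothetical name, and writer an AudioFormatWriter the processor owns, created elsewhere):

// Sink node at the end of the chain: writes each block it receives to disk.
void WriterProcessor::processBlock (juce::AudioSampleBuffer& buffer, juce::MidiBuffer&)
{
    writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
}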

My thoughts exactly.

Thanks for the help - much appreciated! :slight_smile:

Always happy to help out where I can. Cheers!

Any takes on why the generated audio file would be distorted?

Different buffer sizes don’t seem to make any difference… so the only remaining culprit I can see is the setup described above.

Are you actually doing any processing in your chain at the moment? Does this work if you just render unmodified audio to the file? What exactly does your audio chain look like? Try to simplify this as much as possible so you can just copy a file from one location to another; that should be possible with just a few classes.

Another thought: are you using a ThreadedWriter? If so, check that the write method returns true, or you may lose samples if the write thread can’t keep up with the rendering thread.

There is also AudioFormatWriter::writeFromAudioSampleBuffer which may be of use to you.
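
For instance, creating and using a WAV writer could look roughly like this (outputFile, sampleRate, and buffer are assumed from your own setup; error handling omitted):

juce::WavAudioFormat wavFormat;

// createWriterFor takes ownership of the stream when it succeeds.
std::unique_ptr<juce::AudioFormatWriter> writer (
    wavFormat.createWriterFor (new juce::FileOutputStream (outputFile),
                               sampleRate, 2, 16, juce::StringPairArray(), 0));

if (writer != nullptr)
    writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());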

Hi dave96, that is indeed what I’ve been trying to do.

My chain is as simple as possible; I have an AudioTransportSource wrapped as a processor, and am connecting that to my offline writer class.

And so, to answer one of your questions directly (“Does this work if you just render unmodified audio to the file?”): indeed, the writer successfully creates an audio file and writes to it, but the result is undesirably and unintentionally distorted compared with the original.

I’m not using a ThreadedWriter, as I decided against it to keep things even simpler from a design standpoint for future work…

I shall try that method out.