Offline Rendering


How do sequencers render offline faster than real time?

For example, is the tempo doubled and the sample rate divided by two?

How would you do this with the JUCE audio engine?


How does the audio plugin host example actually tick?

Samples are not bound to wallclock time. You can render them as fast as possible. 

You don't have to change anything, all DSP works as close to realtime as it gets - your host then schedules your rendered samples to your audio device, which emits them in time to generate an analog signal.


Let me be a little more specific.

Let's say I have host code that plays a MIDI file through 16 plugins, one for each MIDI channel, at a tempo of 120 bpm.

I want to save the audio output offline at 2x, 4x, 8x as fast as real time.

The processBlock code has

pThreadWriter->write( buffer.getArrayOfWritePointers(), buffer.getNumSamples() );

So how can the samples be produced faster without speeding up the tempo?


Usually the host just continuously calls processBlock without having to think about syncing it up with the audio interface; that is the main reason why you see the speedup, AFAIK.


The result is that the project can be rendered much faster than in real time. The rendering speed depends on every plugin in the project, so even if you write some "magic" offline algorithm that gives 10x speed with your plugin alone, other high-resource plugins in the project can easily pull that back down to 1.5x.


Thank you, but either I am not getting it or you are not getting it. I am not sending audio; I am sending time-stamped MIDI to plugins, which then send back audio samples.

Say my program sends a MIDI note on to a plugin and then a MIDI note off one second later and does this 60 times in a row.

That takes 60 seconds from start to finish.

processBlock writes out each set of samples it receives during those 60 seconds, so it still takes 60 seconds to create the file.

How can I speed this up so it takes less time, like 10 seconds?


It's you who isn't getting something (or not explaining your question very clearly?), but I'm not sure what you're missing!

If an audio device or plugin host calls processBlock then of course it'll do so at a fixed interval.

But if you're rendering then you don't use an audio device or host, you just call processBlock directly, repeatedly, in a loop. You don't have to wait in between each call, so it'll go as fast as the CPU can do it. Is that what you mean?


It's simply a matter of how quickly 60 seconds worth of data can be processed.  When working with a live audio stream that 60 seconds of data will of course take 60 seconds as you want to hear the sound properly.  But when processing offline it's just a matter of how fast those 60 seconds of data can be pushed through.  So if you're processing offline then you just loop the processing code with a thread and let it go as fast as it can.


So you are saying that if I have a 1 msec timer from my audio thread that I use to calculate tempo, which in turn determines how often I send the MIDI data, I could just send 4 times as many events during that time and would write the file 4 times as fast.

Well, I tried this, and the created wave file plays four times as fast. It does not work.

I think you are basing your thoughts on processing audio, not on audio created from plugins that process MIDI input into audio.


If it's a simple bounce from your own plugin/application then you're in full control to do whatever you want. I'm still not sure I understand you; good luck either way :)

In your latest case, to answer your original question: you can process e.g. 4 times as much data in that millisecond, but what happens if your app expects a multiplier of 4 and now and then the CPU can only manage 2x or 3x before your timerCallback arrives?


Huh? Either you do understand it, and have explained yourself very badly here, or you're completely muddled! What you just said doesn't seem to bear any relation to any advice that we gave you in this thread.. (!?)


You should never ever mix sample rendering and "wallclock time", and never use timers together with DSP. You get the tempo from the host, and then you calculate how many samples correspond to a 1 msec delay (or whatever interval you need). This way, your code can render at any tempo and always be perfectly in sync.


Please don't say it is a simple bounce. This is a host sending MIDI data to many plugins, receiving audio from them, mixing the audio, and then writing it to a file. If it were simple, I would not be here trying to figure this out.

Has anybody actually done this, or am I just receiving theory?


No, not quite. The timing of your audio should be driven purely by the number of samples processed and the current sample rate. The switch between realtime and offline is then just a matter of who is calling processBlock and how often. It sounds like your issue is at least partly due to relying on that timer (given that it's 1 msec in real time) to determine when events should occur.


Sorry, but this still does not help. If you push the MIDI data through to the plugins twice as fast, you get samples that reflect this.

As for my example above: if you have a MIDI file with a note on and a note off one second apart, and you send the note on and note off half a second apart to write the file twice as fast, the output is a file that plays twice as fast.

Again, has anybody tried this? I am thinking the actual samples have to be adjusted before being written.


I think maybe the problem is that none of us understand what it is you're actually trying to do.. Your descriptions are just really confusing.


The thing you continue to misunderstand is how to measure time. Stop measuring your time with real time. Measure time by the number of samples passing through processBlock, relative to the sample rate.


I agree it is confusing, and I am confused, so let me try one more time with exactly how my audio engine works. Maybe I should have implemented it another way, and maybe someone can point me in the right direction.

I have this as my audioDevice callback, and it is running continuously.

class CAudioProcessorPlayer : public AudioProcessorPlayer
{
public:
    CAudioProcessorPlayer() { m_playBackCounter = 0; }

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels, int numSamples) override;

    float m_playBackCounter;
};

void CAudioProcessorPlayer::audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                                   float** outputChannelData, int numOutputChannels, int numSamples)
{
    AudioProcessorPlayer::audioDeviceIOCallback (inputChannelData, numInputChannels,
                                                 outputChannelData, numOutputChannels, numSamples);

    const float ticksPerMsec = pAudioEngine->GetSampleRate() / 1000.0f;
    m_playBackCounter += numSamples;

    while (m_playBackCounter >= ticksPerMsec)
    {
        m_playBackCounter -= ticksPerMsec;
        pMidi->run(); // this ends up being called every msec
    }
}

The pMidi->run() converts the tempo into MIDI timing (480 ticks per second, but that's irrelevant here), assuming it is being called every msec.

Of course this is real time and works fine in real time. When I write the file in real time all is well.

Is there a way to make offline rendering faster using this method?


Should I be calling this callback as fast as I can in another thread, instead of the device manager calling it at the sample rate?

I apologize that I am not an audio guru.


Well, thanks guys, for your help and for clearing up a few matters.

I got it to work using the above code, which is what I was trying to do without too much modification.

When writing to file, I just run the audioDeviceIOCallback loop 2x, 4x, etc.


Still really unclear exactly what you're trying to do, but that code isn't the way to do it. You definitely need to re-read this thread and think about it all afresh.


Yes, I agree that my quick hack was not the correct method, and I knew that at the time, but it led me down the correct path of calling the callback faster, not speeding up the tempo.

After fiddling around for an hour or so, I am now calling the audioDeviceIOCallback on my own thread, and it is running as fast as the plugins can process the data. A 5-minute, 4-track piece is taking about 18 seconds. Fast enough for my purposes.

What was confusing me was that I was not taking control of the callback and was relying on the normal callback running at the set sample rate.

Thanks for the help and I now understand a little more about audio processing.