How to emulate live rendering of Synthesiser in tests?

Hello fellow JUCErs,

 

I am in the process of writing performance tests for my sampler. I want to check different implementations of DFD (direct-from-disk streaming) automatically.

What I don't know is how to emulate the calls to AudioProcessor::processBlock (buffer, midiMessages) as if they were happening in real time / live. I will use a MIDI file to drive the sound (filling the midiMessages buffer on every call to processBlock), but how do I implement the functionality that asks for the next block?

Has anyone written such a test?

 

Thanks in advance for any responses.

I think you got it backwards.

You want to emulate real-time performance, so your audio needs to run in real time.

What you should do is simulate the MIDI input, so you have a reproducible input.

The way I want to simulate the MIDI input is by filling the midiMessages buffer with the corresponding (in time) events from a MIDI file.
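Roughly what I have in mind is something like this (just a sketch; loadedMidiFile, sampleRate and the helper name are placeholders, and the MidiFile is assumed to have been read and run through convertTimestampTicksToSeconds() already):

// Copies the events falling inside the current block from the MIDI file
// into the MidiBuffer, with sample-accurate offsets within the block.
void fillMidiBufferForBlock (juce::MidiBuffer& midiMessages,
                             juce::int64 blockStartSample, int numSamples)
{
    const double blockStart = blockStartSample / sampleRate;
    const double blockEnd   = (blockStartSample + numSamples) / sampleRate;

    if (auto* track = loadedMidiFile.getTrack (0))
    {
        for (int i = 0; i < track->getNumEvents(); ++i)
        {
            const auto& message = track->getEventPointer (i)->message;
            const double eventTime = message.getTimeStamp();   // in seconds, after convertTimestampTicksToSeconds()

            if (eventTime >= blockStart && eventTime < blockEnd)
                midiMessages.addEvent (message, (int) ((eventTime - blockStart) * sampleRate));
        }
    }
}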

But how do I get the continuous calling of processBlock, as it happens in the PluginProcessor?

Do I have a way to add my test object as a callback to the currently running audio device (or whatever it's called)?

Looking at the demos, if my code runs in a standalone app environment, I can register an audio callback like this:

deviceManager.addAudioCallback (&audioSourcePlayer);

How do I do the same thing in a plugin?

I will use the getNextAudioBlock method only as a "clock": I only need the caller of this method to pick the right time to invoke it and the proper block size to invoke it with.

My idea is to see, if N voices are running for X seconds, how many buffer refills the DFD will skip. I don't want to hardcode anything, so I thought a test would be the most appropriate form.

The DAW calls your plugin.

You can run your plugin in your own standalone application if you want though.

In that case, you can check the audio plugin host examples to quickly wrap your own VST host.
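For example, a minimal standalone driver could look something like this (a rough sketch; AudioProcessorPlayer is the JUCE class that streams audio through an AudioProcessor):

// The device manager's callback will then call the processor's
// processBlock in real time, just like a host would.
juce::AudioDeviceManager deviceManager;
juce::AudioProcessorPlayer player;

deviceManager.initialiseWithDefaultDevices (0, 2);   // no inputs, stereo output

std::unique_ptr<juce::AudioProcessor> processor (createPluginFilter());
player.setProcessor (processor.get());
deviceManager.addAudioCallback (&player);

// ... run the test, then tear down:
deviceManager.removeAudioCallback (&player);
player.setProcessor (nullptr);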

How do I do the same thing in a plugin?

You don't. You create a plugin class, which needs to inherit from AudioProcessor. Then, in there you implement processBlock, which is the audio callback the host will call:

class MyAudioPluginClass : public AudioProcessor
{
    void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages) override
    {
        // audio code goes here...
    }
};

To hook this class up with the host you have to declare & define this function in your code:

AudioProcessor* JUCE_CALLTYPE createPluginFilter()
{
    return new MyAudioPluginClass();
}

This mechanism replaces registering an audio callback in the standalone case.

A plugin also doesn't have a deviceManager or anything like that. The host takes care of all audio and MIDI inputs/outputs and just calls that processBlock function whenever it decides the plugin should render some audio. The plugin doesn't have any control over if, when, or how often that happens.

This takes me a few steps closer to my goal.

So what I thought I could do is create a base class for my tests, which would only be inherited by tests that want to be called on processBlock. I can add an #if RUNNING_UNIT_TESTS block to the processBlock method of my PluginProcessor and call the same method on the currently running unit test (not on all of them).
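Something like this (just a sketch; the pointer to the currently running test and its processBlock method are made-up names from my imagined base class):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages) override
{
   #if RUNNING_UNIT_TESTS
    // Forward the audio callback to whichever test is currently running.
    if (currentlyRunningTest != nullptr)
        currentlyRunningTest->processBlock (buffer, midiMessages);

    return;
   #endif

    // ... normal rendering
}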

But then I immediately remembered that my tests run on a different thread... Even if I message the test thread, it won't be the same as running on the same thread... So I'm guessing that's the end of that plan.

These 2 questions remain:

  1. Can I somehow emulate, within the test itself, the algorithm that decides what the next block size will be and when to call processBlock?
  2. For offline rendering tests, I am guessing it should be entirely synchronous (the sampler should block the thread until it has all the data) and I can just make a loop with a fixed block size (or better, one varying randomly within a range), right? A sketch of what I mean follows below.
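
For question 2, something along these lines is what I have in mind (a rough sketch; mySampler, totalLengthSeconds and the fillMidiBufferForBlock helper from my earlier post are placeholders):

// Offline "render as fast as possible" loop with randomly varying block sizes.
const double sampleRate = 44100.0;
const int maxBlockSize  = 1024;

mySampler.setCurrentPlaybackSampleRate (sampleRate);

juce::AudioBuffer<float> buffer (2, maxBlockSize);
juce::MidiBuffer midiMessages;

juce::int64 samplesRendered = 0;
const auto totalSamples = (juce::int64) (totalLengthSeconds * sampleRate);

while (samplesRendered < totalSamples)
{
    const int numSamples = juce::Random::getSystemRandom().nextInt (juce::Range<int> (64, maxBlockSize + 1));

    buffer.clear();
    midiMessages.clear();
    fillMidiBufferForBlock (midiMessages, samplesRendered, numSamples);

    mySampler.renderNextBlock (buffer, midiMessages, 0, numSamples);
    samplesRendered += numSamples;
}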

Not sure I understand what you want to do... but messing with threads and things like that sounds like something you should not be doing from within an audio plug-in.

And you should never ever block the rendering thread because this will actually clog the whole host and all other plug-ins it's running as well!

In the Demo app there is a UnitTest demo. I've taken that demo and I add the visual component it defines to the PluginEditor (this is only compiled if a preprocessor definition, SHOULD_RUN_UNIT_TESTS, is set to 1). From that component I start the tests on a new thread (as in the demo). So my plugin's UI is replaced by the unit test UI if I set the definition, and it runs normally if I don't.

What I am trying to do:

  1. to initialise the Sampler I am building (it inherits from JUCE's Synthesiser);
  2. to render its audio data in real time;
  3. to see how my direct-from-disk streaming is performing (how many times the DFD will be late delivering a chunk of samples for a given period of time, streaming buffer size, number of voices, etc.).

Does this explain it?

As far as I can see from the code of the Plugin Host, the DAW isn't responsible for deciding how big the next block will be and precisely when it should request it from the plugin. CoreAudio (or whatever the system audio is called on the Mac) seems to do the calling back, and the DAW simply passes this call down the chain? Is this correct? The DAW seems to only configure the settings of a system device abstraction, which implements the algorithm of continuously asking for samples?

 the DAW isn't responsible for deciding how big the next block will be and precisely when it should request it from the plugin

No, the DAW is responsible!

You are right that ultimately the DAW also has its own audio callback that is owned by the OS. But all DAWs I know of have their own settings for block size, sample rate, etc. and don't use the system ones. Also, even if the main audio is running, the DAW still decides whether to call the plugin at all (maybe the plugin is bypassed in the DAW). Also, DAWs may use a different block size for plugins than for the "outer" audio callback, or even varying block sizes. Some DAWs do that.
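
So from the plug-in's point of view, the only safe thing to do inside processBlock is to rely on what that particular call hands you, something like this (a sketch):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages) override
{
    // The block size may be anything up to the maximum announced in
    // prepareToPlay, and it may change from one call to the next.
    const int numSamples = buffer.getNumSamples();

    // ... render exactly numSamples samples here, never a hard-coded amount
}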

And for offline rendering it may use completely different settings than for real-time rendering! Offline rendering will be completely independent of the OS's audio callback. The plug-in doesn't know whether the DAW is doing that or not. As a plug-in, you are really at the mercy of the DAW and whatever weird stuff it decides to do with you!