[DSP module discussion] Structure of audio plug-ins API

Hello guys !

During the development of the DSP module, one discussion subject came up very often: the structure of processing code, since we designed a few audio effect classes.

As all plug-in developers here already know, JUCE doesn’t provide a full “structure by design” for processing code and audio plug-ins. I mean that you can’t just use the JUCE classes and provided functions to get a fully functional plug-in that can be commercially released. That’s not necessarily a bad thing, since you get a lot of freedom this way over how you want to code your API, but in my opinion it might be confusing for newcomers, or when designing an audio processing class for JUCE.

So, to develop a plug-in, you usually need to follow some of the steps below:

  • Create a new “audio plug-in project” in the Projucer to get the PluginProcessor h+cpp files created, inheriting from the AudioProcessor class
  • Add some parameters in the constructor using the JUCE AudioProcessorParameter classes, or custom ones, to handle parameter changes properly (and automation, presets, undo/redo, load/save from the DAW etc.)
  • Fill in the content of the AudioProcessor functions prepareToPlay, releaseResources and processBlock.

I’ll just be talking about the audio/DSP side of things here, so let’s forget in this thread that we have a PluginEditor as well.

Now imagine you don’t want to code everything in these last three functions, but instead have processing classes and objects with functions to call everywhere. For example, you could use the JUCE filtering classes, any audio effect class you have made, etc. You want to initialise them properly in the prepareToPlay function, to set their internal sample rates. You also want to update other internal variables of theirs when the user moves a knob. And sometimes you just want to reset something, without updating anything else (imagine a meter component where the maximum amplitude can be reset with a mouse click).

For these very reasons, I tend to give every one of my audio processors some additional functions, and I end up with this:

  • A constructor + a destructor
  • The prepareToPlay function to set the internal sample rate variable and the internal maximum audio buffer size variable. Sometimes it is useful to use the internal number of channels information here as well.
  • The releaseResources function, which I don’t use very often, and which can be called in a destructor or when the processor is made inactive
  • A function to make the processor active or inactive
  • Some functions to set/get the interface parameters (probably some AudioProcessorParameters or anything atomic, like a cutoff frequency)
  • A reset function to reset some internal state variables (like a filter state variable such as the last value of the output)
  • An update function to set the new values of the internal variables (like a filter coefficient) when the interface parameters have changed
  • The processBlock function to do the actual processing, using the internal state variables and internal variables

Since I don’t like to write the same code several times (the DRY principle), the internal variables can only be updated in the update function, not in the constructor or anywhere else. My set functions also set a boolean called mustUpdateParameters to true, and this boolean is set back to false in the update function. The prepareToPlay function always does extra initialisation and calls update + reset at its end. The processBlock function checks the state of mustUpdateParameters and calls update if needed. We can also optimise what happens in the update function a little, so that not all of the updating code runs on every call, but only the parts for the specific parameters that have changed…
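The set/update interplay described above can be sketched like this in plain C++ (std::atomic standing in for JUCE’s Atomic; the class and parameter names are hypothetical). Using exchange() makes the check-and-clear of the dirty flag safe even when the setter runs on the message thread while processing runs on the audio thread:

```cpp
#include <atomic>
#include <cmath>

class SmoothedGain
{
public:
    // Called from the GUI/message thread: store the parameter and raise the flag.
    void setGainDecibels (float newGainDb)
    {
        gainDb.store (newGainDb);
        mustUpdateParameters.store (true);
    }

    void prepareToPlay (double) { updateParameters(); reset(); }
    void reset()                { lastOutput = 0.f; }

    // Called from the audio thread.
    void processBlock (float* samples, int numSamples)
    {
        // exchange() clears the flag and tells us whether it was set, so the
        // (possibly expensive) recomputation runs at most once per block.
        if (mustUpdateParameters.exchange (false))
            updateParameters();

        for (int i = 0; i < numSamples; ++i)
            samples[i] *= linearGain;
    }

    float getLinearGain() const { return linearGain; }

private:
    void updateParameters()
    {
        linearGain = std::pow (10.f, gainDb.load() / 20.f);  // dB -> linear
    }

    std::atomic<float> gainDb { 0.f };
    std::atomic<bool> mustUpdateParameters { false };
    float linearGain = 1.f;
    float lastOutput = 0.f;   // would be the filter state in a real effect
};
```

Note that only the interface parameter and the flag are atomic; the derived internal variable (linearGain) is touched exclusively on the audio thread.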

So that’s what I do. What do you think of this approach? Do you use something like that as well? Any suggestions for better practices?

I hope I will get a lot of feedback here, since all your answers will probably be read and studied by me and the JUCE team for the next JUCE developments :wink:

7 Likes

example code showing this design pattern?

I’m using a similar pattern indeed. I have a full_setup() method that resets the internal state, a setup() method that only recomputes the state (called by full_setup, of course), and setters/getters for parameters (as well as input and output sampling rates).
Making something active or not is done by changing what is connected where, and there is only one call to compute the full pipeline.
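If I understand the split correctly, it could be sketched like this in plain C++ (all names hypothetical): full_setup() re-derives everything and clears the running state, while setup() only recomputes the coefficients from the current parameters:

```cpp
#include <cmath>

class OnePoleSmoother
{
public:
    void set_time_constant (float seconds) { timeConstant = seconds; setup(); }
    void set_sample_rate (double rate)     { sampleRate = rate; full_setup(); }

    // Recompute derived coefficients only (cheap, keeps the running state).
    void setup()
    {
        coeff = std::exp (-1.f / (timeConstant * (float) sampleRate));
    }

    // Recompute everything *and* reset the internal state.
    void full_setup()
    {
        setup();
        state = 0.f;
    }

    float process (float input)
    {
        state = input + coeff * (state - input);
        return state;
    }

    float get_coeff() const { return coeff; }
    float get_state() const { return state; }

private:
    float timeConstant = 0.01f;
    double sampleRate = 44100.0;
    float coeff = 0.f;
    float state = 0.f;
};
```

A parameter change goes through setup() and keeps the signal running, while a sample-rate change goes through full_setup() and starts from a clean state.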

@matkatmusic you mean this?


and https://github.com/WeAreROLI/JUCE/tree/master/examples/DSPDemo

Here is an example. This is the simplest filter class that could be written in JUCE using the proposed structure. I have simplified a lot of things so you can get my point. For example, I could have used the AudioProcessor class as a base class with interface parameters being AudioProcessorParameters, or I could have used AudioBuffers / dsp::AudioBlocks in the processSamples function, or the new ProcessorWrapper / ProcessContext classes.

// ==============================================================================
/**
    Most simple first-order TDF2 lowpass filter class in the world in JUCE
*/
class VerySimpleFilter
{
public:
    // ==============================================================================
    /** Constructor. */
    VerySimpleFilter()
    {
        cutoffFrequency.set (20000.f);
    }

    /** Destructor. */
    ~VerySimpleFilter() noexcept
    {
        releaseResources();
    }

    // ==============================================================================
    /** Sets the value of the cutoff frequency (interface parameter). */
    void setCutoffFrequency (float frequency) noexcept
    {
        jassert (frequency > 0 && frequency <= static_cast<float> (sampleRate * 0.5));

        cutoffFrequency.set (frequency);
        mustUpdateParameters = true;
    }

    /** Returns the value of the interface parameter cutoff frequency. */
    float getCutoffFrequency() noexcept
    {
        return cutoffFrequency.get();
    }

    /** Sets the activity status of the filter class. */
    void setActive (bool newValue) noexcept
    {
        isActive = newValue;
    }

    /** Returns the activity status of the filter class. */
    bool getActive() noexcept
    {
        return isActive;
    }

    // ==============================================================================
    /** 
        Initializes the processing by setting the sample rate and calling 
        updateParameters + reset
    */
    void prepareToPlay (double newSampleRate, int maximumNumberOfSamples) noexcept
    {
        jassert (newSampleRate > 0);
        ignoreUnused (maximumNumberOfSamples);  // a real class would size its buffers here

        sampleRate = newSampleRate;

        updateParameters();
        reset();

        isReady = true;
    }

    /** Releases the resources used by the class. */
    void releaseResources() noexcept
    {
        isReady = false;
    }

    /** Resets the state variables of the filter. */
    void reset() noexcept
    {
        v1 = 0.f;
    }

    // ==============================================================================
    /** Processes an array of samples. */
    void processSamples (float* samples, int numSamples) noexcept
    {
        if (isActive && isReady)
        {
            if (mustUpdateParameters)
                updateParameters();

            for (auto i = 0; i < numSamples; i++)
            {
                auto input = samples[i];

                auto output = b0 * input + v1;
                v1 = b1 * input - a1 * output;

                samples[i] = output;
            }

            JUCE_SNAP_TO_ZERO (v1);
        }
    }

private:
    // ==============================================================================
    /** 
        This function is called to update the internal variables of the filter when 
        the interface parameter has been changed.
    */
    void updateParameters() noexcept
    {
        mustUpdateParameters = false;

        auto tanw0 = std::tan (cutoffFrequency.get() * float_Pi / static_cast<float> (sampleRate));
        auto tanw0plusinv = 1.f / (tanw0 + 1.f);

        a1 = (tanw0 - 1.f) * tanw0plusinv;
        b0 = tanw0 * tanw0plusinv;
        b1 = b0;
    }

    // ==============================================================================
    bool isActive = true;                // tells if the filter class is active
    bool isReady = false;                // tells if the filter can be used (if prepareToPlay has been called at least once)
    bool mustUpdateParameters = false;   // tells if an interface parameter has been changed

    // ==============================================================================
    Atomic<float> cutoffFrequency;       // interface parameter

    // ==============================================================================
    double sampleRate = 44100.0;         // sample rate (internal variable)
    float b0 = 0.f, b1 = 0.f, a1 = 0.f;  // filter coefficients (internal variables)
    float v1 = 0.f;                      // TDF2 structure state variable

    // ==============================================================================
    JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (VerySimpleFilter)
};

So this thread is not so much DSP module related after all…? /confused/

I am eager for insights about what’s in the module, but a high-level overview is missing…
But thanks Ivan and JUCE for the module! I can’t wait to get my hands dirty, just don’t have the time right now…

@daniel It is more about the making of the DSP module, and the next iterations of it + JUCE :wink: These kinds of questions have been asked many times while developing the API of all the new classes.

If you want more high-level information, the JUCE team will put additional information on their website soon. I’ll also create some more practical threads here, and some new entries on my dev blog about the development and philosophy of the DSP module.

Moreover, I think these considerations are important for you guys to structure your code efficiently, and to use some DSP module classes such as the filters, the Processing classes or the Convolution class in your next audio plug-ins :wink:

The code structure you can see above could be the one in your main PluginProcessor files, with the internal variables being some dsp::IIR::Filter or dsp::Convolution objects, and you would need some calls to their init/set functions as well. Where would you put them? That’s the main topic.

Link please…

Thanks,

Rail

1 Like

Which platform are you using? And it’s probably better to create a new topic for this question!

With this approach, your parameter control rate is tied to the block size. That works fine if the block size is 16/32/64 samples, but any higher and it might not be smooth enough. Conversely, if the block size were lower, you might be updating too often, and therefore it’s less efficient.

Is that a concern, or does a developer really need to worry about it? I know in Csound (and I guess Reaktor) they have different rates for control signals.

It’s something I’ve always been conscious of but it’s hard to make an elegant (and efficient) solution for.

As the parameter change is given by the DAW (by default, you could have a smoother on top), I don’t think the API has an issue.
What would be your proposal?

Well, you could have a smaller internal block size if the host block size was too high. So if the host block size was 128 and you wanted to update parameters/control signals every 32 samples, you could re-run the process block (and therefore update the parameters) 4 times at a 32-sample internal block size. It’s a bit worse for cache performance, I’d imagine, but I’m not sure of the full extent.
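That splitting idea can be sketched in plain C++ (names hypothetical; the atomic load stands in for whatever parameter-update mechanism the plug-in uses). The processor walks the host buffer in fixed-size sub-blocks and refreshes parameters once per sub-block:

```cpp
#include <algorithm>
#include <atomic>

// Hypothetical processor: the host hands us e.g. 128 samples, but we walk the
// buffer in sub-blocks so parameters are refreshed every `controlBlockSize`
// samples instead of once per host callback.
struct SubBlockGain
{
    static constexpr int controlBlockSize = 32;

    std::atomic<float> targetGain { 1.f };   // written by the GUI thread
    float currentGain = 1.f;                 // used by the audio thread
    int updateCount = 0;                     // how many times we pulled parameters

    void processBlock (float* samples, int numSamples)
    {
        int pos = 0;
        while (pos < numSamples)
        {
            currentGain = targetGain.load();                 // per-sub-block update
            ++updateCount;

            auto len = std::min (controlBlockSize, numSamples - pos);
            for (int i = 0; i < len; ++i)
                samples[pos + i] *= currentGain;

            pos += len;
        }
    }
};
```

The std::min handles the case where the host block size is not a multiple of the control block size, so the last sub-block is simply shorter.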

Likewise, if you’re aiming for a host block size of something like 16 samples, you might only want to update parameters every 2/3/4 callbacks. So you’d have to track that however you wanted.

As I said, I’m not sure if that’s something to really worry about, but it does affect your high-end and low-end customers in different ways.

My question was: why would this change the API? This is something that can be added on top of any filter, and it doesn’t change the behavior of the filter itself.

Fair enough, I wasn’t really talking about the API.

One thing I don’t like currently is adding a fixed number of parameters at runtime via the constructor. This should all be resolved at compile time. The AudioProcessorValueTreeState is not great for this. You end up having to search by ID each time (which is slow) in the processor, or else keep a pointer to the parameter. I find it a bit messy. Also, the value-to-text / text-to-value mechanism is not something I love, but maybe there is a reason it’s that way.
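On the lookup cost: with JUCE’s AudioProcessorValueTreeState you can call getRawParameterValue() once, in the constructor say, and cache the returned pointer instead of searching by ID in every processBlock. Here is a plain-C++ sketch of the same idea, with a hypothetical ParameterStore standing in for the value tree:

```cpp
#include <map>
#include <string>

// Hypothetical stand-in for a parameter container that is searched by ID.
struct ParameterStore
{
    std::map<std::string, float> values;

    float* getRawParameterValue (const std::string& id)
    {
        auto it = values.find (id);                // O(log n) string lookup
        return it != values.end() ? &it->second : nullptr;
    }
};

struct CutoffProcessor
{
    // Look the parameter up once and cache the pointer...
    explicit CutoffProcessor (ParameterStore& store)
        : cutoff (store.getRawParameterValue ("cutoff")) {}

    // ...so the audio thread only dereferences it, with no search per block.
    float currentCutoff() const { return *cutoff; }

private:
    float* cutoff;
};
```

This works because the container guarantees stable addresses for its stored values (std::map nodes here, the parameter objects in JUCE), so the cached pointer stays valid for the lifetime of the store.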

I’d say most of the annoyances come from managing parameters in general.

Agreed. The structure itself is nice for interacting with the GUI, but it’s a pain to use in the DSP code, because you can’t update your parameters easily from it.

You’re right: in the specific case of parameter automation, for example, we might get something wrong by using only this approach if the audio buffer size is high. I don’t know if a lot of commercial plug-ins have a solution for this specific issue, such as attaching some time information to the parameter changes so they can be synchronized with the audio data. I know I had problems like this with MIDI as well, when I tried to make a MIDI file player, because DAWs all handle tempo changes differently…

But as Matthieu said, a solution to this problem could be used without changing anything in my filter code API. The top PluginProcessor could handle everything and call the filter process function every 8 or 16 samples, for example…

About the low buffer size issue, on the other hand, I’m not sure it’s that wrong to just do an if statement every 8-16 samples. Again, the update function call happens only when mustUpdateParameters is true.

And for audio-rate parameter changes, for example if there is an internal LFO somewhere, I would do things differently, to save CPU for example, but that’s another subject, and it wouldn’t change anything in the structure of the API :wink:

Well, I have never used the AudioProcessorValueTreeState in a plug-in yet, because I can’t do some of the things I do with the other approaches this way. But I agree; it’s also what I meant in my first message here with “no JUCE plug-in structure by design”. With JUCE we now have a lot of freedom for coding plug-ins, but some parts might be too complex or not efficient enough yet, such as the handling of parameters. I know the JUCE team agrees with that too, and the talks here might help everybody improve the next iterations of the SDK :wink:

4 Likes

My approach to dsp module programming is very much in line with the example you have given. With a few additions:

  • The main processor is not necessarily the only parameter listener. Where appropriate, an included processor module is derived from the pure DSP module, implementing a parameter listener interface of some kind.
  • With respect to sample-accurate parameter automation processing, I would very much love to see a parameter change event queue that could ideally even be integrated and interleaved with the MIDI buffer. The main processor’s process block function is responsible for inserting the event at the appropriate position in the signal chain, thus avoiding unnecessary fragmentation of the process buffer in the other stages.
  • I am not a friend of control-rate processing, because in my experience it can introduce nasty artefacts. I remember the old Reaktor days when the spectrogram often had a peak at 400 Hz or 800 Hz depending on the selected control rate. There are cases, of course, where a continuous parameter update would produce too much CPU overhead and a recalculation every n steps is required (e.g. an LFO-controlled filter). In such situations I usually introduce some random jitter to break the mode.
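The event-queue idea from the second bullet can be sketched in plain C++ (names hypothetical): each parameter change carries a sample offset, and the process function renders the audio in segments between events, much like a plug-in already walks its MIDI buffer:

```cpp
#include <algorithm>
#include <vector>

struct ParamEvent
{
    int sampleOffset;   // position within the current block
    float newGain;      // the parameter value taking effect there
};

// Process a block with sample-accurate gain changes: the buffer is split at
// each event's offset, similar to interleaving with a MIDI buffer.
inline void processWithEvents (float* samples, int numSamples,
                               std::vector<ParamEvent> events, float& gain)
{
    std::sort (events.begin(), events.end(),
               [] (const ParamEvent& a, const ParamEvent& b)
               { return a.sampleOffset < b.sampleOffset; });

    int pos = 0;

    for (const auto& e : events)
    {
        for (; pos < e.sampleOffset && pos < numSamples; ++pos)
            samples[pos] *= gain;                  // segment before the event

        gain = e.newGain;                          // apply the change exactly here
    }

    for (; pos < numSamples; ++pos)                // remainder after the last event
        samples[pos] *= gain;
}
```

Only the top-level process function needs to know about the queue; the DSP stages themselves just see a sequence of shorter, uninterrupted segments.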
1 Like

I know there may be some resistance to relying on the Projucer, but I’d love a feature in it like RackAFX where the tool creates and manages the parameter boilerplate for you.

Something else to consider: I don’t know if you’ve ever messed with Reaktor or analog synths, but something that might be cool is treating “parameters” as control signals rather than fixed values. Then when you create your processor chain, you would be connecting audio outputs to audio inputs, and control signal “generators” to cooking-function blocks and control signal inputs.

Didn’t see it mentioned here yet, but something that would extend the functionality of the API and help with doing something similar (or give users a framework to do it) would be support for multi-rate processor chains. A common use case, for example, would be a dynamics processor with a decimated sidechain signal path.
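A toy version of that multi-rate idea in plain C++ (all names hypothetical, and the "dynamics" are a crude clamp rather than a real compressor): the sidechain decision only runs every `decimation` samples, while the main path still runs at full rate:

```cpp
#include <cmath>

// Hypothetical limiter-ish processor: the gain-reduction decision (the
// "sidechain") runs at 1/decimation of the audio rate.
struct DecimatedSidechainClamp
{
    static constexpr int decimation = 4;

    float threshold = 0.5f;
    float currentGain = 1.f;
    int sidechainEvals = 0;   // counts how often the sidechain actually ran

    void processBlock (float* samples, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            if (i % decimation == 0)               // decimated sidechain path
            {
                float peak = std::fabs (samples[i]);
                currentGain = peak > threshold ? threshold / peak : 1.f;
                ++sidechainEvals;
            }

            samples[i] *= currentGain;             // full-rate main path
        }
    }
};
```

A real implementation would low-pass or peak-hold the sidechain before decimating it, but the structural point is the same: two paths in one chain running at different rates.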

I’m sure there are plenty of devs like myself who only use the Projucer to initialize their project(s) and never use it again (for the project).

Cheers,

Rail

2 Likes