[DSP module discussion] Structure of audio plug-ins API

I don’t think that’s what was meant: I think that the reference to MidiBuffer was because the MIDI messages added there are timestamped, so every time processBlock() gets called, the plug-in can look into the MidiBuffer it received and know at which point within the current audio buffer each of those MIDI messages should be processed.

Doing so for automation would probably mean having some sort of similar array, containing tuples of (parameter, value, timestamp), so that the correct value of each changed parameter can be applied sample-accurately during each processBlock(), and not only between calls to it.
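As a rough sketch of what that could look like (all names here are hypothetical, this is not an existing JUCE API): a timestamped change entry, plus a processBlock() helper that splits the buffer at each timestamp.

    // assumes the JUCE headers and <vector> are included
    struct ParameterChange
    {
        int parameterIndex;    // which parameter changed
        float newValue;        // its new (normalised) value
        int sampleOffset;      // position relative to the start of the current buffer
    };

    // 'changes' is assumed to be sorted by sampleOffset; processSubBlock() and
    // applyParameter() are hypothetical helpers of the plug-in.
    void processWithSampleAccurateChanges (AudioBuffer<float>& buffer,
                                           const std::vector<ParameterChange>& changes)
    {
        int start = 0;

        for (const auto& change : changes)
        {
            processSubBlock (buffer, start, change.sampleOffset - start);
            applyParameter (change.parameterIndex, change.newValue);
            start = change.sampleOffset;
        }

        processSubBlock (buffer, start, buffer.getNumSamples() - start);
    }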

If that’s of any help: if I remember correctly, VST3 has a different approach to parameter changes during processing: it assumes that parameters change linearly between an initial and a final value across each processing callback.
With that in place, parameters that only change between two subsequent callbacks are easily handled by giving the automated parameters equal initial and final values for each callback.
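Applied to a single gain parameter, that model boils down to a per-block linear ramp, something along these lines (illustrative only):

    // Interpolates the parameter linearly across the block; a host that only
    // updates once per buffer simply passes startGain == endGain.
    void applyGainRamp (float* samples, int numSamples, float startGain, float endGain)
    {
        const float step = (endGain - startGain) / (float) numSamples;
        float gain = startGain;

        for (int i = 0; i < numSamples; ++i)
        {
            samples[i] *= gain;
            gain += step;
        }
    }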


Maybe I was expressing myself in an overcomplicated way. Here is an example: Imagine your plugin consists of a synth and an added reverb effect.
All I’m saying is that I would let the synth and the reverb each listen to their own parameters, instead of having the main processor do it.
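A minimal sketch of that idea, assuming an AudioProcessorValueTreeState called apvts and parameter IDs ("reverbSize", "reverbDamp") invented for this example:

    // assumes the JUCE headers are included (e.g. via JuceHeader.h)
    class ReverbSection  : private AudioProcessorValueTreeState::Listener
    {
    public:
        explicit ReverbSection (AudioProcessorValueTreeState& state)  : apvts (state)
        {
            // this sub-module only listens to the parameters it owns
            apvts.addParameterListener ("reverbSize", this);
            apvts.addParameterListener ("reverbDamp", this);
        }

        ~ReverbSection() override
        {
            apvts.removeParameterListener ("reverbSize", this);
            apvts.removeParameterListener ("reverbDamp", this);
        }

    private:
        void parameterChanged (const String& parameterID, float newValue) override
        {
            auto params = reverb.getParameters();

            if (parameterID == "reverbSize")        params.roomSize = newValue;
            else if (parameterID == "reverbDamp")   params.damping  = newValue;

            reverb.setParameters (params);
        }

        AudioProcessorValueTreeState& apvts;
        Reverb reverb;
    };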

Sure! That’s why I said the old Reaktor days (of version 3); that was before Vadim joined NI. He is a great teacher btw and one of my heroes.

VST3 also allows events to be in the middle of a sample buffer. That’s why they are delivered by the host as a queue of time-stamped events (vst::IParamValueQueue).
Here is the image from the vst3 docs that explains it:
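In code, reading those queues from the raw VST3 SDK looks roughly like this (a sketch based on the SDK’s IParameterChanges / IParamValueQueue interfaces; a JUCE plug-in normally never sees this directly):

    // Assumes the VST3 SDK headers, e.g. "pluginterfaces/vst/ivstaudioprocessor.h"
    // and "pluginterfaces/vst/ivstparameterchanges.h".
    // Inside IAudioProcessor::process (Steinberg::Vst::ProcessData& data):
    if (auto* changes = data.inputParameterChanges)
    {
        for (Steinberg::int32 i = 0; i < changes->getParameterCount(); ++i)
        {
            if (auto* queue = changes->getParameterData (i))
            {
                const auto paramID = queue->getParameterId();

                for (Steinberg::int32 p = 0; p < queue->getPointCount(); ++p)
                {
                    Steinberg::int32 sampleOffset = 0;
                    Steinberg::Vst::ParamValue value = 0.0;

                    if (queue->getPoint (p, sampleOffset, value) == Steinberg::kResultTrue)
                    {
                        // apply 'value' for 'paramID' at 'sampleOffset' within this block
                    }
                }
            }
        }
    }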


I think there’s a good point in snapping to zero regardless of any de(normalization). With snapping you zero out any sound below 1.0e-8f (-160 dB?) and possibly save CPU cycles by cutting the tail when it’s not audible anyway. Quite possibly you could snap at a higher value as well without hearing any artifacts…

That won’t save any CPU cycles if your processor does not switch into an idle mode at the same time. That of course also requires some effort and needs to be implemented somehow.
Also, be aware that you will have to employ the denormalization routine within each and every feedback iteration that converges to zero. Preventing the CPU from entering denormal processing by setting a flag is a much cheaper and cleaner way to solve the problem.
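In JUCE that flag-based approach is typically a one-liner with ScopedNoDenormals; a rough sketch (MyProcessor is a placeholder name):

    void MyProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer&)
    {
        // Sets the FTZ/DAZ bits for the duration of the block, so feedback loops
        // decaying towards zero never hit the denormal penalty.
        ScopedNoDenormals noDenormals;

        // ... the actual DSP ...
    }

    // Roughly equivalent on x86/SSE without JUCE:
    //   #include <xmmintrin.h>
    //   #include <pmmintrin.h>
    //   _MM_SET_FLUSH_ZERO_MODE (_MM_FLUSH_ZERO_ON);
    //   _MM_SET_DENORMALS_ZERO_MODE (_MM_DENORMALS_ZERO_ON);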

That said, I would still advocate using a denormalization macro anyway. It can easily be redefined to do nothing when it is not needed in a particular build.
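Something along these lines, for illustration (the macro name and threshold are made up):

    #include <cmath>

    #ifndef MYPLUGIN_DISABLE_SNAP_TO_ZERO
     #define SNAP_TO_ZERO(x)  do { if (std::abs (x) < 1.0e-8f) (x) = 0.0f; } while (false)
    #else
     #define SNAP_TO_ZERO(x)  do {} while (false)   // compiled out in builds that don't need it
    #endif

    // Usage inside a feedback path:
    //   feedbackSample = feedbackSample * decay + input;
    //   SNAP_TO_ZERO (feedbackSample);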

Yes, you certainly have to have a use case which uses less CPU when silent than when …not

My case is a multi-voiced synth. In release mode the voices decay exponentially towards zero, which they (theoretically) will never reach. If I snap to zero I can end the release at an arbitrary point, even before the levels have reached the denormalization swamp, at e.g. -80, -90, -100 dB or whatever feels appropriate. After the release the voice is idle and doesn’t draw any CPU cycles.
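A bare-bones sketch of that scheme (names invented; a real voice would of course render a waveform, not just its envelope):

    struct Voice
    {
        float level  = 1.0f;
        float decay  = 0.999f;        // per-sample release multiplier
        bool  active = false;

        void renderRelease (float* out, int numSamples)
        {
            if (! active)
                return;               // an idle voice costs nothing

            for (int i = 0; i < numSamples; ++i)
            {
                level *= decay;
                out[i] += level;      // a real voice would multiply its waveform by 'level'
            }

            if (level < 3.16e-5f)     // roughly -90 dBFS: snap to zero, end the voice
            {
                level  = 0.0f;
                active = false;
            }
        }
    };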

If you have a look in the develop branch, you’ll understand why I created this topic, and why I created all these high order lowpass filter design classes :slight_smile:


Are your filter classes 32-bit DF1 biquads, or are they SVF?


Obviously, IIRFilter is not SVF, that would be… State Variable Filter.
Then, looking at https://github.com/WeAreROLI/JUCE/blob/master/modules/juce_dsp/processors/juce_IIRFilter_Impl.h, I would say TDF2.

Well, I can’t really say that Andy’s papers are that clear to me, and SVF just means “State Variable Filter” which is an analog circuit with one input and three outputs giving the lowpassed / bandpassed / highpassed version of the input signal. Most of the digital filter prototypes are based on it.

What you really mean is DF1 versus “a discretization of the SVF circuit which respects the topology of the original circuit and has a good time-varying behaviour when the parameters are modulated” :slight_smile: That’s the purpose of my new class StateVariableFilter, and I call the structure “Topology Preserving” because of Vadim Zavalishin’s paper (on which most of the maths is based) :
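For a rough idea of what such a topology-preserving discretization looks like per sample, here is a minimal SVF core in the style of Zavalishin’s / Simper’s derivations (variable names are mine, and this is not the JUCE StateVariableFilter code):

    #include <cmath>

    struct TPTSVF
    {
        float g = 0.0f, k = 1.0f;           // g = tan (pi * fc / fs), k = 1 / Q
        float a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
        float ic1eq = 0.0f, ic2eq = 0.0f;   // trapezoidal integrator states

        void setCutoff (float cutoffHz, float q, float sampleRate)
        {
            constexpr float pi = 3.14159265f;
            g  = std::tan (pi * cutoffHz / sampleRate);
            k  = 1.0f / q;
            a1 = 1.0f / (1.0f + g * (g + k));
            a2 = g * a1;
            a3 = g * a2;
        }

        // Produces the lowpass / bandpass / highpass outputs for one input sample.
        void processSample (float x, float& low, float& band, float& high)
        {
            const float v3 = x - ic2eq;
            const float v1 = a1 * ic1eq + a2 * v3;
            const float v2 = ic2eq + a2 * ic1eq + a3 * v3;

            ic1eq = 2.0f * v1 - ic1eq;
            ic2eq = 2.0f * v2 - ic2eq;

            low  = v2;
            band = v1;
            high = x - k * v1 - v2;
        }
    };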

I talked about all these aspects in my last ADC talk as well :

Otherwise, I was referring to the new Oversampling class in the DSP module :slight_smile:

I’m only mentioning it because of this post:

EQ8 and the side-chain EQ of the Glue Compressor (and so the other compressor) are the only places I know for sure the algorithm is updated to the SVF I designed. The main reason Ableton got the update is that I wouldn’t allow a 32-bit float DF1 even on the side-chain input to the Glue Compressor device.

You can easily test this yourself. Generate an 18 kHz sine wave at -1 dBFS, and then high-pass filter it with a cutoff as low as possible, which is usually 30 Hz in Live since anything lower would be really horrible and possibly actually blow up instead of just being bad. Toggle processing and look at the spectrum of the results. The old 32-bit DF1 biquads add rumble at around -70 dB near DC. In devices that limit the lowest frequency to around 50 Hz this rumble only gets to around -90 dBFS. Any signal that is -100 dB below another signal that is playing is pretty much inaudible. As long as you keep the cutoff frequency of devices above 200 Hz at a sample rate of 44.1 kHz (i.e. 400 Hz at 88.2 kHz, 800 Hz at 192 kHz, etc.) then you should be fine, as the rumble generated is low enough to be below what you can hear. It is not always easy to tell the cutoff for devices that have a centre frequency and a width, like the delay filters, but the problem is still there.
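A rough, self-contained sketch of that test outside any DAW: a 32-bit float DF1 high-pass (standard RBJ cookbook coefficients) fed with an 18 kHz sine at -1 dBFS, with a Goertzel bin at 10 Hz to estimate the near-DC level. How much rumble actually shows up depends on the exact implementation being tested; this only illustrates the procedure.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double pi = 3.14159265358979323846;
        const double fs = 44100.0, cutoff = 30.0, q = 0.70710678;

        // RBJ cookbook high-pass coefficients, stored as 32-bit floats
        const double w0 = 2.0 * pi * cutoff / fs, alpha = std::sin (w0) / (2.0 * q), a0 = 1.0 + alpha;
        const float b0 = (float) ((1.0 + std::cos (w0)) / 2.0 / a0);
        const float b1 = (float) (-(1.0 + std::cos (w0)) / a0);
        const float b2 = b0;
        const float a1 = (float) (-2.0 * std::cos (w0) / a0);
        const float a2 = (float) ((1.0 - alpha) / a0);
        float x1 = 0, x2 = 0, y1 = 0, y2 = 0;                     // DF1 states (32-bit)

        // Goertzel bin at 10 Hz, accumulated in double precision
        const int    N  = 1 << 20;
        const double gc = 2.0 * std::cos (2.0 * pi * 10.0 / fs);
        double s1 = 0.0, s2 = 0.0;

        const float amp = (float) std::pow (10.0, -1.0 / 20.0);   // -1 dBFS

        for (int n = 0; n < N; ++n)
        {
            const float x = amp * (float) std::sin (2.0 * pi * 18000.0 * n / fs);
            const float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;   // DF1 tick
            x2 = x1; x1 = x; y2 = y1; y1 = y;

            const double s0 = (double) y + gc * s1 - s2;          // Goertzel update
            s2 = s1; s1 = s0;
        }

        const double mag = 2.0 * std::sqrt (s1 * s1 + s2 * s2 - gc * s1 * s2) / N;
        std::printf ("level near DC: %.1f dBFS\n", 20.0 * std::log10 (mag + 1.0e-30));
        return 0;
    }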

His DF1 vs SVF pdf shows plots generated in Mathematica which corroborate that LF rumble observation.

Well, that’s an interesting study, but preventing an “LF rumble” at -70 dB has never been the main point for me in using the TPT structure instead of DF1/TDF2. I think it would be complicated not to use the old structure for some specific processes where we need a lot of control over the filter coefficients, when the filters are not plain lowpass / bandpass / highpass filters.

In most cases, which is what I said in my ADC talk, using the class IIRFilter is fine imho. It’s only in very specific cases that the other approaches should be used instead (such as commercial EQs, mainly because of the behaviour around the Nyquist frequency but also the quantization issues + LF rumble and whatever, filters with audio-rate modulation, etc.)


What’s the advantage of DF1/2 over TPT?

Speed.
DF1/TDF2 are designed with a fixed frequency in mind. As such, the code can be dead simple.
For TPT/SVF, you take into account that the frequency changes, and you have to be more careful, ending up with a little more computation and a few more memory accesses.
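For comparison, a transposed direct form II biquad with fixed, pre-normalised coefficients (a0 == 1) reduces to this per sample (a sketch, not the JUCE code):

    struct TDF2Biquad
    {
        float b0, b1, b2, a1, a2;          // precomputed coefficients
        float s1 = 0.0f, s2 = 0.0f;        // the two state variables

        float processSample (float x)
        {
            const float y = b0 * x + s1;
            s1 = b1 * x - a1 * y + s2;
            s2 = b2 * x - a2 * y;
            return y;
        }
    };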

Oh ok - I’d not counted the number of memory accesses. The code looked like it was pretty simple. I’ll go count :wink:

Filter frequencies seem to nearly always be changing in my world…

Your questions have drifted a little from the original topic, so let’s continue there : [DSP module discussion] IIR::Filter and StateVariableFilter

I’d like to return to this topic. We use this very approach in a product, but where the timestamped parameter update queue is available via a separate API. I think it’s very important for the JUCE framework to find an elegant way of supporting this, with VST3’s sample-accurate automation of course being the primary use case.


Regarding parameter changes during a processBlock, I implement a very simple idea: chop the buffer into chunks of 16 or 32 samples and iterate over the chunks until the whole buffer is processed. Keep in mind that the buffer size might be different each time, so keep a rest value for the remaining samples (samplesLeftOver below).

void MonosynthPluginAudioProcessor::applyFilter (AudioBuffer<FloatType>& buffer, std::unique_ptr<LadderFilterBase> filter[])
{
    FloatType* channelDataLeft  = buffer.getWritePointer (0);
    FloatType* channelDataRight = buffer.getWritePointer (1);

    const int numSamples = buffer.getNumSamples();

    // Break the buffer into chunks of at most 16 samples
    int stepSize        = jmin (16, numSamples);
    int samplesLeftOver = numSamples;

    for (int step = 0; step < numSamples; step += stepSize)
    {
        // Update the parameters once per chunk
        FloatType combinedCutoff = currentCutoff + smoothing[0]->processSmooth (cutoff.getNextValue());

        if (combinedCutoff > CUTOFF_MAX) combinedCutoff = CUTOFF_MAX;
        if (combinedCutoff < CUTOFF_MIN) combinedCutoff = CUTOFF_MIN;

        for (int channel = 0; channel < 2; channel++)
        {
            // filter[channel]->SetSampleRate (sampleRate * oversamp->getOversamplingFactor());
            filter[channel]->SetResonance (resonance.getNextValue());
            filter[channel]->SetDrive     (drive.getNextValue());
        }

        // The last chunk may be shorter than the step size
        if (samplesLeftOver < stepSize)
            stepSize = samplesLeftOver;

        if (prevCutoff == combinedCutoff)
        {
            if (filter[0]->SetCutoff (combinedCutoff))
                filter[0]->Process (channelDataLeft, stepSize);

            if (filter[1]->SetCutoff (combinedCutoff))
                filter[1]->Process (channelDataRight, stepSize);
        }
        else
        {
            // Ramp the cutoff across the chunk to avoid zipper noise
            filter[0]->ProcessRamp (channelDataLeft,  stepSize, prevCutoff, combinedCutoff);
            filter[1]->ProcessRamp (channelDataRight, stepSize, prevCutoff, combinedCutoff);
        }

        prevCutoff = combinedCutoff;

        samplesLeftOver  -= stepSize;
        channelDataLeft  += stepSize;
        channelDataRight += stepSize;
    }
}

…and the Audio Unit extension’s sample-accurate automation being another. IMHO any professional audio plug-in should support sample-accurate automation.

For an overview (with diagrams) of why non-sample-accurate parameter updates are prone to serious artefacts, refer to this post…

Again, I’m not that sure that it’s the plug-in’s duty to handle this. I mean, if the DAW only changes the value of any automated parameter at audio-buffer rate, having your plug-in work internally on smaller chunks will make zero difference! I ran into this kind of issue a long time ago when thinking about the handling of MIDI events such as tempo changes, and unfortunately or fortunately, depending on the viewpoint, the plug-in developers can’t do anything there, so they shouldn’t worry about that issue.

However, if there is a change in the plug-in format APIs which provides additional control over this, and if the DAWs make use of it, then yes, we JUCE users should change a few things in the way we code plug-ins, and the JUCE team should provide compatible changes in the AudioProcessor class.

Another case is the specific one where a plug-in parameter is modulated by something happening inside the plug-in itself, such as an LFO modulating a delay line, a filter or an oscillator… In these cases, when audio-rate modulation is needed, the developer needs to provide a way to handle this, such as adding a buffer, reachable from the process function, containing the LFO values for every sample. This can be done with a standard process function taking the audio samples as an argument, or an extended one with an additional argument containing the modulation information.
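A sketch of that “extended process function” idea (all names here are invented for the example):

    // Alongside the audio, the caller provides a per-sample buffer of LFO values,
    // so the modulated parameter can be updated for every single sample.
    void ModulatedFilter::process (float* samples, const float* lfoValues, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            // Hypothetical helper mapping the LFO value to a cutoff for this sample
            setCutoff (baseCutoff * (1.0f + modDepth * lfoValues[i]));

            samples[i] = processSample (samples[i]);
        }
    }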