[DSP module discussion] Structure of audio plug-ins API

I’m the same way, but that’s more due to a lack of features than anything else. I’m just saying that having an extra tool to create and manage the bulky parameter objects, or whatever their future implementation looks like, or even just having a tool for creating/deleting/editing parameter boilerplate in the processor would be nice.

The way RackAFX handles it is pretty great, and makes prototyping much easier than doing it exclusively in JUCE.

I’d be interested to know what you are missing with the AudioProcessorValueTree(State) class.
I like it very much but had to extend it to fit my needs (to allow for custom parameter attachments and for saving user settings into a separate file).

Features like the ones you requested can be helpful for rapid prototyping. When it comes to optimization at a later stage, though, they can reveal themselves as heavy roadblocks. A very well-thought-out design is required to avoid such pitfalls, and sometimes it is even better not to have them in the first place, because they can lead you onto the wrong track in development.

1 Like

I don’t think it would be a good idea to use the MidiBuffer for audio plugin stuff.

When do you need this event queue? Whenever the host changes a(n automation) parameter, I think it should be reflected in the plugin as soon as possible, and if that’s not doable and it’s overwritten by a new value, then your plugin is just too slow for the task.

If I change some parameters very quickly, I most probably want that to show up directly in the plugin (not a second later when it’s got some spare time to empty its event queue…)

Not sure (2) can be done without changing the plugin API itself, and not sure it is a good idea to put that level of interaction inside the DSP loop. It’s hard enough as it is; there would be lots and lots of plugins that wouldn’t implement this properly.

Yeah… RackAFX is probably one of the worst frameworks ever written. The API it generates is everything you DON’T want to have in a DSP loop. Avoid it at all cost.

Hmm, what about FloatVectorOperations::enableFlushToZeroMode() and FloatVectorOperations::disableDenormalisedNumberSupport()?

What’s the preferred way of dealing with (de)normalisation? Or is JUCE_SNAP_TO_ZERO not about this?

Usually, I use the Projucer to create the project, and again every time I need to add a new source file or update some information like the version number. That way I can work on Windows and use the same Projucer file on macOS when I want to compile the project there.

Honestly, I have never done things like that yet. Our discussion showed me that we need to take special care of automation when the audio buffer size is high. I don’t know yet what the best approach for this would be, so I’ll read all the comments here with attention!

I didn’t know about this issue! Well, if there is something like that in a plug-in, for me it’s more about a bad design choice from the coder or the DSP engineer than something related to the API… It only means that the parameter changes are not handled properly to remove the change artefacts. You would need to filter some parameter controls, or to use classes such as LinearSmoothedValue or dsp::StateVariableFilter… By the way, the StateVariableFilter class is inspired by Vadim Zavalishin’s book, so I guess the latest versions of Reaktor don’t have this issue anymore :slight_smile:
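
To illustrate the kind of smoothing meant here, a minimal sketch (not from the original post) of de-zippering an automated gain with LinearSmoothedValue; the method names follow the current API (setTargetValue()/getNextValue(), older JUCE versions call the setter setValue()), and gainParam stands in for whatever raw parameter pointer your processor holds:

```cpp
juce::LinearSmoothedValue<float> gainSmoothed;

void prepareToPlay (double sampleRate, int /*samplesPerBlock*/)
{
    gainSmoothed.reset (sampleRate, 0.02);      // 20 ms ramp towards new targets
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    gainSmoothed.setTargetValue (*gainParam);   // gainParam: assumed pointer to the raw parameter value
    auto* data = buffer.getWritePointer (0);

    for (int i = 0; i < buffer.getNumSamples(); ++i)
        data[i] *= gainSmoothed.getNextValue(); // ramps sample by sample, so no steps or clicks
}
```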

I don’t remember exactly, and I guess that extending it might have solved some of my issues. What I do remember is that I love to create plug-ins with meta parameters for research purposes, which allow me to put some very different algorithms in the same plug-in, each one having its own set of parameters. And for this kind of project, with the set of parameters itself changing at run-time, it might be more convenient not to use the AudioProcessorValueTree class.

I think I have used that define ever since I saw it in the original JUCE IIRFilter class. It is obviously about denormalisation. As for FloatVectorOperations::enableFlushToZeroMode() and FloatVectorOperations::disableDenormalisedNumberSupport(), they work only on computers with SSE2 instruction support, so I have never yet tried to use them instead of JUCE_SNAP_TO_ZERO. Maybe it’s a bit of a paradox to ask users to set the C++14 flag if they want to use the DSP module while still thinking about pre-SSE2 computers, I don’t know…
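
For what it’s worth, a rough sketch of how those flush-to-zero helpers could be used at the top of the audio callback (on machines without SSE2 they simply have no effect, which is why the per-sample macro remains the portable fallback):

```cpp
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    // set the flush-to-zero / denormals-are-zero CPU flags for this (audio) thread
    juce::FloatVectorOperations::enableFlushToZeroMode (true);
    juce::FloatVectorOperations::disableDenormalisedNumberSupport();

    // ... the actual DSP work ...
}
```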

I don’t think that’s what was meant: I think that the reference to MidiBuffer was because the MIDI messages added there are timestamped, hence every time a processBlock() gets called, it can snoop into the MidiBuffer it received and know at which point each of those MIDI messages should be processed during the current audio buffer.
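
As a concrete illustration (era-dependent API; newer JUCE also offers range-based iteration over MidiMessageMetadata), that per-block snooping looks roughly like this:

```cpp
void handleMidi (juce::MidiBuffer& midiMessages)
{
    juce::MidiBuffer::Iterator it (midiMessages);
    juce::MidiMessage msg;
    int samplePos = 0;

    while (it.getNextEvent (msg, samplePos))
    {
        // samplePos is the offset inside the current buffer: render audio up to
        // samplePos, apply the message, then continue with the rest of the block.
    }
}
```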

Doing so for automation would probably mean having some sort of similar array, containing tuples with this information: (parameter, value, timestamp), so that the correct value of each changed parameter can be applied “sample-accurately” during each processBlock(), and not only between calls to it.
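
Purely hypothetically, such an array could look something like this (all names invented here):

```cpp
#include <vector>

struct ParameterChange
{
    int   parameterIndex;   // which parameter
    float newValue;         // its new (normalised) value
    int   sampleOffset;     // timestamp relative to the start of the block
};

std::vector<ParameterChange> pendingChanges;   // filled before processBlock(),
                                               // consumed sample-accurately inside it
```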

If that’s of any help: if I remember correctly, VST3 has a different approach for parameter changes during processing: it assumes that parameters change linearly between an initial and a final value around each processing callback.
With that in place, having parameters that only change between two subsequent callbacks is easily implemented by giving automated parameters equal initial and final values for each callback.
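
A sketch of what that per-block linear ramp amounts to for a simple gain (names invented; juce::AudioBuffer::applyGainRamp() does essentially the same thing for this particular case):

```cpp
void applyRampedGain (juce::AudioBuffer<float>& buffer, float startGain, float endGain)
{
    const int numSamples = buffer.getNumSamples();

    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        auto* data = buffer.getWritePointer (ch);

        for (int i = 0; i < numSamples; ++i)
        {
            const float t = (float) i / (float) numSamples;       // 0 .. just below 1
            data[i] *= startGain + t * (endGain - startGain);     // linear interpolation
        }
    }
}
```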

2 Likes

Maybe I was expressing myself in an overcomplicated way. Here is an example: imagine your plugin consists of a synth and an added reverb effect.
All I’m saying is that I would let the synth and the reverb each listen to their own parameters instead of having the main processor do it.
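
A rough sketch of that layout (class name and parameter IDs invented here), assuming the parameters live in an AudioProcessorValueTreeState owned by the main processor:

```cpp
// assumes the usual JUCE headers are available (e.g. #include <JuceHeader.h>)
class ReverbSection : private juce::AudioProcessorValueTreeState::Listener
{
public:
    explicit ReverbSection (juce::AudioProcessorValueTreeState& state) : apvts (state)
    {
        apvts.addParameterListener ("reverbSize", this);
        apvts.addParameterListener ("reverbDamping", this);
    }

    ~ReverbSection() override
    {
        apvts.removeParameterListener ("reverbSize", this);
        apvts.removeParameterListener ("reverbDamping", this);
    }

private:
    void parameterChanged (const juce::String& parameterID, float newValue) override
    {
        // update only the reverb's own DSP state here; the synth has its own listener
    }

    juce::AudioProcessorValueTreeState& apvts;
};
```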

Sure! That’s why I said the old Reaktor days (of version 3), which was before Vadim joined NI. He is a great teacher, by the way, and one of my heroes.

VST3 also allows events to occur in the middle of a sample buffer. That’s why they are delivered by the host as a queue of time-stamped events (Vst::IParamValueQueue).
Here is the image from the VST3 docs that explains it:

[image: automation]
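
For reference, reading such a queue on the VST3 side looks roughly like this (a simplified sketch against the SDK interfaces mentioned above, not production code):

```cpp
#include "pluginterfaces/vst/ivstparameterchanges.h"

using namespace Steinberg;

void readParameterChanges (Vst::IParameterChanges* changes)
{
    if (changes == nullptr)
        return;

    for (int32 i = 0; i < changes->getParameterCount(); ++i)
    {
        if (auto* queue = changes->getParameterData (i))
        {
            const Vst::ParamID id = queue->getParameterId();

            for (int32 p = 0; p < queue->getPointCount(); ++p)
            {
                int32 sampleOffset = 0;
                Vst::ParamValue value = 0.0;

                if (queue->getPoint (p, sampleOffset, value) == kResultOk)
                {
                    // (id, value) should take effect at sampleOffset within this block
                }
            }
        }
    }
}
```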

4 Likes

I think there’s a good point in snapping to zero regardless of any (de)normalization. With snapping, you zero out any sound below 1.0e-8f (-160 dB?) and possibly save CPU cycles by cutting the tail when it’s not audible anyway. Quite possibly you could snap at a higher value as well without hearing any artifacts…

That won’t save any CPU cycles if your processor does not switch into an idle mode at the same time. That of course also requires some effort and needs to be implemented somehow.
Also, you need to be aware that you will have to employ the denormalization routine within each and every feedback iteration that converges to zero. Preventing the CPU from switching into denormal mode by setting a flag is a much cheaper and cleaner way to solve the problem.

That said, I would still advocate using a denormalization macro anyway. It can easily be redefined to do nothing when it is not needed in a particular build.
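
Something along these lines, for instance (macro names invented; the threshold follows the JUCE_SNAP_TO_ZERO pattern):

```cpp
#ifdef MYPLUGIN_DISABLE_SNAP_TO_ZERO
 #define MYPLUGIN_SNAP_TO_ZERO(n)                                           // compiled out in builds that don't need it
#else
 #define MYPLUGIN_SNAP_TO_ZERO(n)  if (! (n < -1.0e-8f || n > 1.0e-8f)) n = 0;
#endif
```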

Yes, you certainly have to have a use case which uses less CPU when silent than when …not

My case is a multi-voice synth. In release mode each voice decays exponentially towards zero, which it (theoretically) will never reach. If I snap to zero I can end the release phase at an arbitrary point, even before the level has reached the denormalization swamp, at e.g. -80, -90, -100 dB or whatever feels appropriate. After the release phase the voice is idle and doesn’t draw any CPU cycles.
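
In sketch form (all names invented), the idea is simply to clamp the release envelope and flag the voice as idle once it drops below the chosen threshold:

```cpp
struct Voice
{
    bool  active = false;
    float envelope = 0.0f;
    float releaseCoeff = 0.999f;        // exponential decay per sample

    float getNextReleaseSample()
    {
        envelope *= releaseCoeff;

        if (envelope < 1.0e-4f)         // ~ -80 dB, well above the denormal range
        {
            envelope = 0.0f;
            active = false;             // voice no longer rendered -> no CPU cost
        }

        return envelope;
    }
};
```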

If you have a look at the develop branch, you’ll understand why I created this topic, and why I created all these high-order lowpass filter design classes :slight_smile:

1 Like

Are your filter classes DF1 biquad 32-bit, or are they SVF?


Obviously, IIRFilter is not SVF; that would be… StateVariableFilter.
Then, looking at https://github.com/WeAreROLI/JUCE/blob/master/modules/juce_dsp/processors/juce_IIRFilter_Impl.h, I would say TDF2.
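
For anyone wondering what TDF2 means concretely, a transposed direct form II biquad step looks roughly like this (generic coefficient and state names, not the juce_dsp code itself):

```cpp
struct BiquadTDF2
{
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;  // normalised coefficients (a0 == 1)
    float s1 = 0.0f, s2 = 0.0f;                                   // two state variables

    float processSample (float x) noexcept
    {
        const float y = b0 * x + s1;
        s1 = b1 * x - a1 * y + s2;
        s2 = b2 * x - a2 * y;
        return y;
    }
};
```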