FR: Add Sample-Accurate Parameters

This is something that is currently missing and has been requested many times.

Implementation should be totally additive and non-breaking, and should ideally be as close as possible to the way MIDI messages are handled with a list of events and sample positions.

Example implementation:

void processBlock (AudioBuffer<float>&, MidiBuffer&) override
{
    // getParameterEvents() is guaranteed to be updated
    // for the current block, similar to getPlayHead();
    for (auto& e : getParameterEvents())
        handleParamChange (e.param, e.value, e.samplePos);
}

For older formats like VST2, the list should have all the parameters at sample #0.

The current parameter behavior (param.getValue() etc.) should remain exactly the same to ensure compatibility with UI/Listeners/APVTS/etc.
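To make the ask concrete, here is a minimal sketch of what such an event list could look like. ParameterEvent and ParameterEventList are hypothetical names for illustration, not existing JUCE API:

```cpp
#include <vector>

// Hypothetical event type: one timestamped, normalised value change.
struct ParameterEvent
{
    int   param;      // parameter index
    float value;      // normalised 0..1 value
    int   samplePos;  // offset within the current block
};

// Hypothetical per-block list, mirroring how MidiBuffer is iterated.
class ParameterEventList
{
public:
    void add (int param, float value, int samplePos)
    {
        events.push_back ({ param, value, samplePos });
    }

    const std::vector<ParameterEvent>& getEvents() const { return events; }

private:
    std::vector<ParameterEvent> events;
};
```

The wrappers would fill one such list per block, and processBlock() would iterate it the same way it iterates a MidiBuffer.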

See (one of many) older threads about the implementation:

+1

it would be even better if we could also get helper classes, similar to juce::SmoothedValue, to generate interpolated envelopes between the sample accurate parameter changes.

2 Likes

Careful.
Sample accurate automation: yes please, but not like it was suggested here. Imagine any continuous automation, any ramp or curve. The suggested interface would lead to terrible performance and memory access patterns.

Take a look at how it works in VST3. As far as I know, the actual curve segment data is provided to the plugin and can be accessed and evaluated (or not) by the plugin. I don’t know how other plugin formats handle sample accurate automation, but if there are differences in the specification and features of curve segments, it could be problematic or impossible to provide a common interface for all of them in JUCE. The best option in that case would be to expose the format specific data, or some other sort of compromise.

4 Likes

Great point. I believe those can still be supplied by the common interface.
For example:

getParameterEvents().getParamValueAt(samplePos);

Which would internally interpolate if needed.

Or alternatively, have the VST3 wrapper write a bunch of parameter change messages into the array that represents the curve if one is available.

I’m not sure how it works now, but I’m guessing the JUCE code already does something similar, so that at least the single value at sample pos #0 is correct and represents the curve?

2 Likes

I’m really not sure why that would be the case, can you please explain?

Currently your plugin is able to receive similar events via MIDI, and since they’re passed as an array (MidiBuffer) I don’t see why the performance would suffer.

You absolutely don’t have to split your blocks into smaller sizes when a sample accurate automation arrives or anything like that.

Just like you don’t have to do it with MIDI, you can choose to interpret those messages (or have JUCE read them for you) either as a vector of interpolatable events/points, or flattened into some array of floats.

I’m fairly sure that’s what VST3 expects you to do right now: you certainly don’t have to implement `juce::Synthesiser` and split the block for every possible parameter message, but instead parse the events as given by the host into a format that makes sense for your processBlock() code to read without taking a performance hit.

1 Like

Hi!
In case of “one MIDI event per sample” it would be just as bad, that is correct. But, usually MIDI data is not that dense. You can’t even transport 48000 events per second using a HW MIDI cable :wink: In case of ramps or similar, DAWs “thin out” the MIDI events that are generated from that.
Sample accurate automation, at least in the case of VST3, supports being sample accurate for ramps and curves, which is a good thing. For this to work efficiently, you need to transfer only the data that matters, and only evaluate it where it matters, if it matters.

This is not comparable to how MIDI works, it needs to be treated and handled in a different way, which is why I chimed in here. It has nothing to do with splitting processing blocks which came up as a sketchy workaround to increase parameter resolution due to lack of a better alternative.

So, if you get a bunch of curve segments, that is an efficient way to transport the data. It’s as precise and accurate as you want, and it works great for curves and ramps, which are continuous (the value changes each sample). Which is probably why Steinberg designed it this way.

Quote from the VST docs:

Performing sample-accurate automation requires even more overhead, because the DSP value must be calculated for each single sample. While this cannot be avoided entirely, it is the choice of the plug-in implementation how much processing time to spend on automation accuracy. The host always transmits value changes in a way that allows a sample-accurate reconstruction of the underlying automation curve.

Read more here: Parameters and Automation - VST 3 Developer Portal

I don’t have a good suggestion for a sensible abstraction in JUCE, but calling the for-loop you suggested for 50 parameters on each sample is too costly: in case of continuous (ramped or curved) automation, JUCE would somehow need to evaluate all active automation curves, produce a “parameter changed” event per sample, fill lots of large buffers, and so on.
I’d prefer to get access to the curve segments instead and then decide what to do with it, without producing such overhead.

JUCE already evaluates everything at sample 0 in the audio buffer (which replicates VST2 behavior). Having the option to evaluate more often, and getting a list of the points in the buffer where the curve changes, could be a way forward, plus a helper for evaluating the curves on JUCE’s side.
I can’t specify the whole thing spontaneously in a comment, it seems deep and complex and full of special cases.

tl;dr: just saying, this needs more due diligence; looking at the solutions for sample accurate automation in the various plugin formats should come first.

I might also be wrong: if the assumption holds that everything between automation points is ONLY AND ALWAYS ramped (no parametric curves; steps created by two events on the same sample), it would work to just get a set of events, similar to what you suggested, but I’m pretty certain that there IS support for parametric curves. :slight_smile:

1 Like

Based on the VST3 layout I proposed this interface once:

This would be backwards compatible and would allow opting in to the control points inside the process block.
By default it doesn’t create any overhead, because instead of evaluating the curve across the whole block, you can simply get the value at sample = 0.

The argument in the original thread, that people with zillions of parameters preferred a callback structure that would only fire if a value has actually changed, is IMHO a completely different setup; if that comes up again, I think those people ought to write a new framework (rant over).

2 Likes

Ha, it has been discussed before.
I read the VST3 docs a little more just now: I’m in favor of the queue-like structure that gives you the relevant control points for your buffer too. I’ve also just read that VST3 specifically constrains curves to “piecewise linear”, which makes things simpler (so: no parametric curves, apparently). See here:
https://steinbergmedia.github.io/vst3_doc/vstinterfaces/classSteinberg_1_1Vst_1_1IParamValueQueue.html

If this works in a similar way in all other plugin formats, then I don’t see why not to add it to JUCE in this fashion.

All my suggestion says is you should get a list of timestamped events in each block that represent the values that came from the host in getParameterEvents().

Those fields should have exactly the values they have in VST3, perhaps with some helper functions to interpolate them, but also an interface that lets you split the block for specific parameters, create a modulation buffer, etc.

Yes. My suggestion is the data structure would be as close as possible to being identical to the VST3 structure.

The main thing I wanted to point out in my suggestion, is that the parameter system, parameter listeners, APVTS, etc, shouldn’t be redesigned and the change should be additive and opt-in.

4 Likes

Ah okay, I misunderstood your proposal then. Maybe make it clearer that “piecewise linear / ramped segments” is the underlying assumption.

With some fleshing out of the details I’m all in favor then :slight_smile:

How does it work with AU? What about AAX?

+1 on opt-in sample accurate automation of parameters

it wouldn’t be like that. i think it would be more like this: you start processBlock() by getting an iterator over your parameter values, then you start your sub-block processing (32 or 64 samples, whatever your optimization needs). you iterate the parameter values until they’re no longer within the current sample range, start your sub-block processing with those values, and repeat for the next values until you reach the end of the parameter buffer, then process the last sub-blocks, and processBlock() ends. that’s how i see this approach being used realistically rn.
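The sub-block pattern described above, sketched without any JUCE types (ParamChange and splitIntoSubBlocks are made-up names; real code would run the DSP for each sub-block after applying the new values):

```cpp
#include <utility>
#include <vector>

struct ParamChange { int samplePos; int param; float value; };

// Split a block at each parameter change point (events assumed sorted by
// samplePos). Returns the resulting (start, length) sub-blocks.
std::vector<std::pair<int, int>> splitIntoSubBlocks (int numSamples,
                                                     const std::vector<ParamChange>& events)
{
    std::vector<std::pair<int, int>> subBlocks;
    int pos = 0;

    for (const auto& e : events)
    {
        if (e.samplePos > pos)
        {
            subBlocks.push_back ({ pos, e.samplePos - pos });
            pos = e.samplePos;
        }
        // apply e.param / e.value to the DSP state here
    }

    if (pos < numSamples)
        subBlocks.push_back ({ pos, numSamples - pos });

    return subBlocks;
}
```

With no events in the block this degenerates to one full-size sub-block, so the common case costs almost nothing.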

we will likely still have to implement parameter smoothing, simply because different DAWs might have different amounts of accuracy in how they transmit automation data to plugins, or simply because users make edgy automation and expect it to still sound cool. but at least we could make the smoothing much tighter for better transitional effects, and we could add some more fancy quirks, like parameters that update on exact tempo-synced beats of the project for more of a step-sequencer kind of effect

Sample accurate automation has nothing to do with smoothing.

Smoothing is something you need to do internally to make your parameter changes sound good. The host doesn’t know about what your parameters are doing and which of them needs to be smoothed.

Sample accurate parameters are to:

  1. Be able to make a song sound the same in a buffer of 64 and 1024 samples (which it currently won’t).

  2. Allow using LFOs and other kind of modulators.

  3. Get precise timing of messages where the plugin will respond differently before and after the change was received.

For example: a patch selector on a synth, a pattern selector on a sequencer, anything that, if missed even by one sample, will trigger the wrong behavior.

4 Likes

Sample accurate automation is super critical for an example case where you have a bypass button that automates to non-bypassed at the start of a song section. Without sample accurate automation, that bypass switch will flip either early or late, which would sound weird, especially at large buffer sizes.

3 Likes

Exactly. The first time I ran into this problem was with a sampler plugin that had a parameter to change the sample being played.

The bug report I got from users was “I automated the button to be pressed before the midi note in my session, yet it was only processing after the MIDI note”.

What I ended up doing for years after that was to change anything with strict timing requirements to use MIDI.

1 Like

yeah i was thinking more practically. imo sample accuracy is also about being able to smooth less aggressively: you don’t need to make it sound good on really large buffers anymore, you can just decide yourself how it should act, you know

That’s not why you’re smoothing though.

You’re smoothing because the user’s handle on the controls (whether via a MIDI controller, the UI knob, the host’s automation lane, an OSC remote message, etc.) might move too fast or have ‘jumps’ too large for your DSP component to sound good during the transition.

Since the user’s hands on the controls aren’t gonna be any more sensitive with sample accurate parameters, you’ll have to smooth exactly the same way.

Somewhat related to smoothing: Along with sample accurate automation, it would be good to have another API call in JUCE to query whether the current host/plugin format actually provides sample accurate automation data.

One of the nice things of getting sample accurate automation support is that a plugin can actually know if a parameter is ramping or switching. Without it, a plugin may want to fall back to a compromise scheme (I use some sort of exponential smoothing here and there). Depends on the type of parameter and what it controls of course.
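For illustration, the kind of one-pole exponential smoother mentioned here could look like this. It is a sketch in the spirit of juce::SmoothedValue, not actual JUCE API, and the coefficient would be tuned per parameter:

```cpp
// One-pole exponential smoother: each call moves the current value a fixed
// fraction of the remaining distance towards the target.
class OnePoleSmoother
{
public:
    void setCoefficient (float c) { coeff = c; }        // 0..1, closer to 1 = slower
    void setTarget (float t)      { target = t; }
    void snapTo (float v)         { current = target = v; }

    float getNext()
    {
        current = target + coeff * (current - target);
        return current;
    }

private:
    float coeff   = 0.99f;
    float target  = 0.0f;
    float current = 0.0f;
};
```

A plugin that knows (via sample accurate data) that a parameter is ramping rather than jumping could snap instead of smoothing, which is exactly the distinction described above.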

1 Like

I think you’re gonna have to use the existing parameter system (not sample accurate) as the fallback you mention. This is for things like the UI/preset loading and saving/etc when your state has changed but no sample accurate events have ever happened.

In processBlock(), you’re gonna need to read the first value in each process block anyway, and then use the sample accurate changes to see if something else happened during the block.

As said before, you’re gonna have to use your exponential smoothing anyway, because the UI is not gonna be sample accurate, and even the user’s sample-accurate automation/MIDI controls will likely not be smooth in a pleasant way.

The only use cases where you can avoid smoothing completely is in something like a modular synthesizer where you can place the responsibility to smooth in the user’s hand who’s expecting that, or if the user really knows that they have to curve all their automation movements in a nice way (and never touch the knobs).

So in theory, you could redesign your plugins so that there’s some interface that asks the user whether they want to control each parameter with a knob on the UI or with their own smoothing system. I suspect that will be very difficult on the plugin dev’s side, because now isSmoothing(parameter) is yet another piece of state you have to manage, have windows to edit, etc.


Are you thinking of isDiscrete()?
In a proper host, this determines whether values are ramped or jump to the new value at the next change.

Weirdly it is missing from the RangedAudioParameterAttributes, so you have to inherit instead of just setting it in the Attributes on creation.

But this is leading really far off topic now