Getting specific callbacks for each parameter change?

What do these lambdas do?

It… depends. On how many parameters you have, how many derived parameters you have, and how complicated their computation is. For just a few parameters with no derived ones, using getRawParameterValue directly is the most reasonable choice. You should still load them into locals only once per callback, to avoid the atomics stopping inner-loop optimizations. At a certain point, if you have too many parameters, too many derived ones, or their computation is so costly that doing it again on every callback is too much, you'll need to cache your values in locations only used by processBlock and update them from parameterChanged through a FIFO queue.
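As a rough illustration of that direct approach, here's a minimal sketch, assuming a single parameter with the hypothetical ID "gain" and an APVTS member called apvts declared earlier in the processor:

// cached once (apvts must be declared before this member);
// getRawParameterValue does a string lookup, so keep it out of processBlock
std::atomic<float>* gainParam { apvts.getRawParameterValue ("gain") };

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    // load the atomic into a local once per callback, so the inner loop
    // works on plain floats
    const float gain = gainParam->load();
    buffer.applyGain (gain);
}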

For each parameter in my plugin, there is a function updateParameter(). Currently, the GUI elements' lambdas are linked to the appropriate updateParameter() function, and my goal was to try and get parameter-specific callbacks so that I could forward those callbacks to just these individual updateParameter() functions.

So it sounds like if I’m going to go the parameter listener + individual callback route, I’ll have to implement a FIFO, instead of just forwarding these callbacks to the processor’s updateParameter() functions where the new value is stored atomically?

If you’re using APVTS, having lambdas to update parameters from the UI to processBlock is not a good idea. UI elements update on the message thread, and automation comes from any thread. If you receive automation through the UI, you’ll get it late. Also, you may not have a UI at all (for example, when the editor is closed). You can’t count on the UI to do anything for the audio process: solve the audio process as UI-less, and plug the UI into it. In any case, you can’t update from one thread to the other without synchronization (at the very least, atomics).

There’s only one parameterChanged callback. You register for each parameter individually (probably with a loop in the processor constructor), but they all go to the same callback, which gets a parameter ID and a value. If synchronization weren’t needed, you could call your individual updateParameter() functions from there, or do everything there. But you do need synchronization.
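Registration itself is just a loop. A minimal sketch, assuming the processor implements APVTS::Listener and the IDs "gain", "freq" and "mix" are hypothetical:

// in the processor constructor, after apvts has been set up
// (remember to call removeParameterListener for each ID in the destructor)
for (const auto* paramID : { "gain", "freq", "mix" })
    apvts.addParameterListener (paramID, this);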

If you go the fifo queue route, you need an MPSC one (because parameterChanged can be called from more than one thread). You need a key-value type like

struct Message { IdType id; float value; };

Then as a member of your processor

FifoType<Message> messages;

In parameterChanged

messages.push ({ id, value });

Then once per call in processBlock

Message msg;
while (messages.pop (msg))
{
    if      (msg.id == param1id) { /* update your audio-thread values here */ }
    else if (msg.id == param2id) { /* ... */ }
    // ...etc
}

You can use juce::Identifier as IdType. Ideally, param1id, param2id… should be juce::Identifier constants stored somewhere, to avoid constructing Identifiers repeatedly, which is expensive. Another option (which I use) is to map (with std::map) String IDs to an enum, then use the enum as IdType. This lets you replace the if-else-if chain with a switch.
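A rough sketch of that mapping, assuming the Message/FifoType setup above with IdType replaced by an enum, and hypothetical parameter IDs "gain" and "freq":

enum class ParamID { gain, freq };

// built once, e.g. in the constructor; maps APVTS string IDs to enum values
const std::map<juce::String, ParamID> idMap { { "gain", ParamID::gain },
                                              { "freq", ParamID::freq } };

void parameterChanged (const juce::String& id, float value) override
{
    // the string-to-enum lookup happens once per change, in the listener callback
    messages.push ({ idMap.at (id), value });
}

// then in processBlock
Message msg;
while (messages.pop (msg))
{
    switch (msg.id)
    {
        case ParamID::gain: /* update audio-thread gain */   break;
        case ParamID::freq: /* update audio-thread cutoff */ break;
    }
}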

(edit) To clarify: if you’re using parameter values directly, there’s no advantage in having another set of atomics updated from listener callbacks; you’re just replicating APVTS. In that case, just use getRawParameterValue. If you need to adapt parameter values for the process and it’s too costly to do it again for every block, that’s where you need a queue. For example, I have a set of frequencies from which I need to compute the integral of the transfer function of a whole filter graph, as a sum over 16384 points. Obviously I can’t do that again for every block :grin:

3 Likes

Right, the editor → processor communication was meant to be so that the processor receives changes from the GUI immediately. That was never supposed to be the main or primary channel of communication for parameter changes.

I see that… Unless I make individual listener objects for each parameter, each of which can then just call myAudioProcessor::updateJustThisOneParameter() in its parameterChanged() callback instead of needing a giant if-statement to check the string ref…

And obviously, if I do go for the parameter listener callback method, I’d remove the GUI lambdas, there’d be no need for them anymore…

So far I just have all my actual parameter values stored in atomics at the audio engine level. A FIFO queue seems a bit overboard for my use case…? But maybe that really is what I need…?

This seems quite inefficient to me? It seems like this approach replaces “just update all parameter values once per processBlock” with “a FIFO queue to avoid unnecessary parameter updates, but any time there is a parameter update, you have to go through a switch/if with 50+ branches to find which parameter it is”. I would be fairly surprised if this wasn’t slow with a lot of parameters in your plugin?

Yeah, I’m getting the sense that trying to optimize-out needing to just check parameter values once per callback will end up adding much more complexity and overhead than it manages to eliminate…

It’s ok for things that are not included in your parameter list, but for parameters, that’s what APVTS is for.

If you don’t need to synchronize the changes of different parameters between them, probably it is.

That’s why I use an enum: to replace that chain with a switch, which is way faster. To make the comparison concrete: when you call getRawParameterValue, there’s a search by ID in a std::map. I move that search to parameterChanged, store an enum (int) id, then use a switch in the audio thread.

I guess that’s why there’s not a solution for this in the framework: because, as I said before, “it depends” :sweat_smile:.

Yup, it’s a bit awkward. This thread from a couple years back tackled the same issue, might be worth a look:

1 Like

Daniel’s class works well as an attachment, but its lambdas are not safe for updating values for processBlock: they’re either synchronous with parameterValueChanged, or asynchronous (on the message thread). The thread-safe part is the atomic you get with getValue(), but you can do the same with getRawParameterValue(), keeping the pointer as a member for each parameter. In any case, if you have to keep related parameters consistent, you need a different solution.

Except, the search that getRawParameterValue does only has to happen once (e.g. in the constructor), so that’s not a direct comparison in terms of speed.

True. I was just pointing out that a map search is what APVTS does to recall parameters by ID; that’s why you’d usually, as you say, keep the pointer to avoid the search in processBlock. I described the queue-based solution in case there’s a need to keep related parameters consistent. For that case, mapping to an enum lets you use a switch in the audio thread. If you don’t need to keep related parameters consistent, and don’t have some costly computation of derived values, you don’t need listeners at all; APVTS already gives you atomic access.

Yes, that all makes sense. And I agree if you’re going to use a single listener callback for all the parameters, that using enums and a switch is the fastest way.

It also occurs to me that if you’re using parameter smoothing in processBlock, then you don’t need listeners either. You could just call smoothedValue.setTargetValue(*myAtomicPointer) at the top of processBlock for each parameter. And then the output of the smoother would already be more “fine-grained” than whatever a parameter listener could tell you. You’d have to do your own comparisons on the output of the smoother anyway, to see if it had changed, before recalculating filter coefficients or whatever other expensive DSP stuff you might then do… so it seems the parameter listener model would win you nothing in that case.
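A minimal sketch of that idea, assuming a cached atomic pointer gainParam from getRawParameterValue and a SmoothedValue member that was reset in prepareToPlay:

juce::SmoothedValue<float> smoothedGain;   // call reset (sampleRate, rampSeconds) in prepareToPlay

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    // read the current parameter value once and retarget the smoother
    smoothedGain.setTargetValue (gainParam->load());

    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        const float gain = smoothedGain.getNextValue();
        // apply per sample, or compare against the previous value
        // before recomputing anything expensive
        for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
            buffer.getWritePointer (ch)[i] *= gain;
    }
}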

1 Like

Yup, if you’re smoothing all parameters, you’re doomed to update them in every callback. I use smoothers for just a few things (gains, basically) because of this. Summing up (I got a bit lost myself): if you’re using parameters directly, you don’t need listeners. If you’re deriving values, but each derivation depends on a single parameter, you probably need listeners, and it’s probably better to have one for each parameter. For example, computing filter coefficients for a frequency: too costly to do on every callback, but derived from a single parameter. If you’re deriving values from more than one parameter, you probably need a queue. For example, an offset that’s added to a set of values: each final value combines two parameters, so you need to synchronize their updates.

There are a lot of questions about this stuff pretty much every week, and many different answers; it arguably depends a lot on the use case, and JUCE doesn’t offer much beyond APVTS. I settled mostly on queues after watching Dave and Fabian’s talk. Before that, I tried something like this, which gave me a lot of headaches.

1 Like

That seems like a handy summation of approaches to the problem, hopefully that’s a help to others trying to sort it all out.

One thing about the fifo queue route - I get that the point is to capture all the parameter changes from various threads (e.g. message thread and audio thread), and then essentially “replay” those changes in order, all on the audio thread (because you iterate through the queue in processBlock). That seems good for thread safety, provided your queue is thread safe, and good for realtime (audio), provided your queue is non-locking.

However, it doesn’t necessarily seem good for efficiency. Following the queue example you described above, if you have 10 Messages in a queue, all for IdType filterFrequency, there’s really no point in blasting through 10 of those at the top of processBlock to recalculate your filter coefficients 10 times - you’d just want to take the last filterFrequency and calculate for that. So then to work around that, you could add a more complex while loop that scans through the queue and only grabs the last value for each IdType, stores those temporarily, and only then passes those along to DSP objects.
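Roughly like this hypothetical coalescing pass (assuming the MessageID/Message/queue setup from earlier, plus a hypothetical constexpr numParams count and an applyParameter helper):

// drain the queue, keep only the most recent value per id,
// then apply each surviving value once
std::array<std::optional<float>, numParams> latest;   // indexed by MessageID

Message msg;
while (messages.pop (msg))
    latest[(size_t) msg.id] = msg.value;

for (size_t i = 0; i < latest.size(); ++i)
    if (latest[i].has_value())
        applyParameter ((MessageID) i, *latest[i]);   // hypothetical helper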

At which point, you might as well have just used the value from getRawParameterValue, if all you want is the “most current” available value.

Ah, but you might say, the fifo queue’s real benefit is when “you’re deriving values from more than one parameter” and “Each final value combines two parameters, so you need to synchronize their updates.” But I’m trying to think of an example where having the “most current” value for two parameters, before you dive into the rest of the processBlock, isn’t sufficient synchronizing. And I can’t think of one. But maybe I’m overlooking something.

It would be a different story if we had sample-accurate automation data, because then it would matter the order that two different values changed in - because then some actual audio processing could take place between individual parameterChanged callbacks. But as it stands, when we just have to grab whatever current parameter values we can before diving into the processBlock, I don’t see the advantage.

1 Like

You made me rethink this… it’s actually not a great idea to have a single parameterChanged callback, even for a queue implementation. I don’t need a map if there are per-parameter listeners. What I have now is a Messenger that takes a parameter change and pushes it to the queue, individually. This would be the outline:

class AudioProcessor : public juce::AudioProcessor
{
    APVTS apvts;
    enum class MessageID { id0 /* etc */ };
    struct Message { MessageID id; float value; };
    QueueType<Message> messages;

    struct Messenger : APVTS::Listener
    {
        AudioProcessor& ap;
        const MessageID id;

        Messenger (AudioProcessor& ap, MessageID id) : ap{ ap }, id{ id } {}

        void parameterChanged (const juce::String&, float value) override
        {
            ap.messages.push ({ id, value });
        }
    };

    // deque rather than vector, so element addresses stay stable after the
    // pointers have been handed to addParameterListener
    std::deque<Messenger> messengers;

    void addMessenger (const juce::String& stringId, MessageID id)
    {
        auto& messenger{ messengers.emplace_back (*this, id) };
        messenger.parameterChanged (stringId, *apvts.getRawParameterValue (stringId)); // push the initial value, if needed
        apvts.addParameterListener (stringId, &messenger);
    }

    AudioProcessor() : apvts { /* ... */ }
    {
        addMessenger ("id0", MessageID::id0); // etc
    }

    void processBlock (juce::AudioBuffer<float>&, juce::MidiBuffer&) override
    {
        Message msg;

        while (messages.pop (msg)) switch (msg.id)
        {
        case MessageID::id0: /* update audio-thread values for id0 */ break;
        // etc
        }
    }
};
1 Like

Let me give you a real-life example. It’s a multiband processor. You have 8 crossovers and 9 bands. Each crossover has a frequency, a type, and an on/off switch. For each crossover you compute a half omega and a set of filter coefficients. The frequencies can’t overlap; they’re bounded by their neighbours. Each band has a threshold (and other stuff). There’s a global threshold that’s added to all thresholds. All of these are taken in dB and converted to bits and linear. The global threshold can be manual or automatic. The automatic one is computed from values derived from a global ratio, range and knee. There’s also an offset for each threshold. These are computed by integrating in pitch (as a discrete sum over 16384 points) the outputs of the transfer function cascade of all crossovers. So, just for this part, you have a dense web involving 8 frequencies, 8 filter types, 8 switches, 9 thresholds, a global threshold, range and knee, and a global threshold mode.

When do I compute all this? I can’t do it on every callback. I have to do it on parameter changes. Let’s assume everything is atomic. A frequency changes. I bound it to its neighbours. I compute its half omega and its filter coefficients. I compute the transfer function cascade of all crossovers. I add the offsets plus global threshold to each threshold. Nothing else changed apart from this frequency.

…or did it? Say while I was doing this, a threshold changed. I convert it to bits and linear, add its offset and the global threshold. Which result wins? The one that ended last. But it may well be the one that started first. I changed a frequency, then a threshold, then I wrote the values for the threshold change, then the same values for the frequency change. When the frequency change happened, I had an old threshold value.

I was also worried about multiple updates. That’s why I tried the shared-object approach: a single object with all parameter values, which is pointer-swapped at the start of processBlock. It also needs to mark which parameters changed, or I’m back to recomputing everything on each callback. It’s a mess. Most of the time there are no parameter changes at all, but I have to check the whole object anyway. There’s another option: a shared object, also pointer-swapped, that includes only the parameters that changed. A mix of both approaches. It’s also very complicated.

For this plugin, the switch does have some complexity, but only for the most costly cases that involve many parameters. Offsets are computed at the end of the loop if there was any frequency change. The automatic global threshold too, and some global timing parameters that involve a LambertW approximation. If there are no messages, nothing is done at all. I tested with all parameters automating simultaneously, got ~10% overhead, and declared myself happy :sweat_smile:
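That deferral might look something like this rough sketch (the crossover ID, updateCrossover and recomputeOffsets are hypothetical names, not taken from the actual plugin):

bool anyFrequencyChanged = false;
Message msg;

while (messages.pop (msg))
{
    switch (msg.id)
    {
        case MessageID::crossover0Freq:
            updateCrossover (0, msg.value);     // cheap per-parameter work
            anyFrequencyChanged = true;
            break;
        // ... other cases
    }
}

// the expensive multi-parameter work runs at most once per block,
// and only if something relevant actually changed
if (anyFrequencyChanged)
    recomputeOffsets();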

1 Like

OK, I’ll have to take another look at this later to fully understand the example you’re giving, but could you clarify one thing…

When you say “I can’t do it on every callback” here, do you mean every processBlock callback, or every parameterChanged callback?

Every processBlock callback, yup. In this plugin, I have 180 parameters, some of them very entangled, with many derived values. I have a separate namespace just for parameter classes; it’s longer than the DSP code.

1 Like

I agree that this can’t be recomputed every callback. But – with the FIFO approach, doesn’t that ensure that the computation, when it does happen, will happen on the audio thread? If I understand correctly, any thread can write to this message FIFO, but only the audio thread is reading on it and acting on it when necessary, is that correct?

What I’m getting at is that, without a FIFO, you can still easily poll all your parameter values each callback, and only do the computation if they have actually changed.
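For reference, that polling would look something like this minimal sketch, assuming a cached atomic pointer freqParam from getRawParameterValue and a hypothetical updateFilterCoefficients helper:

float lastFreq = -1.0f;   // last value we acted on

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    // poll the parameter once per block, recompute only if it changed
    const float freq = freqParam->load();

    if (freq != lastFreq)
    {
        lastFreq = freq;
        updateFilterCoefficients (freq);   // hypothetical expensive update
    }

    // ... process audio with the current coefficients
}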

It seems like either way, the computation will be happening on the audio thread. It seems like the main advantage of the FIFO approach is not having to check all your parameters every callback to see if they’ve changed… but it does seem to me like actually reading from this FIFO won’t be free either, because even with a switch statement, if you’ve got 50+ parameters… ¯_(ツ)_/¯

This sounds to me like you have data races in your parameter cooking code, regardless of how you get those messages to that code…? Even with a FIFO, how do you ensure that this web of parameters updates correctly with so many dependencies? Do you do some kind of preprocessing on the FIFO to see what messages are in it and what their dependencies are to determine the order they should be executed…?

Basically :grin:

It’s 180 checks, plus the combined checks for multiple dependencies. With

if (messages.pop (msg)) { }

nothing between the braces runs if there are no changes, and that was enough for me. It’s a compromise, but they’re all compromises. I take some overhead for multiple updates in exchange for less overhead when there are no updates.

There are no data races if there’s a single thread involved. Each message triggers a bunch of changes. By the time the next message is handled, the previous changes have been applied. Then I reserve some changes for the end of the loop. That’s the point of using the queue: to keep everything in sequence. There would be data races if you invoked all this directly from parameterChanged. The races have nothing to do with the parameter cooking code; it’s just that parameterChanged can be called from any thread.

Btw, I’m not reinventing the wheel here. This is a common approach. Thing is, as I said before, there are many “common approaches”. None of them seems to be the golden rule.

If we had sample-accurate automation, a queue would be silly. You’d just have a lot of audio lines. Because automation and UI changes are way less than sample-accurate, in the average case I have way less than 180 changes per block. So the 180 checks are unwelcome, and they show up in testing. I’m optimizing for the average use of the plugin, which involves little to no automation. Other cases would have different needs.

That makes sense. I think for my plugin’s use case, a FIFO queue is a bit overkill… but it’s very interesting to learn how others have solved this problem, thank you for your insights :slight_smile:

1 Like