getRawParameterValue() vs getParameter()

In this commit, @tpoole made the audio parameter classes thread safe by backing them with a std::atomic<float> rather than a plain float.

However, the documentation for AudioProcessorValueTreeState still states:

    /** Returns a parameter by its ID string. */
    RangedAudioParameter* getParameter (StringRef parameterID) const noexcept;

    /** Returns a pointer to a floating point representation of a particular
        parameter which a realtime process can read to find out its current value. */
    std::atomic<float>* getRawParameterValue (StringRef parameterID) const noexcept;

Digging into the implementation, they both call getParameterAdapter (paramID). Then the “Raw” version returns a pointer to a cached std::atomic<float>, and the non-raw version returns a pointer to the underlying RangedAudioParameter.

It seems to me that now that the RangedAudioParameter classes are backed by an atomic, calling getParameter() from a realtime process should also be safe.

Can anyone confirm?

Assuming I’m correct, is there any reason to prefer getRawParameterValue() besides avoiding the overhead of converting between normalised / non-normalised values?


You are correct, and it’s best to avoid getRawParameterValue().
It is not descriptive, and moving forward, using the actual parameter brings more options to the table, e.g. if sample-accurate automation is implemented one day, I can imagine a
parameter.getValueAtSample (x) or similar, or an
applyParameterCurve (buffer, channel, factor = 1.0f)… a lot can be imagined.


Thanks for confirming @Daniel. Yes, those additional possibilities do sound very appealing!

Actually… I’ve just looked into this a bit more deeply, and I think neither getParameter() nor getRawParameterValue() is safe to read from a different thread.

An obvious initial concern is that the ParameterAdapter objects which wrap the parameter values are stored in a non-const std::map. This could be modified during a call to getParameterAdapter() by a concurrent call to createAndAddParameter(), thereby invalidating the iterator used by std::map::find(). I think this could potentially be avoided by fully populating the std::map in advance.

Besides that, the AudioProcessorValueTreeState could get deleted after a call to getParameter(), and then the caller is left with a dangling pointer. I haven’t thought this through fully, but wouldn’t it be safer for getParameter() to return a std::weak_ptr<RangedAudioParameter>?

Or am I missing something?

The AudioProcessorValueTreeState is not meant to be recreated during the lifetime of the AudioProcessor.
Doing that will cause more problems than just getParameter().

By using the AudioProcessorValueTreeState as a member of the AudioProcessor this is guaranteed. This is why it is discouraged to hold it in a unique_ptr (or worse, a raw pointer).

I wasn’t talking about recreating the APVTS, just about concurrent access to the RangedAudioParameter* while or after the APVTS destructor runs. But this is actually pretty hard to achieve in sensible usage.

Regarding my concern about the std::map::iterator being invalidated, this is actually impossible in debug builds because an assertion will fire if an attempt is made to add parameters after the underlying ValueTree has been set. After the parameters are created, there is no other way to modify the map.

In short, I think you’re right, so long as the APVTS class is used as intended, getParameter() seems safe. Which is a relief!

Just came across this thread, and it’s interesting to consider making that change in my own code now.

Obviously this isn’t the only consideration, but at first blush, it seems the code would be less easy to read when using getParameter(). If I want the “unit” value for a float parameter (i.e., its non-normalized value), I can get it this way:

freqParameter = apvts.getRawParameterValue ("freq");  // in constructor

*freqParameter  // anywhere in processBlock code

But getting that non-normalized float value using getParameter() requires doing this:

static_cast<AudioParameterFloat*>(apvts.getParameter("freq"))->get()  // anywhere in processBlock code

Or this:

apvts.getParameter("freq")->convertFrom0to1 (apvts.getParameter("freq")->getValue())

(Yes, you could also call just getValue() on its own, but that returns the normalized (0-1) value.)


I just stumbled across the same thing: is it safe to call getValue() from the realtime thread? Why isn’t there a getRawNormalisedParameterValue() function, for example? I’m a bit confused about whether it’s better to use getRawParameterValue() and normalise the result if needed, or whether it’s OK to call getValue(). For modulation the normalised values are easier to deal with, but at some point I need to denormalise. What’s the best practice here?

EDIT: I just checked AudioParameterFloat, and it seems getValue() calls convertTo0to1 anyway. So am I right to assume that accessing the normalised values introduces some overhead? Then I’d rather do modulation on denormalised values.

I wish there were a stronger established vocabulary around what to call the various forms of parameter values. Here are the terms that I’ve been using:

normalized: 0-1
unit: the value as whatever unit AudioProcessorValueTreeState stores it in (e.g. dB, Hz)
process: the value as used within the DSP process, e.g. in processBlock

For modulation, yes, the process value could often be the same as the normalized value. For a gain level in dB, however, the unit value would be a value in a dB range (e.g. -12 to 12), but the process value would need to be the amplitude (~ 0.25 to 4). Knowing the normalized value of a gain level doesn’t win you anything in the context of processing code.

Point being: as a general solution, I’d say you’re better off using getRawParameterValue() to get the unit value, and then doing your conversion to process values as needed, rather than always pulling normalized values out of the APVTS.

It is: the value is stored in an atomic. There’s an atomic in the Parameter classes, and another atomic in APVTS::ParameterAdapter, both holding denormalised values. Not that it isn’t confusing anyway :sweat_smile:

(edit) I forgot that was the whole point of the thread, sorry. I think ParameterAdapter is a strange beast, but it’s made strange by the Parameter classes refusing to expose their denormalised value even though they store it. This seems to force ParameterAdapter to keep its own copy and do its own round of conversions: when a parameter changes, the value has to be normalised and then denormalised again before being stored in the adapter, synchronously. Unless there’s some other reason I’m not seeing, it looks quite redundant, but it’s unavoidable without exposing the raw values in the Parameter classes themselves.

It does, because let’s say I want to randomise that gain parameter: I would add a full-range (-1, 1) random value and limit the result to the (0, 1) range afterwards. The same goes for sine modulation: just add and limit. This way you never need to think about the range. After modulation you denormalise and proceed as usual. In my process I would then do the dB-to-gain conversion, and that’s it. Does that make sense?

Yes, it does seem redundant. Why not just add a getter function to the parameter classes, then? Or make ParameterAdapter a friend class? It’s confusing enough that it might be better to roll your own parameter class inheriting from RangedAudioParameter to make it work with AudioProcessorValueTreeState…

Well, it makes sense now that there’s an atomic in them, which is recent. I think the usage of the Parameter classes has shifted. IIRC they predate the APVTS, and they were intended purely as an interface to the host. Each of the final subclasses (AudioParameterFloat/Bool/Choice) has its own getter for the effective value; it’s RangedAudioParameter which doesn’t. Given that all the subclasses hold an atomic float (and a const float default), these could be moved into the base class, which could be given a getter, simplifying ParameterAdapter. Unless I’m missing something, of course.


Yes, I see. I guess I overlooked that case, where having normalized values is helpful in a modulation scheme. LFOs and all inputs can be ranged [-1, 1], anything can modulate anything else, and as you say, you never need to think about the range.

So in that case, yes, I guess you call convertTo0to1 to normalize, apply your modulations, and then call convertFrom0to1 to denormalize back to the unit value.

Now I understand the problem better, thanks @kamedin! I’ll work with this for now; it will be interesting to see in which direction JUCE takes parameter management, though…