Normalizable Range

Can anyone give me an overview of what NormalisableRange is all about? I am trying to set up an AudioParameterFloat parameter for my plugin so that I can display a value outside the range [0, 1]: for example, a percent-reverb parameter in the range [0, 100], or a room-area parameter in the range [1000, 10000] (sq. ft.). I built my plugin by starting with the NoiseGate example, which has two parameters in the range [0, 1] and uses the GenericEditor, and modified it to support my five parameters and my own brand of signal processing. It works fine as long as all parameters range from 0 to 1 (I only modified the processor code, using the editor code as is). But when I tried to change my percent-reverb parameter from [0, 1] with initial value 0.5 to [0, 100] with initial value 50, it behaved in a way I could not understand. Here is a sample of my code:

NormalisableRange<float> pctRevRange (0.0f, 100.0f);
addParameter (percentReverb = new AudioParameterFloat ("percentReverb", "Percent Reverb", pctRevRange, 50.0f));

OK. I think I understand the NormalisableRange class now and can use it to go back and forth between a [0, 1] range and an [A, B] range. My remaining problem is that I cannot see what is happening between my slider values and the corresponding parameter values. I assume that the slider value is unnormalised and needs to be normalised in the editor, but I am not sure how or where to do this.

Unfortunately, dealing with parameter ranges (and skews) and sliders tends to be a bit tricky, but I am sure you will eventually find a solution that works for you.

I now have further insight into what is going on. The AudioParameterFloat methods getValue and setValue (and one or two other methods) apply conversions to and from the [0, 1] domain; this is only apparent if you look at the code for those methods. I tried, without success, to undo the conversions in my own code so that "percent reverb" came back to my processor in the range [0, 100]. What finally worked was changing the JUCE code for getValue and setValue to eliminate the conversions. I guess the idea behind the audio parameters and normalisation is to make sure that sliders are always in the range [0, 1] even if the parameters they control are not. This is more or less the opposite of what I want to do.

Aha!! I tried an approach based on the AudioProcessorValueTreeState tutorial and it works just fine without the need to hack any JUCE code. My five sliders all display the ranges I like and the parameters appear to arrive in the processor intact. No need for slider listeners or worrying about what to do when the slider value changes (or is in the process of changing). I’ve learned a lot from this exercise. Next project: make the GUI look sexier!


Hi there, can you send the link to the tutorial you watched? I am in the same situation. Thanks

This is the latest version of the tutorial: https://docs.juce.com/master/tutorial_audio_processor_value_tree_state.html

Thanks, I thought it was some other tutorial. I tried this one without success. Can you point me in the right direction? I have something like this:

gainSlider.setRange (0.0, 1.0);

and

inputGainParam = std::make_unique<AudioParameterFloat> (GAIN_ID, GAIN_NAME, NormalisableRange<float> (0.0f, 2.0f), 0.5f);

My parameter output only goes from 0.0 to 1.0 as the gain range; I want it mapped to the range 0.0 to 2.0.
Thanks for your help

Here is some code from one of my plug-ins:

Multiverb2019AudioProcessor::Multiverb2019AudioProcessor()
	: parameters(*this, nullptr, Identifier("CompTreeState"),
		{
			std::make_unique<AudioParameterFloat>("absCoef", // parameter ID
												  "Absorption Coefficient", // parameter name
												  NormalisableRange<float>(0.0f,1.0f,0.01f),
												  0.3f), // default
			std::make_unique<AudioParameterFloat>("roomArea", // parameter ID
												  "Room Area", // parameter name
												  NormalisableRange<float>(100.0f,1000.0f,10.0f),
												  300.0f), // default
			std::make_unique<AudioParameterFloat>("pctRev", // parameter ID
												  "Percent Reverb", // parameter name
												  NormalisableRange<float>(0.0f,100.0f,1.0f),
												  50.0f), // default
			std::make_unique<AudioParameterFloat>("level", // parameter ID
												  "Level", // parameter name
												  NormalisableRange<float>(-30.0f,5.0f,0.1f),
												  0.0f) // default
		})
{
	absCoefParam = parameters.getRawParameterValue("absCoef");
	roomAreaParam = parameters.getRawParameterValue("roomArea");
	pctRevParam = parameters.getRawParameterValue("pctRev");
	levelParam = parameters.getRawParameterValue("level");

	absCoefOld = *absCoefParam;
	roomAreaOld = *roomAreaParam;

	startTimer(100);
}

In addition, the following line is in the header file:

AudioProcessorValueTreeState parameters;

There is some code in the editor as well. I hope this helps!

You should check out Josh's video on the ValueTreeState object. It's very recent, as you can see, so it's pretty much what is being done nowadays: https://www.youtube.com/watch?v=HrRghlZHJvE&t=2s

The only thing missing from his video is the XML stuff and the std::atomic<float>* stuff, but you already have that from the text tutorial.

I also recommend Josh, aka The Audio Programmer. I’ve viewed many of his presentations.

You can check out this full project for giggles.

It was discussed on Discord as he (the dev) was making it with @daniel & @Xenakios; it shows all the systems working together.