Problem editing automation points in Cubase/Nuendo


I am developing my first JUCE plugin and everything appears to be working except for one thing. After recording some automation in Cubase/Nuendo, I can replay the automation and move automation points with the mouse. Everything appears to work fine, except that manually adjusting automation points via numerical input or the channel slider has no effect (Cubase and Nuendo allow you to edit automation points by manually entering a number or using a slider attached to the channel). I can understand why this happens, since normalizing/denormalizing of control values is done by the plugin, and the host never sees the denormalized values.

After searching through the API, I noticed the following two methods inside the VST3 wrapper:

        Vst::ParamValue toPlain (Vst::ParamValue v) const override       { return v; }
        Vst::ParamValue toNormalized (Vst::ParamValue v) const override  { return v; }

These are called from the SDK's EditController (in vsteditcontroller):
ParamValue PLUGIN_API EditController::plainParamToNormalized (ParamID tag, ParamValue plainValue);

ParamValue PLUGIN_API EditController::normalizedParamToPlain (ParamID tag, ParamValue valueNormalized);

These seem to provide the required functionality except that I see no way to access/override these methods via the public JUCE API. If I am missing something, please let me know as this is my first attempt at using JUCE to build an audio plugin.
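To illustrate what I think these overrides should be doing, here is a self-contained sketch of the plain↔normalized mapping for a hypothetical log-scaled frequency parameter (the names and range here are made up for illustration, not taken from JUCE or the SDK):

```cpp
#include <cmath>

// Hypothetical crossover-frequency parameter: 20 Hz .. 20 kHz, log-scaled.
// These are the two mappings the host needs so that a value typed into the
// automation editor (in Hz) matches what the plugin actually uses.
constexpr double kMinHz = 20.0;
constexpr double kMaxHz = 20000.0;

// normalized [0,1] -> plain value in Hz
double toPlain (double normalized)
{
    return kMinHz * std::pow (kMaxHz / kMinHz, normalized);
}

// plain value in Hz -> normalized [0,1]
double toNormalized (double hz)
{
    return std::log (hz / kMinHz) / std::log (kMaxHz / kMinHz);
}
```

With the identity implementations shown in the wrapper snippet above, a number typed into the automation lane gets treated as if it were already normalized, which would explain the behaviour I'm seeing.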



This should be fixed in the latest tip. Can you check whether the changes work for you?


I will give it a try. Thanks!


Just pulled the latest but don't see it. Could you point me to the commit on GitHub?


The problem is fixed with the following commit:

To make use of it you need to derive your parameters from AudioProcessorParameter. See, for example, the "audio plugin demo", which was also updated in the same commit.
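Roughly, the idea is that each parameter object owns its own value and its own string↔value conversions. A sketch of the shape, using a minimal stand-in base class rather than the real JUCE one (the method names mirror AudioProcessorParameter, but everything here is a mock, and the -60..+12 dB gain range is invented for the example):

```cpp
#include <cmath>
#include <string>

// Minimal stand-in for the parameter base class, just enough to show where
// the conversions now live. Not real JUCE code.
struct ParameterStub
{
    virtual ~ParameterStub() = default;
    virtual float getValue() const = 0;          // normalized [0,1]
    virtual void  setValue (float newValue) = 0; // normalized [0,1]
    virtual std::string getText (float normalized) const = 0;      // for numeric display
    virtual float getValueForText (const std::string& text) const = 0;
};

// Hypothetical gain parameter: -60 dB .. +12 dB, linear in dB.
struct GainParameter : public ParameterStub
{
    float value = 0.0f; // normalized

    float getValue() const override          { return value; }
    void  setValue (float newValue) override { value = newValue; }

    std::string getText (float normalized) const override
    {
        return std::to_string (-60.0f + 72.0f * normalized) + " dB";
    }

    float getValueForText (const std::string& text) const override
    {
        return (std::stof (text) + 60.0f) / 72.0f; // dB -> normalized
    }
};
```

Because the host can now ask the parameter itself to convert between display values and normalized values, numeric editing of automation points works.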



Ok, this seems to work. However, I don't see why I need to derive all my parameters from AudioProcessorParameter. If you simply made AudioProcessor::valueFromString and AudioProcessor::stringFromValue virtual, plugins that do their parameter processing inside the AudioProcessor could simply override a couple of methods rather than rewrite all of their parameter processing code. Plugins that derive from AudioProcessorParameter could use the method shown in the audio plugin demo.


We're pushing towards using AudioProcessorParameter objects everywhere rather than bloating the AudioProcessor base class with lots of methods that take indexes - using objects for each parameter is a much better architecture, and I wish I had designed it all like that from the very beginning!

(And actually, looking again at those AudioProcessor::valueFromString/stringFromValue methods, I'm not sure how they ended up in that class, and they don't belong there - I've removed them now)


I would guess that valueFromString/stringFromValue are in the AudioProcessor class because that's where the other parameter handling methods are. That sounds like the right place to me.

I understand your reasoning about using AudioProcessorParameter everywhere, but I don't necessarily agree with your approach. Here are my reasons.

1. The AudioProcessorParameter class makes the implicit assumption that each parameter has its own self-contained data store and therefore only needs to be subclassed by data type. That assumption doesn't hold in my case, because my parameter handling code doesn't work that way. It calls into a member object, an instance of a separate, framework-agnostic audio processor class that is shared with other modules I am developing. Parameters are read and updated through accessor methods on this object. Since each parameter calls different accessor methods, I would need a separate subclass for each parameter regardless of data type, again leading to more bloat. This is a particular case where using AudioProcessorParameter everywhere does not necessarily lead to cleaner code.

2. Having to rewrite all my parameter handling will incur additional cost in terms of development and testing time. I have higher priorities than completely rewriting a bunch of parameter handling code that already works and has no maintainability problems.

3. Putting these methods in AudioProcessor provides compatibility with the existing interface. I'm sure many plugin developers currently use AudioProcessor to perform these tasks (probably most of them, since this is what the demo plugin does). They should at least have the option of continuing with the same approach, and should not have a rewrite arbitrarily forced upon them simply because they want to pull in a piece of missing functionality that should have been there all along. Are you going to remove getParameter, setParameter, etc. as well? If so, you will force everyone to migrate to the new approach immediately, potentially disrupting their development plans. If not, you will have an inconsistent interface. A better approach would be to put a deprecation strategy in place and communicate it to your customers so they can in turn plan their own upgrade strategies. This would also allow them to avoid going down the wrong path and having to do unnecessary rewrites later on, as I have apparently done.



The old index-based methods will hopefully be deprecated eventually, but I know lots of people use them, so we won't be forcing people to change any time soon. Before that happens we'll be providing plenty of good reasons to switch to the parameter objects, like handy new utility classes and lambda-based tricks that can only be sensibly implemented with the object-based structure.

I'm not totally clear what you mean about your own code not suiting the new architecture, but I'm struggling to imagine any situation where it'd be more than a minor inconvenience to use parameter objects. If I understand you correctly, you have to pass the callbacks to a set of your own functions that are index-based(?), but in that case, making those calls from a parameter class rather than from the processor class should be almost identical in terms of the amount of code involved. Worst case, you'd need to give each parameter object a reference to your own object, but that's only a couple of lines of code, and I think you'd find that separating the parameter handling from the rest of the processor logic will reduce coupling in your codebase and improve its overall architecture. That's been my experience in switching old projects over to the new format.


But that's my point. Anyone who wants automation to work correctly in Cubase/Nuendo will be forced to change now, because otherwise the missing functionality is not going to be available. Fundamentally I see this as a bug fix, not a feature request. I see it that way because end users will (correctly) expect their automation to work correctly, and they will see it as a bug in my plugin. Other plugin developers will have the same issue unless they implement workarounds. What I'm wondering about is how other plugin developers are addressing this issue.

The functions in my own object are not index-based; the AudioProcessor methods dispatch to them by index. They look something like this.

float MyAudioProcessor::getParameter (int index)
{
    switch (index)
    {
        case PARAM_BAND1:   return db2normal (m_MemberObject.GetBand1());
        case PARAM_BAND2:   return db2normal (m_MemberObject.GetBand2());
        case PARAM_BAND3:   return db2normal (m_MemberObject.GetBand3());
        case PARAM_XOVER1:  return freq2normal (m_MemberObject.GetXover1());
        case PARAM_XOVER2:  return freq2normal (m_MemberObject.GetXover2());
        default:            return 0.0f;
    }
}


Setters work the same way.

void MyAudioProcessor::setParameter (int index, float newValue)
{
    switch (index)
    {
        case PARAM_BAND1:   m_MemberObject.SetBand1 (normal2db (newValue));    break;
        case PARAM_BAND2:   m_MemberObject.SetBand2 (normal2db (newValue));    break;
        case PARAM_BAND3:   m_MemberObject.SetBand3 (normal2db (newValue));    break;
        case PARAM_XOVER1:  m_MemberObject.SetXover1 (normal2freq (newValue)); break;
        case PARAM_XOVER2:  m_MemberObject.SetXover2 (normal2freq (newValue)); break;
        default:            break;
    }
}

This is just an example. There are quite a few other parameters as well. Just passing a reference to an object will not work because I have to call a different accessor for each parameter. With the new architecture I will need to create a new subclass for each parameter and override getParameter, setParameter, etc. for each one, with each calling the correct accessor function.

I could potentially pass a pointer to the object along with a pointer to the member function and use some rather nasty syntax to dereference them, but with all that indirection I would consider it a lot less maintainable than the fairly straightforward code above. There are several other ways to get it done, but they all involve helper classes or other syntactic contrivances, and none of them is nearly as clean and maintainable as just using switch statements.
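To be concrete, the indirection-based alternative I'm describing would look something like this (a sketch with made-up accessor names; with std::function the syntax is less nasty than raw member-function pointers, though it's still more machinery than a switch):

```cpp
#include <functional>

// Hypothetical DSP object with per-band accessors (mirrors the switch example).
struct MemberObject
{
    float band1 = 0.0f;
    float GetBand1() const    { return band1; }
    void  SetBand1 (float db) { band1 = db; }
};

// One reusable parameter type: each instance is handed the pair of accessors
// it should call, so no per-parameter subclass is needed.
struct FloatParam
{
    std::function<float()>     getter;
    std::function<void(float)> setter;

    float get() const  { return getter(); }
    void  set (float v) { setter (v); }
};

// Wiring one band up: capture the shared object by reference.
FloatParam makeBand1Param (MemberObject& obj)
{
    return { [&obj] { return obj.GetBand1(); },
             [&obj] (float v) { obj.SetBand1 (v); } };
}
```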

I agree that using the AudioProcessorParameter class will reduce coupling and is theoretically "better" from an abstract OO perspective. However, I must also consider the cost tradeoffs involved in rewriting things that are not broken and are currently not creating a problem for me. For me, this rewrite is solving a non-existent problem.

If you are concerned about cluttering the AudioProcessor base class, why not create a separate class for controlling the parameters, something like "AudioProcessorParameterController"? You could simply move all the parameter control methods there and keep them virtual. This would be a much simpler migration for developers who currently use the "index" method (no rewrite necessary, just moving a few functions) and developers who want to use the AudioProcessorParameter class would have that option as well.


You seem to be misunderstanding. As I tried to explain earlier, each AudioProcessorParameter object knows its own index, so you can move all those exact same switch statements into a simple parameter class, the only difference being that you'd call getParameterIndex() rather than getting the index from a function argument. All the code inside your switch blocks would probably remain identical if your parameter class holds a reference to m_MemberObject.
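In other words, something like this (sketched with a mock class rather than the real AudioProcessorParameter; the PARAM_ constants and accessor names are borrowed from your examples above):

```cpp
// Mock of the relevant slice of a parameter object: it knows its own index,
// so one class can reuse the original switch statements unchanged.
enum { PARAM_BAND1, PARAM_BAND2 };

struct SharedDsp
{
    float band1 = 0.0f, band2 = 0.0f;
    float GetBand1() const { return band1; }
    float GetBand2() const { return band2; }
};

struct SwitchParam
{
    SharedDsp& dsp; // one shared object, referenced by every parameter
    int index;

    int getParameterIndex() const { return index; }

    // Same switch as before, just reading the index from the object itself.
    float getValue() const
    {
        switch (getParameterIndex())
        {
            case PARAM_BAND1: return dsp.GetBand1();
            case PARAM_BAND2: return dsp.GetBand2();
            default:          return 0.0f;
        }
    }
};
```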

Re: AudioProcessorParameterController, I don't like the idea, but it doesn't need to be part of JUCE - you could implement that in your own code! A trivial AudioProcessorParameter class that takes a reference to your own AudioProcessorParameterController and forwards these function calls to it would do exactly the same thing.

And yes, I am deliberately using things like this to entice people over to the new AudioProcessorParameter stuff, because I don't want to have to support the old methods forever!


Okay, my mistake. I didn't realize that the indexes were still available. In that case, AudioProcessorParameterController is not needed because that's essentially what AudioProcessorParameter is. This will make things a lot simpler.

I would still recommend communicating your roadmap/strategy a little more explicitly though, like marking the parameter functions in AudioProcessor as deprecated in the documentation. That way new users like me will be able to avoid going down the wrong path. When I looked at the demo plugin example, the parameters were being handled by AudioProcessor so I logically concluded that this was the recommended way of doing things.



Good point about marking them as deprecated - I've added some better comments about that now.