This is part of my microtonalizing plugin wrapper.
I would like the user to be able to change the number of available voices, i.e. copies of the desired plugin, in real time. That way, they can add voices to cover complex passages but save CPU on other parts of a piece.
The parameters should all be the same for all instances, but different MIDI messages will be distributed to different instances. If the user adds a voice, I want to clone the main instance, including all internal parameter values. It doesn’t seem logical to keep track of all parameter values in my wrapper and then set those values for newly created instances, if it is possible to bulk copy all the data, i.e. the instance itself.
The wrapper code I started with uses ScopedPointer, which I read should be replaced with std::unique_ptr, and I suppose I will want the array version of unique_ptr. I’m not finding much info, though, in C++ materials on copying the value of what the pointer references. They mention the pointer itself can’t be copied, fine; I want to create a different pointer to a different address holding the same value. All of the std::unique_ptr examples I can find use pointers to int or char, which seems pretty pointless, since they make it look as simple as pie to assign a constant int value to a new instance.
Is it going to be as simple as initializing a new unique_ptr by dereferencing the value of an existing one?
I will also need to copy the entire array to a new one, in the case where someone reduces the number of voices (instances) while some of them are active, as this may involve removing inactive instances from the middle of the block of memory.
Or, for such needs, is unique_ptr not ideal?
Thanks for any tips.
Pointers and unique_ptrs don’t have anything to do with it. You will need completely independent plugin instances and you can manage those however you want. (I would myself probably use something like std::vector<std::unique_ptr<AudioPluginInstance>>.)
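To answer the literal unique_ptr question from above: for any type that has a copy constructor, yes, you can build a fresh object by dereferencing the existing pointer. A minimal sketch in plain standard C++, nothing JUCE-specific:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// A new unique_ptr to a new address with the same value: copy-construct
// the pointee through the dereferenced source pointer.
template <typename T>
std::unique_ptr<T> cloneValue(const std::unique_ptr<T>& source)
{
    return std::make_unique<T>(*source); // different address, equal value
}
```

An AudioPluginInstance has no public copy constructor, though, so this trick stops at the plugin boundary; for plugins, copying the serialized state is the practical route.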
You can try copying the state from a plugin instance to another by calling getStateInformation on the source plugin and calling setStateInformation on the destination plugin with the data you got. How well that works of course depends on the plugins involved. One would expect it to just work, but plugins have been known to do weird things.
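A sketch of that cloning route, using a hypothetical stand-in type so the example is self-contained; in the real wrapper the stand-in would be juce::AudioPluginInstance and the byte vector a juce::MemoryBlock:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for a plugin instance; only the two state calls
// matter here. The signatures loosely mirror AudioProcessor's.
struct PluginInstance
{
    std::vector<std::uint8_t> internalState;

    void getStateInformation(std::vector<std::uint8_t>& dest) const { dest = internalState; }

    void setStateInformation(const void* data, int sizeInBytes)
    {
        auto* bytes = static_cast<const std::uint8_t*>(data);
        internalState.assign(bytes, bytes + sizeInBytes);
    }
};

// Clone by serializing the source's full state into a fresh instance.
std::unique_ptr<PluginInstance> cloneInstance(const PluginInstance& source)
{
    auto copy = std::make_unique<PluginInstance>();
    std::vector<std::uint8_t> blob;
    source.getStateInformation(blob);
    copy->setStateInformation(blob.data(), static_cast<int>(blob.size()));
    return copy;
}
```

Whether a real plugin restores everything from that blob is, as said, plugin-dependent.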
Linking parameters dynamically is another problem, and you will just have to work out some solution. Maybe register your wrapper plugin as a parameter listener on one of the plugin instances, then iterate through the other duplicated instances and set the same parameter values on them.
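One possible shape for that listener fan-out, again with hypothetical minimal types standing in for AudioProcessorParameter and its Listener:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical minimal parameter model; the callback plays the role of
// AudioProcessorParameter::Listener::parameterValueChanged.
struct Parameter
{
    float value = 0.0f;
    int index = 0;
    std::function<void(int, float)> onChanged;

    void setValue(float v)
    {
        value = v;
        if (onChanged)
            onChanged(index, v);
    }
};

struct Instance
{
    std::vector<Parameter> params;
};

// Listen on the master instance and mirror every change to the clones.
// Writing plain values (rather than re-notifying) avoids feedback loops.
void linkParameters(Instance& master, std::vector<Instance*> clones)
{
    for (std::size_t i = 0; i < master.params.size(); ++i)
    {
        master.params[i].index = static_cast<int>(i);
        master.params[i].onChanged = [clones](int idx, float v)
        {
            for (auto* clone : clones)
                clone->params[static_cast<std::size_t>(idx)].value = v;
        };
    }
}
```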
If the plugins have often-changing non-parameter data that is only available via getStateInformation and setStateInformation… oh well, that’s going to be more difficult. (JUCE has the AudioProcessorListener class, but I think it really doesn’t work with many plugins when non-parameter state in them changes. You can try it, though.)
I knew I should have read the documentation.
Thank you. I’m not selling it, and it doesn’t need to cover all cases. If there’s fast-changing info not covered by parameters, they can tell me so and I can look closer at a particular plug-in.
I was half planning to just obligate users to use only MIDI bindings for parameter changes, and they would be sent to all instances. I would be a little concerned that propagating changes via a listener would mean the changes would hit some notes before others, but OTOH it should cover all parameter-changing methods. Note-on, note-off and pitch bend will only be sent to certain voices, as well as poly/mono, but everything else gets sent to all of them.
This should be enough to keep me out of your hair for a while.
You will just have to test it all out, this isn’t stuff that is commonly done. (I have myself done some experiments with duplicated and linked plugin instances, but I never tested it all out thoroughly and it was with effect plugins to do fake surround sound processing with stereo only plugins, not instruments.)
I have another question regarding this plan. I didn’t find anything in a search, but maybe because it’s too obvious.
The main reason my users would be able to change the number of instances would be for minimizing CPU usage. If I want to “turn off” processing for a set of instances, is it enough to simply exclude them from the list of instances to which I forward the call to processBlock and parameter changes? I mean, if the plugin instance doesn’t receive those forwarded calls… it doesn’t use any CPU, right? But how does this work for instances that are still processing existing notes?
My thought is that, if someone changes the number of voices while some are active, I should allocate a separate block of memory. New notes would use that new block, the open instances would be deactivated one by one as their notes are turned off, and only when all of the old notes are off would the old block be freed.
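That draining scheme could be sketched roughly like this, with a hypothetical Voice type standing in for a wrapped plugin instance (the note counting and the processBlock forwarding are assumed to happen elsewhere):

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical wrapped instance; heldNotes would be maintained from the
// note-on/note-off messages the wrapper routes to it.
struct Voice
{
    int heldNotes = 0;
    bool isIdle() const { return heldNotes == 0; }
};

struct VoicePool
{
    std::vector<std::unique_ptr<Voice>> active;   // these receive new notes
    std::vector<std::unique_ptr<Voice>> retiring; // finish their notes, then go

    // Shrink to newSize: idle voices are freed immediately, busy ones are
    // parked in the retiring list so their notes are not cut off.
    void resize(std::size_t newSize)
    {
        while (active.size() > newSize)
        {
            auto voice = std::move(active.back());
            active.pop_back();
            if (!voice->isIdle())
                retiring.push_back(std::move(voice));
        }
    }

    // Call periodically (e.g. after each processBlock round) to free
    // retiring voices whose notes have all ended.
    void reapIdle()
    {
        retiring.erase(std::remove_if(retiring.begin(), retiring.end(),
                                      [](const std::unique_ptr<Voice>& v) { return v->isIdle(); }),
                       retiring.end());
    }
};
```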
Is there a difference between not calling processBlock and an instance not doing anything? So, for instances with unterminated notes, do you have to keep calling processBlock until the note ends?
Yes. And of course you have to take into account that the end of the MIDI note isn’t necessarily the end of the audio that is going to be produced. There isn’t really much you can do about that beyond some hack that measures the output level of the audio from the plugin; once it has been below some threshold for some time, you can stop doing the processBlock calls.
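That level-measuring hack might look something like this; the threshold and block count are arbitrary assumptions to tune by ear:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Feed each rendered buffer in after the processBlock call; once the peak
// has stayed under the threshold for enough consecutive blocks, the
// instance can be dropped from the processing list.
struct SilenceDetector
{
    float threshold = 1.0e-4f; // roughly -80 dBFS, an arbitrary choice
    int blocksRequired = 100;  // ~1 s of 512-sample blocks at 48 kHz
    int quietBlocks = 0;

    // Returns true once the instance can be considered finished.
    bool update(const std::vector<float>& buffer)
    {
        float peak = 0.0f;
        for (float sample : buffer)
            peak = std::max(peak, std::fabs(sample));

        quietBlocks = (peak < threshold) ? quietBlocks + 1 : 0;
        return quietBlocks >= blocksRequired;
    }
};
```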
Well-implemented instruments should of course use no or very minimal CPU even if you keep on calling processBlock without any MIDI notes to process.
However, stopping the processBlock calls isn’t necessarily going to stop the plugin entirely from using CPU. It may be doing things in the GUI thread (even if you don’t show the GUI editor explicitly), or it may be running additional threads you can’t control. Each instance will also use memory. (There’s no guarantee that for example sampler instruments will share memory for the same samples.)
I’d thought of that. Maybe in the future, users will be able to manually set the maximum release length. In the first versions, they will simply have to use enough voices to not cut off their notes, and they will have to experiment to find the best places to use the resize commands.
I guess each user will have different needs for memory vs. CPU. With this particular setup, I’m worried about the overhead of having two entire arrays of instances for a few seconds until those active notes finish. So, if the user had 10 voices and wants to reduce them to 6, they will actually have 16 for a few seconds.
So you are planning on something like dynamically creating and destroying the plugins on the fly based on the needed polyphony? That’s almost certainly not going to work like you would want. You need to have the necessary number of plugin instances created beforehand; it can take a lot of time to create and destroy plugins, and you just can’t expect that to happen smoothly while the audio is playing. (Even if you do it very carefully, you would at least have to accept potentially long delays before additional plugin instances are ready to play audio.)
The original plan was that the user would have to determine how many voices are needed for a piece, and set that up at the beginning. I later thought of making it variable, but if that’s not possible, it’s not possible.
Maybe, then, one type of message will change the size of the array, and another type of message can determine how many instances need to be running at any moment.
If these are triggered by MIDI, they need to be detected in processBlock. But perhaps the actual work should happen on a separate thread?