To implement an AU or AAX properly, communication between the filter and the UI shouldn't be done with a pointer, or they won't work on a Logic node or on the Pro Tools desks, i.e. any time the UI runs on a different computer to the filter. It should be done with properties and parameters. Does anything like this already exist in JUCE, or will I have to implement it myself? I saw a post relating to this from 2007, but I was wondering whether anything has changed since then.
No - sorry, the plugin code was all written to be lowest-common-denominator, so it doesn't use that format-specific mechanism. Figuring out some kind of abstraction layer that would wrap the AAX GUI-to-processor communication system but also work with VSTs and AUs would be an interesting and worthwhile task, but I don't have the resources to look at doing it at the moment.
At the very least it would be great if we had methods/callbacks to send MemoryBlocks from GUI to processor and processor to GUI, and something like a ValueTree synchroniser between processor and GUI ;)
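Just to illustrate the kind of thing being asked for, here's a minimal sketch of such a block-passing channel. All the names are hypothetical (there's nothing like `BlockChannel` in JUCE), and `std::vector<std::uint8_t>` stands in for `juce::MemoryBlock`:

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch: an abstract channel that each plugin-format wrapper
// could implement. The GUI sends opaque blocks of bytes; the processor
// receives them via a callback, with no pointers shared between the sides.
struct BlockChannel {
    using Receiver = std::function<void (const std::vector<std::uint8_t>&)>;
    virtual ~BlockChannel() = default;
    virtual void send (const std::vector<std::uint8_t>& block) = 0;
    virtual void setReceiver (Receiver r) = 0;
};

// In-process loopback implementation, standing in for a format-specific
// transport (AAX properties, a socket to a remote machine, etc.).
struct LoopbackChannel : public BlockChannel {
    void send (const std::vector<std::uint8_t>& block) override {
        if (receiver)
            receiver (block); // a real transport would serialise and queue here
    }
    void setReceiver (Receiver r) override { receiver = std::move (r); }
private:
    Receiver receiver;
};
```

The point of the abstraction is that the same GUI and processor code would work whether the transport is an in-process call, an AAX property mechanism, or a network socket.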
Yes indeed, and this is all stuff that's on my mind at the moment, as this year we'll be working on splitting Tracktion's UI and engine into two processes (either on the same machine or across a network), which is basically the same problem. As part of that I'll certainly be building a remote ValueTree syncing mechanism, and probably some kind of system for streaming data between processes for audio, etc. Hopefully the same thing can be used for this.
Five months later - can you tell whether a ValueTree syncing mechanism is going to be added to JUCE? It would be awesome for plugin processor-editor communication, and I'm pondering writing a mechanism myself unless it's going to be added to JUCE anyway. It could take care of some loose ends in the Audio Unit, VST3 and AAX wrappers, which currently pass (editor/processor) pointers around. In my opinion, adapting a syncing mechanism to the plugin standards wouldn't be too hard if it's based on passing messages around (which could also be sent over a network).
In fact I've already given the matter a lot of thought. Basically such a system could be implemented with ValueTree listeners, or something similar to the UndoManager, with actions being sent around. However, I see one major hurdle: a system like that would need to be able to identify a node remotely, without pointers. The ways I can think of doing it are some kind of unique path per ValueTree node, or assigning unique IDs to ValueTree nodes. Of course the problem of change-actions contradicting each other would remain, but at least in my case that wouldn't be too troublesome, as I don't plan to allow changing the same nodes from multiple sources.
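The unique-path idea can be sketched in a few lines. This is a standalone illustration using a minimal tree struct in place of `juce::ValueTree` (whose children are likewise index-addressable): a node's identity is the list of child indices from the root, which both ends of a connection can compute and resolve independently.

```cpp
#include <memory>
#include <vector>

// Minimal stand-in for a ValueTree node, enough to demonstrate the idea.
struct Node {
    std::vector<std::unique_ptr<Node>> children;
    Node* parent = nullptr;

    Node* addChild() {
        children.push_back (std::make_unique<Node>());
        children.back()->parent = this;
        return children.back().get();
    }
};

// Walk up to the root, recording each node's index within its parent.
std::vector<int> pathOf (const Node* n) {
    std::vector<int> path;
    while (n->parent != nullptr) {
        const auto& siblings = n->parent->children;
        for (int i = 0; i < (int) siblings.size(); ++i)
            if (siblings[(size_t) i].get() == n) { path.insert (path.begin(), i); break; }
        n = n->parent;
    }
    return path;
}

// Follow the recorded indices back down from the (possibly remote) root.
Node* resolve (Node& root, const std::vector<int>& path) {
    Node* n = &root;
    for (int index : path)
        n = n->children[(size_t) index].get();
    return n;
}
```

The weakness, as noted above, is exactly the contradicting-changes problem: a path becomes stale if a concurrent action inserts or removes a sibling before it's resolved, which is why stable unique IDs per node may be the safer option.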
So I just wonder - did you work on this problem, or did you decide it's not the way to go for Tracktion? Is JUCE going to be able to do this by itself? How do you plan to implement the syncing?
Sorry - we had a bit of a rethink about Tracktion and changed our plans about that bit. We'll probably return to it and do it this way eventually, but not until next year.
My thoughts were essentially the same as yours, though - it just needs the same set of manipulation commands as it uses for undoable actions, so that'd be the way to do it. But we hadn't got as far as digging more deeply into it, I'm afraid!
I've been thinking about this today. Just in the middle of moving to a ValueTree for almost everything.
So, here's a sketch to consider. It just tackles getting parameter data into the processor, though it could be adapted for other types of data. All feedback welcome. It's untested and probably won't even compile yet, but hopefully it will communicate the idea...
I might have a think about a more general value tree synchroniser. It'd be useful.
This one just deals with the specialist parameter objects on the audio side (which handle interpolation, modulation and all sorts). On the GUI side there'll be a companion object, which will register with this one and handle updates to and from the ValueTree in a straightforward fashion.
LockFreeCallQueue handles the comms between the threads.
#include <array>
#include <functional>

/** Contains plugin parameters.

    Functions that should be called on the GUI/message thread are
    prefixed "gui", and those that should be called on the audio
    thread "audio".
*/
template <int paramCount>
class PluginParameterDatabase
{
public:
    typedef std::function<void (int, float)> GuiCallbackFunctionType;

    PluginParameterDatabase()
    {
        for (int i = 0; i < paramCount; ++i)
            clone[i].setSource (&data[i]);

        callGui = false;
    }

    /** Call when the user changes something on the GUI. This puts
        a notification of the change into a queue, which will be
        picked up at a safe moment on the audio thread when you
        call audioSynchronizeWithGui(). */
    void guiSetParameter (int parameterNumber, float value)
    {
        changeQueueFromGui.callf (std::bind (&PluginParameterDatabase::setParameterNow, this,
                                             parameterNumber, value));
    }

    /** Call when the host sends a new value for the parameter. */
    void audioSetParameter (int parameterNumber, float value)
    {
        setParameterNow (parameterNumber, value);

        /* And now tell the GUI, if a callback is registered. */
        if (callGui)
            changeQueueFromAudio.callf (std::bind (guiCallback, parameterNumber, value));
    }

    void audioSynchronizeWithGui()
    {
        changeQueueFromGui.synchronize(); /* Run any calls waiting in the queue. */
    }

    /** Set up the GUI callback function for change notification.

        Unfortunately we need to tell the audio thread about this,
        so it's a little more complicated than it ought to be.
        It puts the request to set the GUI callback function into
        the queue for the audio thread to act on. */
    void guiSetGuiCallback (GuiCallbackFunctionType f, void* objectPtr)
    {
        changeQueueFromGui.callf (std::bind (&PluginParameterDatabase::audioSetGuiCallback, this, f, objectPtr));
    }

    void guiSynchronizeWithAudioThread()
    {
        changeQueueFromAudio.synchronize();
    }

    void guiRemoveGuiCallback()
    {
        /* Might as well stop putting things in the queue when the
           GUI gets deleted. And better clear the queue too! */
        #warning TODO: where do we clear the queue to avoid a nasty race condition? Might need a new clear function on the FIFO.
        changeQueueFromGui.callf (std::bind (&PluginParameterDatabase::audioCancelCallback, this));
    }

    /** Call when the host requests the value of a parameter.

        Note: when you want the value of a parameter for processing
        audio, you should be using the values from your voice's instance
        of PluginParameterClone, which will have any needed modulation
        and interpolation applied. */
    float audioGetParameter (int parameterNumber)
    {
        return clone[parameterNumber].getRaw(); /* the unmodulated, uninterpolated value. */
    }

    /** Get the total parameter count. */
    int hostGetParameterCount()         { return paramCount; }

private:
    void setParameterNow (int parameterNumber, float value)
    {
        /* This set will use interpolation. */
        clone[parameterNumber].set (value);
    }

    void audioSetGuiCallback (GuiCallbackFunctionType f, void* objectPtr)
    {
        callGui = true;
        guiCallback = f;
        guiCallbackObjectPtr = objectPtr;
    }

    void audioCancelCallback()          { callGui = false; }

    bool callGui;
    GuiCallbackFunctionType guiCallback;
    void* guiCallbackObjectPtr;

    LockFreeCallQueue<2048> changeQueueFromGui;
    LockFreeCallQueue<2048> changeQueueFromAudio;

    PluginParameterClone<paramCount> clone; /* assumed to be indexable per-parameter */
    std::array<float, paramCount> data;
};