Pro Tools automation dialog implementation

I’ve been wondering why the following AudioProcessorEditor method takes a component as input:

```cpp
int getControlParameterIndex(Component&);
```

Wouldn’t it be much more straightforward if it just took a point representing the mouse position?

The problems with the current approach are:

  1. Some components may well control two or more parameters at once: XY sliders, a 3D viewport for a camera perspective, and several more.

  2. Not all parameters are bound to components. Imagine a complex WYSIWYG frequency magnitude plot controlling two dozen parameters but drawn in a “raw” manner, without any components.

  3. We run into weird problems with a complex UI made of several re-positionable, overlapping layers. The internal JUCE mechanism then seems to fail in edge cases. A simple pixel position would give us back more predictable control.

Working around this stuff is hell. :wink:

I really wish we had an alternative hook passing the mouse position rather than a component.
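Something like this purely hypothetical hook is what I mean; it does not exist in JUCE, and the name and signature are made up:

```cpp
// Purely hypothetical; NOT part of JUCE. The wrapper would ask the editor
// for the parameter under a pixel position instead of under a component:
virtual int getControlParameterIndexAt (juce::Point<int> positionRelativeToEditor);
```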

Why not use Component::getMouseXYRelative()?
Passing a component allows you to dynamic_cast to a class where the index can be stored directly.
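For example, a minimal, untested sketch; ParameterProvider and GainSlider are made-up names, not JUCE classes:

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

// Hypothetical mix-in: any control that maps 1:1 to a parameter implements it.
struct ParameterProvider
{
    virtual ~ParameterProvider() = default;
    virtual int getParameterIndex() const = 0;
};

class GainSlider : public juce::Slider,
                   public ParameterProvider
{
public:
    int getParameterIndex() const override { return 3; } // index of "gain"
};

class MyEditor : public juce::AudioProcessorEditor
{
public:
    using juce::AudioProcessorEditor::AudioProcessorEditor;

    int getControlParameterIndex (juce::Component& c) override
    {
        // Any component that knows its parameter can answer directly.
        if (auto* provider = dynamic_cast<ParameterProvider*> (&c))
            return provider->getParameterIndex();

        return -1; // not an automatable control
    }
};
```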

Because you’ll get many calls, not just one per click. That’s pretty messy IMHO.

Further, I do not understand why components should represent parameters, or why each parameter needs its own component. That’s simply unreasonable in many scenarios.

Let’s take a modern equalizer’s frequency magnitude display. Why does JUCE assume that there will be a component somewhere for each and every parameter? Such complex views typically have to be drawn in a flat, “centralistic” manner to render fast enough, so you specifically do not build huge hierarchies of components.

IMHO, the true logical operation is “give me a click and I’ll give you a parameter index”.
Not “give me a component and I’ll give you a parameter”.

This misunderstanding introduces problems without any benefit. If you like representing parameters 1:1 via components, it’s easy to associate them via an array or similar. As it stands, though, the hook is needlessly inflexible: there’s no need for it to interfere with the way we draw things.
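To make that concrete, here is a rough, untested sketch of such a flat view; EqDisplay and BandHandle are made-up names, not JUCE classes:

```cpp
#include <juce_gui_basics/juce_gui_basics.h>
#include <vector>

// A flat, "centralistic" view: one component draws all the EQ handles itself
// and maps a pixel position back to a parameter index.
class EqDisplay : public juce::Component
{
public:
    struct BandHandle
    {
        juce::Point<float> centre;  // where the handle is drawn
        int parameterIndex;         // the parameter it controls
    };

    // "Give me a click and I'll give you a parameter index."
    int parameterIndexAt (juce::Point<int> localPos) const
    {
        for (auto& h : handles)
            if (h.centre.getDistanceFrom (localPos.toFloat()) < handleRadius)
                return h.parameterIndex;

        return -1; // no handle under this point
    }

    void paint (juce::Graphics& g) override
    {
        for (auto& h : handles) // everything drawn flat, no child components
            g.fillEllipse (juce::Rectangle<float> (handleRadius * 2.0f,
                                                   handleRadius * 2.0f)
                               .withCentre (h.centre));
    }

private:
    std::vector<BandHandle> handles;
    float handleRadius = 8.0f;
};
```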

I was rather thinking about normal sliders, but you’re right: for something like EQ handles this isn’t necessarily ideal.
We implemented this ourselves and pass the screen coordinates. Also, we only intercept mouseDown and haven’t yet experienced any problems from ignoring mouseDrag and mouseUp.

I’m not sure how messy using getMouseXYRelative really is. You’ll probably need to make some adjustments for relative coordinates in any case, and at that point you could check whether the event component is, for example, the EQ graph.
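Roughly like this, untested; it reuses the made-up EqDisplay and parameterIndexAt from the sketch above:

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

class MyEditor : public juce::AudioProcessorEditor
{
public:
    explicit MyEditor (juce::AudioProcessor& p)
        : juce::AudioProcessorEditor (p)
    {
        addAndMakeVisible (eqDisplay);
    }

    int getControlParameterIndex (juce::Component& c) override
    {
        // Only the EQ graph needs the positional lookup; plain sliders etc.
        // can keep their one-component-per-parameter mapping.
        if (&c == &eqDisplay)
            return eqDisplay.parameterIndexAt (eqDisplay.getMouseXYRelative());

        return -1;
    }

private:
    EqDisplay eqDisplay; // the flat view from the earlier sketch
};
```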

Thank you, that sounds good. I’ll see if I can handle this on my own at the wrapper level.