I’ve been wondering: why does the following AudioProcessorEditor method take a component as input?
Wouldn’t it be much more straightforward if we just got a point representing the mouse position instead?
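For context, here’s a minimal sketch of how that hook is typically overridden today, assuming the method in question is getControlParameterIndex(). The slider members and parameter indices are placeholders, not from any real plugin:

```cpp
// Minimal sketch, assuming the hook is getControlParameterIndex():
// the host hands us the component under the mouse and expects back the
// index of the parameter that component controls (or -1 for none).
#include <juce_audio_processors/juce_audio_processors.h>

class ExampleEditor : public juce::AudioProcessorEditor
{
public:
    explicit ExampleEditor (juce::AudioProcessor& p) : juce::AudioProcessorEditor (p)
    {
        addAndMakeVisible (gainSlider);
        addAndMakeVisible (cutoffSlider);
        setSize (400, 300);
    }

    int getControlParameterIndex (juce::Component& c) override
    {
        if (&c == &gainSlider)   return 0;  // index of a hypothetical gain parameter
        if (&c == &cutoffSlider) return 1;  // index of a hypothetical cutoff parameter
        return -1;                          // no single parameter for this component
    }

private:
    juce::Slider gainSlider, cutoffSlider;
};
```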
The problems with the current approach are:
Some components may well control two or more parameters at once: XY sliders, a 3D viewport for a camera perspective, and several more.
Not all parameters are bound to components. Imagine a complex WYSIWYG frequency-magnitude plot controlling two dozen parameters but drawn “raw”, without any child components.
We run into weird problems with a complex UI made of several re-positionable, overlapping layers. The internal JUCE mechanism then seems to fail in edge cases. A simple pixel position would give us far more reasonable control.
Working around this stuff is hell.
I really wish we had an alternative hook that passed the mouse position rather than a component.
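For what it’s worth, this is roughly the workaround I end up writing now, and the kind of hook I’d prefer. Again just a sketch assuming the method is getControlParameterIndex(); mapPositionToParameter() is a hypothetical helper that would walk my own plot/layer model:

```cpp
// Rough workaround sketch: ignore the component JUCE hands us and hit-test
// the current mouse position against our own model instead.
#include <juce_audio_processors/juce_audio_processors.h>

class PlotEditor : public juce::AudioProcessorEditor
{
public:
    explicit PlotEditor (juce::AudioProcessor& p) : juce::AudioProcessorEditor (p) {}

    int getControlParameterIndex (juce::Component&) override
    {
        // Mouse position relative to the editor, regardless of which
        // (possibly overlapping) child component was passed in.
        juce::Point<int> pos = getMouseXYRelative();
        return mapPositionToParameter (pos);
    }

private:
    // Placeholder: a real version would hit-test the plot/layers and return
    // the index of the parameter under the given position, or -1 for none.
    int mapPositionToParameter (juce::Point<int> pos) const
    {
        juce::ignoreUnused (pos);
        return -1;
    }
};

// The alternative hook I'm wishing for would look roughly like this instead:
//     virtual int getControlParameterIndexAt (juce::Point<int> positionInEditor);
```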