Audio plugin idea - is this possible with JUCE?

Hello! New JUCE user. I have a plugin I want to create, and from my current exploring it seems like it’s prooooobably possible to make, albeit complex. Looking for insights here on whether y’all think it’s possible, and if you’re feeling really generous, advice on how to go about it. Obviously please tell me if it’s possible but a Very Bad Idea, OR it’s possible and already exists in some form. For context, I’m an experienced professional developer, but brand new to JUCE and fairly new to audio engineering. Thanks!

  • Per-Note Panning: each new MIDI note that occurs at some time delta from the last note will generate audio panned at a different point in the stereo field.
  • I want to enable this via plugin embedding – so my plugin would take some other VST instrument, route tagged MIDI data, and then receive back the generated audio with the tagging metadata still somehow attached to it. I don’t know if this part is possible, since JUCE processes audio in blocks that are just lumps of audio. Maybe I can wrap the processing block in some higher-level class that manages metadata, but that feels like it’ll incur some nasty latency.
  • So the plugin would enable a user to take any synth, and have all notes (with some knobs to control how it works) panned to different locations in the stereo field. This would create a sort of particle effect, like Native Instruments’ Noire (or I hear per-note panning is possible for some FL Studio instruments). The big advantage and hope here would be that this plugin would be plugin- / DAW-agnostic – take your dead-center synth, pop it into my plugin, and now you have fun randomized stereo audio spinning around your head.
  • To be clear, this isn’t just an Auto Pan - those tools pan the whole track, so if note A sounds at 30L, and note B sounds at 30R, whatever audio remains from note A will get whipped over to 30R from 30L when note B sounds. I want all of note A’s audio to stay at 30L, and note B sounds at 30R independently.
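To make that distinction concrete: the per-note DSP itself is tiny once each note’s audio is somehow isolated. A minimal sketch of a fixed-per-note equal-power pan (plain C++, no JUCE dependencies; function names are hypothetical):

```cpp
#include <cmath>
#include <utility>

constexpr float kHalfPi = 1.57079632679f;

// Equal-power pan law: pan in [-1, +1], -1 = hard left, +1 = hard right.
// Left/right gains are both ~0.707 at centre, so perceived loudness stays
// roughly constant wherever a note lands in the field.
std::pair<float, float> equalPowerGains(float pan)
{
    const float angle = (pan + 1.0f) * 0.5f * kHalfPi; // 0 .. pi/2
    return { std::cos(angle), std::sin(angle) };
}

// Mix one note's mono audio into stereo with a pan that is fixed per note:
// unlike an auto-pan, note A's tail keeps its own gains while note B is
// mixed with different ones.
void mixNote(const float* noteMono, float* outL, float* outR,
             int numSamples, float pan)
{
    const auto [gl, gr] = equalPowerGains(pan);
    for (int i = 0; i < numSamples; ++i)
    {
        outL[i] += noteMono[i] * gl;
        outR[i] += noteMono[i] * gr;
    }
}
```

The hard part, as the rest of the thread shows, isn’t this mixing step – it’s getting each note’s audio out of a third-party synth in isolation in the first place.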

I’ve tried to make this with stock Ableton tooling, and it’s possible but clunky and verbose. I know the MIDI spec has CC#10 for panning, but that’s whole-track panning. I could of course just use multiple tracks, but then we have to split notes across MIDI clips. I could use multiple chains on one track (so one MIDI clip), but then each chain has its own (probably duplicated) instrument and parameters (this is my current solution though).

So in short, I want: one track, one chain, one clip, any synth, each note is panned and placed independently from all others.

Thanks so much!

This is the only way to do it with arbitrary 3rd-party synth plugins; you can’t “tap” into the voice generation / output mixing code from the host code.

Thanks for the response! So even if my plugin hosts the 3rd-party synth within itself, the same way the AudioPluginHost app does - still a no? I was hoping that would let me get access to inputs and outputs, with my plugin as an intelligent wrapper and manager.

You can’t avoid the duplicated plugin instances issue by making your own host/wrapper, that’s what I meant.

Okay, thanks! So then would it still be possible to create a plugin such that the host manages any 3rd-party instance duplication (e.g. updating of one “master” copy, then propagating said updates to the duplicates) internally, so that the user doesn’t have to worry about it (even though they’ll still be paying the duplication toll in CPU)? I guess that would mean more just routing individual MIDI inputs to each 3rd-party instance via some parameterized algorithm? If so, that’s simpler in many ways.

No, you can’t tell a 3rd party host from a plugin to do some kind of duplication scheme for a 3rd party plugin. You’d need to do the hosting part yourself too. (It is possible to do a plugin that hosts other plugins.)

That was my original hope, a plugin that hosts plugins, like AudioPluginHost - apologies if my terminology (slash just base knowledge :sweat_smile:) is loose here!

You can make a host and wrap the other plugin and have access to its audio for further processing. But the effect you describe would seem to involve passing each midi note to the wrapped plugin, and then creating a separate audio buffer for each note that is output by the wrapped plugin, and maintaining those multiple audio buffers over the length of the note. Pretty complicated…
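The “separate audio buffer for each note” bookkeeping described above might look roughly like this, assuming each note’s audio can somehow be rendered in isolation (which is the hard part). A hypothetical sketch in plain C++, using simple linear pan gains for brevity:

```cpp
#include <map>

// Per-note voice registry: each sounding note keeps the pan it was given at
// note-on, so its audio can be mixed with fixed gains for its whole lifetime.
// (Sketch only; a real JUCE version would also own a rendered buffer or a
// hosted plugin instance per voice.)
class PerNotePanner
{
public:
    void noteOn(int midiNote, float pan) { voices[midiNote] = pan; }
    void noteOff(int midiNote)           { voices.erase(midiNote); }

    // Mix one block of this note's (already rendered) mono audio into the
    // shared stereo output. Linear pan: pan in [-1, +1].
    void mixBlock(int midiNote, const float* mono, int n,
                  float* outL, float* outR) const
    {
        auto it = voices.find(midiNote);
        if (it == voices.end())
            return;                                   // note no longer sounding
        const float gl = 0.5f * (1.0f - it->second);
        const float gr = 0.5f * (1.0f + it->second);
        for (int i = 0; i < n; ++i)
        {
            outL[i] += mono[i] * gl;
            outR[i] += mono[i] * gr;
        }
    }

private:
    std::map<int, float> voices;                      // midiNote -> pan
};
```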

As above, you could host multiple instances, but there could still be some surprising behaviours. For example, if a synth is monophonic you would have effectively made it polyphonic. Basically, each instance wouldn’t know about the notes being played by the other instances. Also, if the synth has its own effects such as a reverb, the reverb itself will be panned, which may be undesirable.

Thinking about the note thing more, maybe you could have every instance play every note, but supply 0-velocity notes to the instances that shouldn’t play certain notes. That might help.
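One wrinkle with the 0-velocity idea: in MIDI 1.0, a note-on with velocity 0 is conventionally interpreted as a note-off, so in practice a router would more likely just withhold notes from the non-selected instances. A hypothetical round-robin sketch of that routing (plain C++; instance indices stand in for hosted plugin objects, each fixed at its own pan position):

```cpp
#include <map>

// Round-robin router: each note-on goes to one of N hosted synth instances,
// and the matching note-off follows it to the same instance.
class NoteRouter
{
public:
    explicit NoteRouter(int numInstances) : count(numInstances) {}

    // Returns the instance index that should render this note.
    int routeNoteOn(int midiNote)
    {
        const int instance = next;
        next = (next + 1) % count;
        held[midiNote] = instance;
        return instance;
    }

    // Returns the instance holding this note, or -1 for an unmatched note-off.
    int routeNoteOff(int midiNote)
    {
        auto it = held.find(midiNote);
        if (it == held.end())
            return -1;
        const int instance = it->second;
        held.erase(it);
        return instance;
    }

private:
    int count;
    int next = 0;
    std::map<int, int> held;   // midiNote -> instance index
};
```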

Yeap, that’s pretty much what I was imagining, and yeap, it did sound real complicated. :grinning: I’ll take “complicated”, just looking to avoid “bad idea”! I feel like “MIDI note by MIDI note, each with its own buffer” processing is where latency might truly become horrid, but maybe I’m underestimating the speed of all this.

Totally, panning reverbs or other effects per note could be real funky, but that perhaps feels more like “user-beware” territory to me. I’m intrigued by the idea of sending all notes with all but one zeroed out, feels conceptually clean.

No, maybe I’m wrong… thinking about it more, I don’t see how you would differentiate in the audio output of a single instance between one note and another overlapping note - so I guess it would require multiple instances, one for each note of polyphony that you want to maintain…

Yea, I think you’re right. Polyphony would definitely be harder and require multiple instances - for monophonic stuff it’d be easier and you’d only need one instance (I think). I guess synth output from multiple overlapping MIDI inputs can’t be disambiguated from one big lump of audio at that point.

Unless the synth itself had some notion of channels / audio buses, which I guess would have to be part of the VST3 spec (which hey, maybe it is), and which I’d then be able to tap into? Like, the synth lands each MIDI note’s resulting audio on a separate channel or bus, which the host could receive as a separate buffer? I’m totally spitballing now. Multiple instances, internally managed, is probably the better way for polyphony.

EDIT: Thank you all for your help, also! Super helpful.

Incidentally, I develop a phrase generator that is capable of sending MIDI panning messages with each note. I will mention that while most (external hardware) synths responded to panning on a “channel” basis, there were some that responded on a per-note basis (the Roland Sound Canvas being one), where you could send a MIDI CC#10 right before each note and each note would pan to a different position independently. It wasn’t as spectacular sounding as you might imagine… although you could do things like generate a chord of 4 notes at the same time, with a pan message between each note, and spread the 4 notes across the stereo field…

Anyway, I gave up on supporting that sort of thing (individual pan messages between each note in a simultaneously sounding chord) since so few synths actually were able to respond to it. Now I just assume it’s channel panning…
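At the byte level, that trick is just interleaving a pan controller message with each note-on. A sketch (plain C++, raw MIDI 1.0 status bytes; the function name is mine):

```cpp
#include <cstdint>
#include <vector>

// Build the raw MIDI bytes for "pan this note": a CC#10 (pan) message
// immediately followed by the note-on, as some hardware honoured per note.
std::vector<uint8_t> pannedNoteOn(uint8_t channel,   // 0-15
                                  uint8_t note,      // 0-127
                                  uint8_t velocity,  // 1-127
                                  uint8_t pan)       // 0 = left, 64 = centre, 127 = right
{
    return {
        static_cast<uint8_t>(0xB0 | channel), 10, pan,        // control change, CC#10 = pan
        static_cast<uint8_t>(0x90 | channel), note, velocity  // note-on
    };
}
```

The catch, as noted above, is that on most synths CC#10 affects the whole channel, so it also drags any still-sounding notes on that channel to the new position.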


Ah, very cool! Yea, CC#10 is the one, my mistake. Is your project open-source?

I feel like as an effect, “exploding” a chord into the stereo field is really cool - use cases mostly feel limited to “soundscape-y” things as I think of it, but it’s a creative effect and so who knows!

nope, sorry… :person_shrugging:


I think you’ll first need to experiment with which synths allow per-voice note panning. Honestly, if you can figure out which synths can do it, then it’s as simple as connecting a MIDI-triggered random to the pan parameter.

The plugin sounds cool, but in my opinion it’s very much a feature of the synthesizer rather than a plugin in itself.


I didn’t mention this before because you mentioned you want this working for any synth, but there are now some synths around that support MPE (MIDI Polyphonic Expression). MPE repurposes MIDI channels to allow up to 15 voices, each with their own pitch, pan, etc. expressions, and not just as static values at note start but dynamically during the note.

Illustrated with Reaper’s MIDI event list and the Surge XT synth:
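The channel-per-note trick at the heart of MPE can be sketched as a simple allocator: in the lower zone, channel 1 is the master and each sounding note gets its own member channel (2–16), so per-channel messages like CC#10 or pitch bend become effectively per-note. A hypothetical sketch (plain C++, 0-based channel numbers):

```cpp
#include <map>
#include <set>

// Minimal MPE-style channel allocator (lower zone: master = channel 0,
// members = channels 1..15). Each note owns one member channel for its
// lifetime, so channel-wide expressions apply to that note alone.
class MpeAllocator
{
public:
    MpeAllocator()
    {
        for (int c = 1; c <= 15; ++c)
            free.insert(c);
    }

    // Returns the member channel for this note, or -1 if all 15 are in use.
    int noteOn(int midiNote)
    {
        if (free.empty())
            return -1;
        const int ch = *free.begin();
        free.erase(free.begin());
        held[midiNote] = ch;
        return ch;
    }

    // Releases the note's channel back to the pool; -1 if unknown note.
    int noteOff(int midiNote)
    {
        auto it = held.find(midiNote);
        if (it == held.end())
            return -1;
        const int ch = it->second;
        held.erase(it);
        free.insert(ch);
        return ch;
    }

private:
    std::set<int> free;
    std::map<int, int> held;   // midiNote -> member channel
};
```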


Oh, very interesting. I probably agree that my plugin is a bit of a hack / solving the problem at the wrong layer. Well…without broader MPE support, or per-note CC#10 honoring, my plugin might do in a pinch. Plus, it’s just an interesting / challenging side project for me. :grinning: But the “right” solution would seem to be for MPE to be widely supported and mappable by synth makers, which is…a big ask.

I’ve just been playing with MPE mappings in Ableton’s Meld (NotePB → Pan, I tried using Slide and it wasn’t a bipolar control even with MPE Control with “Centered” checked) and it’s pretty close to exactly what I’d want if I were going for micro control on panning per note - big win for sure. MPE Pan → Random is pretty great; all I’d want above and beyond this would be high-level “algorithms” – “alternate pan back and forth”, “place each note incrementally through the stereo field e.g. -50, -40, -30 … 30, 40, 50”.

But even if it’s a hack, a general-purpose solution would be nice. We’ll see how this goes for me.

FYI: MIDI 2.0 also supports per-note panning.

and SynthEdit supports both MPE and MIDI 2.0 per-note panning on macOS and with VST3 (via translation from VST3’s note-expression feature).
