It seems that, from my basic understanding of the audio graph code, there is only one channel available for MIDI routing. This sole channel seems to behave as “MIDI omni”: all MIDI input sources and channels are merged into a single stream.
(See: const int AudioProcessorGraph::midiChannelIndex = 0x1000; in juce_AudioProcessorGraph.cpp)
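For context, here is a hedged sketch of how that single MIDI stream is wired between nodes in the graph. The `Connection` syntax shown is the one used by recent JUCE versions; `graph`, `midiIn`, and `synthNode` are assumed to be an `AudioProcessorGraph` and two `Node::Ptr`s already added to it:

```cpp
// The special midiChannelIndex value marks a connection as carrying the
// (merged) MIDI stream rather than an audio channel.
graph.addConnection ({ { midiIn->nodeID,    juce::AudioProcessorGraph::midiChannelIndex },
                       { synthNode->nodeID, juce::AudioProcessorGraph::midiChannelIndex } });
```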
Is it possible to route separate MIDI channels to separate VSTs? As in, MIDI instrument A, channel 01 goes to plugin A, while MIDI instrument A, channel 02 goes to plugin B, and so forth.
Similarly, is it possible to route data from separate MIDI input devices to various plugin instances, in whatever channel configuration? As in, MIDI Instrument A goes to plugin A, while MIDI Instrument B goes to plugin B, and so forth.
The attachment below demonstrates the present “MIDI omni” scenario, where two separate MIDI input “filters” are created, yet the same MIDI data is sent to both loaded VSTs.
Well, you’re right that there’s only one midi channel, but that channel can be sent to many destinations, so if you had a processor that could extract or change the midi channels within it, you could do all kinds of clever stuff with it.
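That channel-extracting processor can be very small. A hedged sketch of the idea (the class name is illustrative, the remaining pure-virtual `AudioProcessor` members are omitted for brevity, and the `MidiBuffer` iteration style shown is from JUCE 6+):

```cpp
// Passes through only the MIDI messages on one channel; insert one of these
// between the graph's MIDI input node and each plugin.
class MidiChannelFilter : public juce::AudioProcessor
{
public:
    explicit MidiChannelFilter (int channelToKeep) : channel (channelToKeep) {}

    void processBlock (juce::AudioBuffer<float>&, juce::MidiBuffer& midi) override
    {
        juce::MidiBuffer filtered;

        for (const auto metadata : midi)
        {
            const auto msg = metadata.getMessage();

            // getChannel() returns 0 for non-channel messages (clock, sysex, ...)
            // which we pass through untouched.
            if (msg.getChannel() == channel || msg.getChannel() == 0)
                filtered.addEvent (msg, metadata.samplePosition);
        }

        midi.swapWith (filtered);
    }

    // ... remaining AudioProcessor overrides omitted ...

private:
    int channel;   // 1..16
};
```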
I understand that I can split that single available channel into the 16 MIDI channels, but that doesn’t let me connect multiple synthesizers/controllers/MIDI interfaces and have them appear as separate MIDI input plugin instances (one per device), so that each can be controlled independently of the others, rather than amalgamating all their MIDI data into this one single channel (AudioProcessorGraph::midiChannelIndex).
Also, not all synthesizers/controllers let me choose which MIDI channel they use to send data to a PC (or what-have-you).
Might I suggest giving each synthesizer/controller/MIDI interface its own “midiChannelIndex” and have dedicated per-device MIDI buffers? I’m not sure how to achieve 1 Midi Input plugin instance per device, though.
Has anyone solved this with a source code sample they’d be willing to share? Or is this being addressed directly in a newer version of the JUCE toolkit?
1.) I’m trying to implement a simple live host (MIDI device -> VSTi -> VST FX -> Audio Output) where two identical (Arturia) controllers each feed a separate VSTi + FX. I solved the duplicate MIDI device driver naming issue by modifying the JUCE source to append a numeric device-ID suffix, if you will. Obviously not ideal for users where device order matters, but for me it works. I am having trouble (in general, due to being a less-than-capable C++ programmer) using two MIDI input devices with a single AudioProcessorGraph audio output.
2.) Roland’s A-PRO controllers have a unique driver and dedicated buttons that let you swap “device output” between two different MIDI device driver IDs at the touch of a controller button, allowing very easy live switching between “synth racks”. I would like to know how to get Roland A-PRO Device A feeding VSTi + FX A and A-PRO Device B feeding VSTi + FX B, with both sharing a common audio output.
I’m not yet sure if this can be done with AudioProcessorGraph as it stands, since I’m still learning JUCE (every day, as much as time/coffee permits), or if I need to somehow write my own output mixer to combine two separate objects.
Thoughts, ideas, and suggestions are very welcome. Don’t tell my girlfriend but I LOVE JUCE!
This post is just silly as I just didn’t ‘get’ how it worked at the time.
The answers are actually pretty simple. If you want to output MIDI to a device from within the graph, you can create a tiny processor wrapping every MIDI device you care about, where each processor outputs the MidiBuffer from processBlock() to your desired device. There are certainly other ways you can do this. And definitely different ways of feeding MIDI into the graph.
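That “tiny processor wrapping a MIDI device” might look something like this hedged sketch (class name illustrative; the other `AudioProcessor` overrides are omitted; `MidiOutput::openDevice` takes a device identifier from `MidiOutput::getAvailableDevices()` in recent JUCE, while older versions open by index):

```cpp
// Forwards whatever MIDI reaches this graph node to a hardware MIDI output.
class MidiOutputProcessor : public juce::AudioProcessor
{
public:
    explicit MidiOutputProcessor (const juce::String& deviceIdentifier)
        : output (juce::MidiOutput::openDevice (deviceIdentifier)) {}

    void processBlock (juce::AudioBuffer<float>&, juce::MidiBuffer& midi) override
    {
        if (output != nullptr && ! midi.isEmpty())
            output->sendBlockOfMessagesNow (midi);   // forward this block's MIDI
    }

    // ... remaining AudioProcessor overrides omitted ...

private:
    std::unique_ptr<juce::MidiOutput> output;
};
```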
Have a look at the demo and various examples in the repository, particularly the plugin host.
Thanks for the reply. I’m trying to grab input from multiple MIDI devices and process each device separately. I’m currently attempting to use two instances of AudioDeviceManager, MidiKeyboardState, MidiKeyboardComponent, etc. I’ve looked extensively at the host sample. Perhaps I’m missing something or simply need more time to digest how it all works. I will continue to study it.
As this relates to the host sample, if I understand you correctly, are you saying that the “correct” way, or at least one way, would be to have a single AudioGraphManager with multiple MIDI input filter instances, and then, separately for each filter, select only the messages coming from the device I’m interested in?
I apologize in advance if this is simple for “real” developers (which I am not) and thank you in advance for any help you can provide.
That actually makes no sense. The AudioDeviceManager is meant to control the state of the “one” currently selected audio interface. This is completely separate from your MIDI interface, even though both might live in the same piece of hardware.
Instead have a look at the MidiInput class. You can have several of them open at the same time.
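A hedged sketch of what that looks like, using the identifier-based API from JUCE 5.4+ (older versions open devices by index instead; `callbackForDevice` is an illustrative factory standing in for whatever returns a dedicated `MidiInputCallback*` per device):

```cpp
// Open every available MIDI input, each with its own callback object,
// so per-device data never gets merged into one stream.
std::vector<std::unique_ptr<juce::MidiInput>> inputs;

for (const auto& info : juce::MidiInput::getAvailableDevices())
{
    if (auto input = juce::MidiInput::openDevice (info.identifier,
                                                  callbackForDevice (info)))
    {
        input->start();
        inputs.push_back (std::move (input));
    }
}
```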
I haven’t done that yet, but I think you can have two separate MidiMessageCollector instances, one for each MidiInput, and feed each into a different processor. You only have to implement a specialised AudioIODeviceCallback that calls the two processors and mixes their output.
I would have a look into the source code of AudioProcessorPlayer and enhance that.
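A hedged sketch of that callback, modeled loosely on `juce::AudioProcessorPlayer` (names like `DualPlayer`, `collectorA`, `chainA` are illustrative; register each collector as the `MidiInputCallback` for one `MidiInput` device; the exact callback signature differs between JUCE versions, and prepare/release handling is omitted for brevity):

```cpp
// One audio callback drives two processor chains, each fed by its own
// MidiMessageCollector, and sums their audio into the shared output.
class DualPlayer : public juce::AudioIODeviceCallback
{
public:
    juce::MidiMessageCollector collectorA, collectorB;
    juce::AudioProcessor* chainA = nullptr;
    juce::AudioProcessor* chainB = nullptr;

    void audioDeviceAboutToStart (juce::AudioIODevice* device) override
    {
        const auto rate = device->getCurrentSampleRate();
        collectorA.reset (rate);
        collectorB.reset (rate);
    }

    void audioDeviceStopped() override {}

    void audioDeviceIOCallback (const float** /*in*/, int /*numIns*/,
                                float** out, int numOuts, int numSamples) override
    {
        juce::AudioBuffer<float> mix (out, numOuts, numSamples);  // wraps 'out'
        mix.clear();

        juce::AudioBuffer<float> temp (numOuts, numSamples);

        auto render = [&] (juce::MidiMessageCollector& collector,
                           juce::AudioProcessor* chain)
        {
            juce::MidiBuffer midi;
            collector.removeNextBlockOfMessages (midi, numSamples); // this device only
            temp.clear();

            if (chain != nullptr)
                chain->processBlock (temp, midi);

            for (int ch = 0; ch < numOuts; ++ch)
                mix.addFrom (ch, 0, temp, ch, 0, numSamples);       // sum into output
        };

        render (collectorA, chainA);
        render (collectorB, chainB);
    }
};
```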
Thank you for the reply. I’m less than a month into JUCE and only on a very limited schedule at best. I will dig into the MidiInput class and AudioProcessorPlayer. There are so many experiments to run (at my level) just so I can learn and simply not enough free hours in the day. I’m very much looking forward to what I can piece together with JUCE in the coming year. The community here is very professional and friendly which is welcoming. Wish you the best.