Hey all,
First-time poster here, so let me start by saying that JUCE has been massively helpful for me. Thank you!
Now, I've built a small two-oscillator monophonic subtractive synth, and I want to reorganize the internals around AudioProcessorGraph, since that seems like the right approach for growing this project into a more involved VST. But after reading the documentation, looking through the source code, and experimenting in my project, I'm still confused about how to actually wire up a working graph. I think it would be extremely helpful to have a simple example, just a couple of lines of code, either here or in the docs, showing a basic working case.
Some questions I've been unable to answer in particular:
1) I'm confused about the AudioGraphIOProcessor class. Do I need to create one in input mode and one in output mode and add them to my graph in order to get audio flowing correctly? That is, connect the input-mode node into my custom AudioProcessor modules, then sum those into the output-mode node?
1b) If the answer to the previous question is "no," then how do I control which nodes write to the final buffer? Is it any node with no outgoing connection?
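For reference, here is the kind of wiring I pieced together from the docs. This is an untested sketch of my own guess, not working code (`MyFilterProcessor` is a placeholder for one of my custom modules), so please correct anything that's wrong:

```cpp
// Untested sketch: is this how the I/O nodes are meant to be used?
using AudioGraphIOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;
using Node = juce::AudioProcessorGraph::Node;

juce::AudioProcessorGraph graph;
graph.setPlayConfigDetails (2, 2, sampleRate, blockSize);
graph.prepareToPlay (sampleRate, blockSize);

// One AudioGraphIOProcessor in input mode, one in output mode?
Node::Ptr inputNode  = graph.addNode (std::make_unique<AudioGraphIOProcessor> (
                           AudioGraphIOProcessor::audioInputNode));
Node::Ptr outputNode = graph.addNode (std::make_unique<AudioGraphIOProcessor> (
                           AudioGraphIOProcessor::audioOutputNode));

// My custom processor (placeholder class name).
Node::Ptr filterNode = graph.addNode (std::make_unique<MyFilterProcessor>());

// Connect input -> filter -> output, one connection per channel.
for (int ch = 0; ch < 2; ++ch)
{
    graph.addConnection ({ { inputNode->nodeID,  ch }, { filterNode->nodeID, ch } });
    graph.addConnection ({ { filterNode->nodeID, ch }, { outputNode->nodeID, ch } });
}
```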
2) What's the role of the AudioProcessorPlayer? My understanding was that, in the `processBlock` method of my main plugin processor, I could simply call `myGraph.processBlock` with the same buffer and all would be well. But everywhere I find information on AudioProcessorGraph, I also find mention of AudioProcessorPlayer. Have I been missing something? Do I need the player to collect the buffer from the graph?
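In other words, my plan was roughly the following, with no AudioProcessorPlayer anywhere. This is a sketch of what I intended to try, assuming `graph` is an AudioProcessorGraph member of my plugin processor; is this a valid way to use the graph inside a plugin?

```cpp
void MySynthAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    // Keep the graph's channel/rate/block configuration in sync with the host's.
    graph.setPlayConfigDetails (getTotalNumInputChannels(),
                                getTotalNumOutputChannels(),
                                sampleRate, samplesPerBlock);
    graph.prepareToPlay (sampleRate, samplesPerBlock);
}

void MySynthAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                          juce::MidiBuffer& midiMessages)
{
    // Just hand the host's buffer to the graph and let it do the routing?
    graph.processBlock (buffer, midiMessages);
}
```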
Thanks for your help!