Questions About Synths (Oscillators)

Hello,

So I've finally made my first oscillator in C++ and JUCE, and I want to make more so I can build a very simple synth. So my first question is: how would you make many different oscillators to choose from? Would you create different SynthesiserVoices and SynthesiserSounds for each oscillator and somehow choose between them? If so, how would you select between these different oscillators, maybe using a ComboBox or something?

Right now I'm just talking about using one oscillator at a time. How would you use several oscillators at a time though? I wouldn't think that you'd create different voices for each, but I really don't know. 

Sorry for all of the questions lately. I'm really eager to learn. :)

I’d probably use an AudioProcessorGraph.

Rail

Could you explain that a little? Like, why would you do that? I'm really trying to figure out how things work in JUCE. In Reaktor I didn't even have to think about things like this: I would just make an oscillator, route its output to a selector I made, route a control to the selector, and that was about it. Ha. I'd really like to figure out how synths, voices, etc. work in JUCE and the ideal way of working with them. I've read a lot of posts, and AudioProcessorGraph seems to be the recommended way of routing signals, so I suppose I should be trying that out.

So if I did use one, how would I do that? Would I make different Synthesiser objects for the different oscillators I want to run together? Do you use different voices for different oscillators that will run together? I've tried scouring the forum for this but I haven't come across anything yet. Does anyone know of any example synths that use oscillators in an ideal way? I would really appreciate any help y'all could give me. I'd really like to get past this hump and continue making more oscillators. :)

Create a separate AudioProcessor for each oscillator… you can chain them together in any order using the AudioProcessorGraph (see the Host demo)… the only issue is that the AudioProcessorGraph is poorly written (sorry Jules) from a performance standpoint… it becomes exponentially slower as you add more nodes. If you search the forum you’ll find a version modified by Graeme which adds nodes much faster (removing individual nodes is still slow). I’ve tested his mod extensively and it works very well.
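For example, a minimal sketch of that idea, using what I believe is the current JUCE graph API — OscillatorProcessor here is a hypothetical AudioProcessor subclass of yours, not a JUCE class:

```cpp
// Sketch only: two hypothetical oscillator processors mixed in parallel.
juce::AudioProcessorGraph graph;

auto osc1 = graph.addNode (std::make_unique<OscillatorProcessor>());
auto osc2 = graph.addNode (std::make_unique<OscillatorProcessor>());

// The graph's built-in I/O processor stands in for the plugin's audio output.
using IOP = juce::AudioProcessorGraph::AudioGraphIOProcessor;
auto out  = graph.addNode (std::make_unique<IOP> (IOP::audioOutputNode));

// Connections to the same destination pin are summed, so the two
// oscillators mix; connect osc1 -> osc2 instead to chain them in series.
for (int ch = 0; ch < 2; ++ch)
{
    graph.addConnection ({ { osc1->nodeID, ch }, { out->nodeID, ch } });
    graph.addConnection ({ { osc2->nodeID, ch }, { out->nodeID, ch } });
}
```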

Rail

Thanks man. I'll definitely try his mods out once I figure out how to use the AudioProcessorGraph. I want to change the code I currently have (two oscillators and a multi-type filter) to use an AudioProcessorGraph. How would I get the AudioProcessorGraph to take the plugin's input/output? I'm new to JUCE and I'm just used to the default AudioProcessor, which already streams audio. Also, where exactly would I put the AudioProcessorGraph code? Like, where would I call the functions (addNode(), etc.)? Do you know of an example with a very simple implementation, or maybe have one yourself?

I've tried understanding the plugin host demo, but it has too much going on and it's hard for me to follow at this point. I'll keep searching the forums, but I haven't found anything useful so far. A simple example that takes audio input and connects one or two AudioProcessors would probably be enough for me; maybe demonstrating connecting to an AudioProcessorEditor would be helpful too. I'm probably thinking about the AudioProcessorGraph completely wrong. Idk, I need to see a simple example of it being used in a plugin.

Well I found this post from a while back that has an example of using AudioProcessorGraph.

http://www.juce.com/forum/topic/audioprocessorgraph-vst-plugin

What do you think about that example? So my idea of it was totally wrong based on this example. It seems like you create a "main" AudioProcessor, create an AudioProcessorGraph object in it, and set it up, add nodes, etc. from there. Is that right? Then I would just create an AudioProcessorEditor for that "main" processor. If that's the case, I think I might be able to get something working. 
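If I've understood that post right, I imagine the skeleton would look something like this — just a sketch of my understanding (the other required AudioProcessor overrides are left out, and OscillatorProcessor is a placeholder name for one of my own processors):

```cpp
// Sketch: a "main" AudioProcessor that owns an AudioProcessorGraph and
// delegates all audio processing to it.
class MainProcessor : public juce::AudioProcessor
{
public:
    MainProcessor()
    {
        // I/O nodes expose the plugin's own audio input/output to the graph.
        using IOP = juce::AudioProcessorGraph::AudioGraphIOProcessor;
        inputNode  = graph.addNode (std::make_unique<IOP> (IOP::audioInputNode));
        outputNode = graph.addNode (std::make_unique<IOP> (IOP::audioOutputNode));
        oscNode    = graph.addNode (std::make_unique<OscillatorProcessor>());

        // input -> oscillator processor -> output, on both channels
        for (int ch = 0; ch < 2; ++ch)
        {
            graph.addConnection ({ { inputNode->nodeID, ch }, { oscNode->nodeID,    ch } });
            graph.addConnection ({ { oscNode->nodeID,   ch }, { outputNode->nodeID, ch } });
        }
    }

    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        graph.setPlayConfigDetails (getTotalNumInputChannels(),
                                    getTotalNumOutputChannels(),
                                    sampleRate, samplesPerBlock);
        graph.prepareToPlay (sampleRate, samplesPerBlock);
    }

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi) override
    {
        graph.processBlock (buffer, midi);   // the graph does all the work
    }

    void releaseResources() override  { graph.releaseResources(); }

    // ... remaining AudioProcessor overrides omitted from this sketch ...

private:
    juce::AudioProcessorGraph graph;
    juce::AudioProcessorGraph::Node::Ptr inputNode, outputNode, oscNode;
};
```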

I'm still curious how I would select between different oscillators in a single Synthesiser object. I'm thinking that I could just create one SynthesiserVoice, implement all of my different oscillator types in it (as cases in a switch statement), and create a member function (setType() or something) that changes the switch index and updates member variables as needed.
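Something like this plain-C++ sketch of the switch idea — no JUCE types, just the waveshape selection part that would live inside the voice's render loop (all the names here are my own):

```cpp
#include <cmath>

// Sketch: one oscillator, many waveshapes, selected with a switch.
enum class WaveType { sine, saw, square };

class SwitchableOscillator
{
public:
    void setType (WaveType t)                { type = t; }
    void setFrequency (double hz, double sr) { increment = hz / sr; }

    float getNextSample()
    {
        constexpr double twoPi = 6.283185307179586;
        float out = 0.0f;

        switch (type)   // the setType() index picks the waveshape
        {
            case WaveType::sine:   out = (float) std::sin (twoPi * phase); break;
            case WaveType::saw:    out = (float) (2.0 * phase - 1.0);      break;
            case WaveType::square: out = phase < 0.5 ? 1.0f : -1.0f;       break;
        }

        phase += increment;
        if (phase >= 1.0)
            phase -= 1.0;   // wrap the normalised phase back into [0, 1)

        return out;
    }

private:
    WaveType type { WaveType::sine };
    double phase = 0.0, increment = 0.0;
};
```

In a real SynthesiserVoice this would run inside renderNextBlock(), writing each sample into the output buffer.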

I think once I find a good structure, I'll be able to stick with it and make some neat synthesizers. I guess figuring out how everything works together is the hardest part, for me anyway.

Thank you for mentioning AudioProcessorGraph. I really look forward to figuring that out, because when I think of a good synth, I think of flexible routing, and that seems like the way to go. I'll just have to experiment with it and look for more forum posts and code examples, if I can find any that I can understand. Hopefully someone will chime in and help a noob figure this out. :)

One way is to subclass SynthesiserSound to create descriptor structs for the sounds you want the synth to play; you can then add and remove them from your Synthesiser object dynamically using the clearSounds() and addSound() methods. You can reference the sound in your SynthesiserVoice subclass and use a switch to render the individual waveshapes. I just wrote a wrapper class for a Synthesiser object with a setWaveType() method and some other utility methods to pipe parameters through to the voices. You could probably do this with less code by deriving from the Synthesiser class and hacking in a setWaveType() function.
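A rough sketch of that descriptor approach, as I understand the current JUCE API (treat it as an outline, not tested code; the struct and enum names are my own):

```cpp
// Sketch: the sound carries the wave type; the voice reads it in startNote().
enum class WaveType { sine, saw, square };

struct OscSound : public juce::SynthesiserSound
{
    explicit OscSound (WaveType t) : type (t) {}
    bool appliesToNote (int) override     { return true; }
    bool appliesToChannel (int) override  { return true; }
    WaveType type;
};

struct OscVoice : public juce::SynthesiserVoice
{
    bool canPlaySound (juce::SynthesiserSound* s) override
    {
        return dynamic_cast<OscSound*> (s) != nullptr;
    }

    void startNote (int note, float velocity,
                    juce::SynthesiserSound* s, int) override
    {
        type = static_cast<OscSound*> (s)->type;   // pick the waveshape
        // ... set the frequency from the MIDI note, reset the phase, etc.
    }

    void renderNextBlock (juce::AudioBuffer<float>& buffer,
                          int startSample, int numSamples) override
    {
        // switch (type) here to render the chosen waveshape into buffer
        juce::ignoreUnused (buffer, startSample, numSamples);
    }

    void stopNote (float, bool) override {}
    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

private:
    WaveType type { WaveType::sine };
};

// Swapping the active wave type at runtime then amounts to:
//   synth.clearSounds();
//   synth.addSound (new OscSound (WaveType::saw));
```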