In the process of writing my own SoundFont parser/player (SFZero had some major flaws), I managed to get audio playback working without using anything SynthesiserSound provides. What is this class even for? It's a friend of SynthesiserVoice, so the voice can access all of its members, but you could just as easily put all of those members into the SynthesiserVoice class directly.
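To make the point concrete, here's a minimal sketch (class names made up, patterned after the standard JUCE sine-voice tutorial): the SynthesiserSound subclass carries nothing, and every per-note member lives on the voice.

```cpp
#include <cmath>
#include <juce_audio_basics/juce_audio_basics.h>

// The sound is an empty token: its only job is to answer "yes".
struct TrivialSound : public juce::SynthesiserSound
{
    bool appliesToNote (int) override    { return true; }
    bool appliesToChannel (int) override { return true; }
};

// Every per-note member lives on the voice; the sound argument is ignored.
class SineVoice : public juce::SynthesiserVoice
{
public:
    bool canPlaySound (juce::SynthesiserSound* s) override
    {
        return dynamic_cast<TrivialSound*> (s) != nullptr;
    }

    void startNote (int midiNote, float velocity,
                    juce::SynthesiserSound*, int) override
    {
        phase = 0.0;
        level = velocity * 0.15;
        phaseDelta = juce::MathConstants<double>::twoPi
                   * juce::MidiMessage::getMidiNoteInHertz (midiNote)
                   / getSampleRate();
    }

    void stopNote (float, bool) override     { clearCurrentNote(); phaseDelta = 0.0; }
    void pitchWheelMoved (int) override      {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (juce::AudioBuffer<float>& out,
                          int startSample, int numSamples) override
    {
        for (int i = 0; i < numSamples; ++i)
        {
            auto sample = (float) (std::sin (phase) * level);
            for (int ch = 0; ch < out.getNumChannels(); ++ch)
                out.addSample (ch, startSample + i, sample);
            phase += phaseDelta;
        }
    }

private:
    double phase = 0.0, phaseDelta = 0.0, level = 0.0;
};
```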
So what is the point of this class? Was the idea that a single Synthesiser instance could load multiple SynthesiserSounds, each paired with a different voice class? Most synth plugins have a single per-voice architecture that all of their presets use, no? If we can move our entire architecture into the Voice class itself with no loss of functionality, what does a separate SynthesiserSound class buy us?
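If that multi-sound routing was the intent, the wiring would presumably look something like this hypothetical sketch (all class names invented), where canPlaySound() routes each sound type to a matching voice class:

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Two unrelated sound types; a sound can hold data shared across voices.
struct SampleSound : public juce::SynthesiserSound
{
    bool appliesToNote (int) override    { return true; }
    bool appliesToChannel (int) override { return true; }
    juce::AudioBuffer<float> sampleData; // shared by every voice rendering it
};

struct OscSound : public juce::SynthesiserSound
{
    bool appliesToNote (int) override    { return true; }
    bool appliesToChannel (int) override { return true; }
};

// A voice that volunteers only for SampleSounds via canPlaySound().
class SampleVoice : public juce::SynthesiserVoice
{
public:
    bool canPlaySound (juce::SynthesiserSound* s) override
    {
        return dynamic_cast<SampleSound*> (s) != nullptr;
    }

    void startNote (int, float, juce::SynthesiserSound* s, int) override
    {
        source = static_cast<SampleSound*> (s); // safe: canPlaySound vetted it
    }

    void stopNote (float, bool) override     { clearCurrentNote(); source = nullptr; }
    void pitchWheelMoved (int) override      {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (juce::AudioBuffer<float>&, int, int) override
    {
        // Rendering from source->sampleData omitted for brevity.
    }

private:
    SampleSound* source = nullptr;
};

// An OscVoice would mirror this, accepting only OscSound. A single
// Synthesiser could then hold both families at once:
void setUpSynth (juce::Synthesiser& synth)
{
    synth.addSound (new SampleSound());
    synth.addSound (new OscSound());
    for (int i = 0; i < 8; ++i)
        synth.addVoice (new SampleVoice());
    // plus OscVoice instances for the other sound type...
}
```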
Is there a solid example of a juce::Synthesiser with multiple SynthesiserVoice implementations that require separate SynthesiserSound classes? Or does everyone end up writing their synth to support only one SynthesiserSound, and thus need only one SynthesiserVoice class?