How to use SynthesiserSound

It’s been almost three years since I started work on my “monster” synthesizer, although I’ve only been working on it daily for about half that time.

For the last few weeks I have been optimizing and reorganizing variables and components, saving about 7% of runtime memory, not counting the increased speed.

My SynthVoice.h file is about 9300 lines of code (I’m not using a cpp file). I can clearly see from other JUCE synths whose source code is public that mine is much longer. But keep in mind that mine has the following features, all working smoothly:

Please feel free to skip this part and go towards the end of this post to read my actual question.

  • 8 Tone Generators.
  • 8 “Engines” assignable to any tone generator: Wavetable (Look Up Table), Additive (only 4 parts, but each part can be any of millions of waveforms, with its own frequency and level), Spiral (FM type sounds), and Analog Curve. Plus the following wavetable variations: Transwave (Grain type), Splitwave, Scrambler (Vertical grainer), and Wavemix Table.

Each tone generator’s engine is tweakable via the following:

  • Each engine with up to 16 Unison voices.
  • Adjustable phase start and end.
  • Phase Cycle Start Mode and Unison Phase Cycle Spread.
  • 4 Frequency Modes: Normal FM range (note related), Fixed FM range, LFO range (note related), and LFO fixed range.
  • 5 Portamento Modes, each with or without Legato, and each with 3 frequency variations.
  • 6 Hard Sync Modes from any other tone generator.
  • Span - Real-time extension of a tone generator’s waveform to span into another tone generator’s waveform.
  • 20 Phase Distortion Modes, some allowing special adjustment, all with Unison Spread and level adjustment (two modes allow Pulse Width on any waveform).
  • Unison Modulation: 4 AM Modes, RM, ARM (AM + RM), FM, FM Feedback 1, FM Feedback 2, Feedback SFX, and PM.
  • Unison Fidelity tweaking for special sounds/effects.
  • 4 Noise Modes, which can be set to affect unison voices either individually or combined.
  • Tone Generator Modulation (combined Unison voices): 4 AM Modes, RM, and ARM.
  • Dynamic Mixer between any two other tone generators, of course assignable to an envelope and/or LFO.
  • 5 Morphing Types (morphing from one tone generator to another), each with 6 frequency modes, plus an adjustable rate.

8 Envelopes - Six stages, each with adjustable modulation (Rate and level), and with start and end levels adjustable for inverted envelope types.

8 Effect Slots - Assignable to one of the following effect modules and to any tone generator(s): Bitcrusher (6 types), Chorus (1-5 stages), Stereo Delay, Distortion (34 types, each with a type of cross modulation to any other tone generator, and most with 5 cross source variations), Flanger, Phaser (2-32 stages), Reverb, Stereorizer (3 types), and Waveshaper (37 types, each with a type of cross modulation to any other tone generator). Most effects have their own filtering of the wet level (LP, BP, HP, AP, and Comb). Bitcrusher, Distortion, and Waveshaper can each be assigned at Voice Unison Level, Voice Combined Unison (Note) Level, Tone Generator Level (combined tone generator notes), or Global (combined notes and tone generators).

8 LFOs - Each with a simple 50,000+ waveform creator, and depth assignable to any envelope.

8 Filters - Each with Low Pass, Band Pass, Dual Band Pass, and High Pass modes, each from 6 dB to 36 dB, plus 12-stage All Pass and 12-stage Comb. Each filter can be assigned to any one or multiple tone generators. An envelope and/or LFO is assignable to cutoff, resonance, and dry/wet mix.

Most parameters can be assigned to any envelope and/or any LFO.

Except for the Chorus, Delay, Flanger, Phaser, and Stereorizer, whose code resides in PluginProcessor, all code is in SynthVoice.

I do not use SynthesiserSound / SynthSound.h at all. I read in the description of the SynthesiserSound class that it “Describes one of the sounds that a Synthesiser can play”.

Now the following is meant as a joke.

Does this mean I should put a text String in my SynthSound.h file saying “Yeah, this is a cool saw buzz type sound that can be manipulated in a gazillion ways, jadi, jadi, jadi…”?

No, of course not! So can anyone please give me an example, or perhaps a link, showing what I can use SynthSound for? From the class description it does not seem I can put any of my sound creation code in it - or can I?

Also, without having seen my code, are there any other obvious ways I could reduce my SynthVoice code length? Some DSP code is shared in a separate class file, since it can be used by both SynthVoice and PluginProcessor, but back when I did so it did not reduce the memory consumption of my plugin, perhaps because all external functions were inlined into SynthVoice.h?

Here is an “old” YouTube video demonstration, but keep in mind I have done a lot since!


Actually the answer to both questions is sort of the same.

I agree that 9300 lines of code in one file is too much (and I would recommend separating the implementation into a cpp file to reduce compile time).
I usually start re-thinking my architecture when I approach 1000.
A good read-up on this subject is the Single Responsibility Principle, along with keeping every class in a separate h/cpp file.
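For illustration, here is a minimal sketch of such a split. The WavetableEngine class and its members are made up for this example, purely to show the shape:

// WavetableEngine.h - declarations only, so files that include this stay cheap to compile
#pragma once
#include <vector>

class WavetableEngine
{
public:
    void prepare (double sampleRate);
    float getNextSample (float frequencyHz);

private:
    std::vector<float> table;
    double currentSampleRate = 44100.0;
    float phase = 0.0f;
};

// WavetableEngine.cpp - the implementation; editing it recompiles only this translation unit
#include "WavetableEngine.h"

void WavetableEngine::prepare (double sampleRate)
{
    currentSampleRate = sampleRate;
    phase = 0.0f;
}

float WavetableEngine::getNextSample (float frequencyHz)
{
    if (table.empty())
        return 0.0f;

    // Nearest-sample lookup; real code would interpolate between samples
    const auto index = static_cast<size_t> (phase * (float) table.size()) % table.size();
    const float sample = table[index];

    phase += static_cast<float> (frequencyHz / currentSampleRate);
    if (phase >= 1.0f)
        phase -= 1.0f;

    return sample;
}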

In your case the engines, tone generators, fx, and modulators could/should all be separate classes.
The SynthesiserSound could (and logically would) contain your tone generators. Possibly separate ones for every engine type.

If we take a piano as an analogy, then the sounds are basically all 88 keys and the voices are the number of fingers you have (i.e. the number of sounds that can be played back simultaneously).

Every SynthesiserVoice can ‘accept’ the sounds it plays back using the bool canPlaySound (SynthesiserSound*) method. That way you can differentiate between the ‘engines’.

Thank you very much for replying. Since my original post, I have been able to reduce lines of code by 2800!

What confuses me about SynthesiserSound is this description on the JUCE class page:

" The SynthesiserSound is a passive class that just describes what the sound is - the actual audio rendering for a sound is done by a SynthesiserVoice. This allows more than one SynthesiserVoice to play the same sound at the same time.".

Especially “…passive class that just describes what the sound is…”.

The SynthesiserSound does no rendering. It just ‘describes’ what sound is currently playing via polymorphism.
For instance a SamplerSound is derived from a SynthesiserSound and manages the sample data, which will be requested by the voice when it’s rendering.

What basically happens is that the Synthesiser will deduce what sound should be playing using the appliesToNote() and appliesToChannel() methods. It will then look for a free voice (or steal one) that is able to playback the sound.

So in that sense the sound ‘describes’ what it stands for.
In your case you could have a class called WavetableSound derived from SynthesiserSound that holds the wavetable.
Or a derived class called AdditiveSound that only implements the pure virtual methods appliesToNote() and appliesToChannel().
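As a minimal sketch (the AudioBuffer member and the getWavetable() accessor are my own assumptions, just to illustrate the ‘passive data holder’ idea):

class WavetableSound : public SynthesiserSound
{
public:
    explicit WavetableSound (AudioBuffer<float> tableToUse)
        : wavetable (std::move (tableToUse)) {}

    // The only pure virtuals; this sound applies to every note and channel
    bool appliesToNote (int) override    { return true; }
    bool appliesToChannel (int) override { return true; }

    // Passive access for the voice that renders it - no audio processing here
    const AudioBuffer<float>& getWavetable() const noexcept { return wavetable; }

private:
    AudioBuffer<float> wavetable;
};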

In combination you would create WavetableVoice and AdditiveVoice classes (derived from SynthesiserVoice) that would actually do the rendering via renderNextBlock().
Those voices would implement the canPlaySound() method, for instance by dynamic casting:

// Accept only sounds of the matching type ('override' may only appear inside the class body)
bool WavetableVoice::canPlaySound (SynthesiserSound* sound)
{
    return dynamic_cast<WavetableSound*> (sound) != nullptr;
}
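And to round out the picture, a rough sketch of the whole voice class. It assumes the WavetableSound from above; the oscillator details are placeholders, not a real implementation:

class WavetableVoice : public SynthesiserVoice
{
public:
    bool canPlaySound (SynthesiserSound* sound) override
    {
        return dynamic_cast<WavetableSound*> (sound) != nullptr;
    }

    void startNote (int midiNoteNumber, float velocity,
                    SynthesiserSound* sound, int /*pitchWheel*/) override
    {
        // Safe: the Synthesiser only calls this after canPlaySound() accepted the sound
        playingSound = static_cast<WavetableSound*> (sound);
        level = velocity;
        // ... derive the oscillator frequency from midiNoteNumber ...
    }

    void stopNote (float /*velocity*/, bool /*allowTailOff*/) override
    {
        playingSound = nullptr;
        clearCurrentNote();
    }

    void pitchWheelMoved (int) override {}
    void controllerMoved (int, int) override {}

    void renderNextBlock (AudioBuffer<float>& output, int startSample, int numSamples) override
    {
        if (playingSound == nullptr)
            return;

        // ... read from playingSound->getWavetable() and add the result
        //     into 'output' from startSample for numSamples samples ...
    }

private:
    WavetableSound* playingSound = nullptr;
    float level = 0.0f;
};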

This will also help in reducing overall file size when you give all these voice and sound sub-classes separate files.