What is the suggested/expected way to do multi-voicing with the Synthesiser class?

So, I have a synth with voices, and I want to add a simple detune knob, but I don’t want to mess with my voicing. For instance, if I do a single voice with the detune knob up, I want all the notes I press to correctly steal the voice of the last note.

So how is something like detune normally implemented? I'm guessing I should just add the logic for it to my voice class's renderNextBlock() function, but this also seems like the kind of thing where doing it manually would be reinventing the wheel.

Any advice?

P.S. Sorry if any of this is jargon, really tired atm.

Alright. My plan of action is going to be to just have two angleDelta variables in my voice class, and mix them.

But I still don’t know how to handle synced vs unsynced detunes. I suppose the synth will just have some random point of relativity which offsets the starting position when sync is off. And then whenever a note is played with sync on, it’ll update that point of relativity… all seems reasonable.

What you're saying is a bit confusing to me, and it seems to suggest that you have a different idea in mind of what detuning typically means. Let me start by saying that when people talk about a synth being detuned, it generally means something different than when people talk about a guitar or a singer being out of tune.

When someone says their guitar or their voice is out of tune, what they mean is that when (for example) they try to play a C note, the note they actually play might be halfway between C and C#, or something like that. Or maybe they're actually playing a D.

But when people talk about a detuned synth, what this means is that when you play a note on the synth, it plays multiple voices at the exact same time, and one of those voices is a tiny bit higher in pitch than C while the other is a tiny bit lower than C. Not far enough apart to sound like two different voices, but close enough to have a pleasantly phat sound.

This sounds strange to me too. I don't think there is such a thing as a synced detune. If you get two voices into sync, they're either in unison or in some kind of harmony. Synced means their timing has been specifically chosen to create a harmony; detuned means their timing has been chosen to avoid one.

I figured it out. I did mean the synth form of detune: two waves tuned in opposite directions, or smeared. I'm a bit confused as to what to call those two waves, though; "voices" already means something different to me.

In my mind, the number of voices == the number of notes you can play at a time. So using two voices for one note would break that equality.

A synced vs unsynced detune is a little switch that makes it so either both waves start at angle = 0, or they both start wherever they would be if their first cycles had both begun at (for instance) the moment of initialization. The effect is either a sort of plucky hard attack if the two are synced (their waves match at the beginning), or the ability to keep tapping a key and hear a consistent progression of phasing.

My solution ended up being to just have an array of angleDeltas in my Voice class:

// class SmoothWaveVoice
private:
    SmoothWaveSynth* synth;                  // owning synth, for shared parameters
    double currentAngle[4] = { 0, 0, 0, 0 }; // phase of each detuned oscillator
    double angleDelta[4]   = { 0, 0, 0, 0 }; // per-sample phase increment per oscillator
    double level = 0.0;                      // note level (velocity / gain)
    ADSR gainAdsr;                           // amplitude envelope
    ADSR filterAdsr;                         // filter envelope
    IIRFilter filters[4];                    // one filter per oscillator

Ok, glad you managed to figure it out. Just thought I'd point out that it's important to use unambiguous language when you ask for help. When I heard the word "sync", I assumed you meant that the two waves would stay in sync and never slip out of phase.
