Clicking sound at beginning and end of MIDI notes using juce_dsp

I’m developing a synthesizer using the juce_dsp module. Right now I have an Oscillator and an ADSR. They seem to be working fine, except there is a distinctly noticeable clicking sound whenever a MIDI note ends, regardless of the ADSR parameters. Below is my SynthVoice class.

class SynthVoice : public SynthesiserVoice
{
public:
    SynthVoice(int type)
    {
        waveType = type;
    }

    bool canPlaySound(SynthesiserSound* sound) override
    {
        return dynamic_cast<SynthesiserSound*>(sound) != nullptr;
    }

    void startNote(int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition)
    {
        if (waveType == sine)
            sinOsc.setFrequency(MidiMessage::getMidiNoteInHertz(midiNoteNumber));
        else if (waveType == triangle)
            triangleOsc.setFrequency(MidiMessage::getMidiNoteInHertz(midiNoteNumber));
        else if (waveType == saw)
            sawOsc.setFrequency(MidiMessage::getMidiNoteInHertz(midiNoteNumber));
        else if (waveType == square)
            squareOsc.setFrequency(MidiMessage::getMidiNoteInHertz(midiNoteNumber));
        adsr.noteOn();
    }

    void stopNote(float velocity, bool allowTailOff)
    {
        adsr.noteOff();
    }

    void pitchWheelMoved(int newPitchWheelValue)
    {

    }

    void controllerMoved(int controllerNumber, int newControllerValue)
    {

    }

    void prepareToPlay(double sampleRate, int samplesPerBlock, int outputChannels)
    {
        juce::dsp::ProcessSpec spec;
        spec.maximumBlockSize = samplesPerBlock;
        spec.sampleRate = sampleRate;
        spec.numChannels = outputChannels;

        sinOsc.prepare(spec);
        triangleOsc.prepare(spec);
        sawOsc.prepare(spec);
        squareOsc.prepare(spec);

        gain.prepare(spec);

        adsr.setSampleRate(sampleRate);
    }

    void renderNextBlock(AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
    {
        juce::dsp::AudioBlock<float> audioBlock{ outputBuffer };

        if (waveType == sine)
            sinOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == triangle)
            triangleOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == saw)
            sawOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == square)
            squareOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));

        gain.setGainLinear(gainValue);
        gain.setRampDurationSeconds(1);
        gain.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));

        adsrParams.attack = atk;
        adsrParams.decay = dcy;
        adsrParams.sustain = sus;
        adsrParams.release = rls;
        adsr.setParameters(adsrParams);
        adsr.applyEnvelopeToBuffer(outputBuffer, startSample, numSamples);
    }

    enum waveTypes
    {
        sine = 1,
        triangle,
        saw,
        square
    };

    float gainValue = 0.1f;
    float atk = 0.01f;
    float dcy = 10.0f;
    float sus = 1.0f;
    float rls = 0.01f;

    int waveType = sine;

private:
    juce::ADSR adsr;
    juce::ADSR::Parameters adsrParams;

    dsp::Oscillator<float> sinOsc{ [](float x) { return std::sin(x); } };
    dsp::Oscillator<float> triangleOsc{ [](float x) { return std::abs(x / MathConstants<float>::pi); } };
    dsp::Oscillator<float> sawOsc{ [](float x) { return x / MathConstants<float>::pi; } };
    dsp::Oscillator<float> squareOsc{ [](float x) { return x < 0.0f ? -1.0f : 1.0f; } };

    dsp::Gain<float> gain;
};

Just a guess, but it’s possible that your gain ramp is resetting when you call setRampDurationSeconds(). Because you call it on every renderNextBlock() callback, if both the gain and the ADSR are in their release phase at the end of one callback but haven’t reached zero yet, the gain may reset and jump back up to 1 when the next callback runs.

Are clicks still present if you move all your gain & ADSR setup stuff into your prepareToPlay()?

Try it with just this much in your renderNextBlock:

        juce::dsp::AudioBlock<float> audioBlock{ outputBuffer };

        if (waveType == sine)
            sinOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == triangle)
            triangleOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == saw)
            sawOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));
        else if (waveType == square)
            squareOsc.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));

        gain.process(juce::dsp::ProcessContextReplacing<float>(audioBlock));

        adsr.applyEnvelopeToBuffer(outputBuffer, startSample, numSamples);
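For reference, the setup that moves out of renderNextBlock() could sit at the end of your prepareToPlay(), roughly like this. This is just a sketch using your existing member names; the 10 ms ramp is a guess at a sensible value (your original 1 second ramp is long enough to be audible as a fade):

void prepareToPlay(double sampleRate, int samplesPerBlock, int outputChannels)
{
    // ... ProcessSpec setup and the oscillator / gain prepare() calls as before ...

    adsr.setSampleRate(sampleRate);

    // set these once here instead of on every block
    gain.setGainLinear(gainValue);
    gain.setRampDurationSeconds(0.01);

    adsrParams.attack = atk;
    adsrParams.decay = dcy;
    adsrParams.sustain = sus;
    adsrParams.release = rls;
    adsr.setParameters(adsrParams);
}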

Just tried it. Clicks are still present with that much in renderNextBlock and the rest in prepareToPlay.

interesting…

I see that all you’re doing in stopNote() is calling adsr.noteOff()… so maybe try wrapping the rest of your renderNextBlock like this:

renderNextBlock()
{
    if (adsr.isActive())
    {
         // the rest of your renderNextBlock() code here
    }
}

Tried it, still no luck.

Interesting. I’m not sure then…

Could it be a problem outside of the SynthVoice class? This is my PluginProcessor’s processBlock method; when I call synth.renderNextBlock, I pass 0 as the startSample. From reading other threads, that seems to be a common cause of the problem, but I’m not sure what else to put there.

void ArbitraryAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
    buffer.clear();

    for (int i = 0; i < synth.getNumVoices(); ++i)
    {
        if (auto voice = dynamic_cast<SynthVoice*>(synth.getVoice(i)))
        {
            voice->gainValue = gain;
            voice->waveType = currentWaveType;
        }
    }

    synth.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
}

in the processBlock code you just posted, is synth an instance of the Synthesiser base class, or a SynthesiserVoice?

synth is an instance of the Synthesiser base class.

then the way you’re calling renderNextBlock should be correct: the Synthesiser base class splits the buffer at each MIDI event and calls each voice’s renderNextBlock with the right startSample and numSamples for you, so passing 0 and buffer.getNumSamples() at the top level is fine

something that does pop out, though, is that the dynamic_cast in a for loop you’re using to set the gain & wave type is pretty nasty…

I would move that into functions inside your Synthesiser-inherited class, so that you can do

synth.updateGainValue(gain);
synth.updateWaveType(currentWaveType);

inside your synth class, these might look something like this:

void MySynthesiserClass::updateGainValue(float newGainVal)
{
    for (auto* voice : voices)
        voice->gainValue = newGainVal;
}

void MySynthesiserClass::updateWaveType(int newWaveType)
{
    for (auto* voice : voices)
        voice->waveType = newWaveType;
}

I didn’t have a Synthesiser-inherited class beforehand, but I’ve made one now. The problem is that the base SynthesiserVoice class doesn’t have gainValue or waveType; those are members of my SynthVoice subclass. My workaround would be to use the dynamic_cast in a for loop, but that’s exactly what you’ve told me not to do. Current code in the Synthesiser-inherited class:

class ArbSynth : public Synthesiser
{
public:
    void updateGainValue(float newGainVal)
    {
        for (auto* voice : voices)
            voice->gainValue = newGainVal; //Gives error because voice doesn't have gainValue
    }

    void updateWaveType(int newWaveType)
    {
        for (auto* voice : voices)
            voice->waveType = newWaveType; //Gives error because voice doesn't have waveType
    }
};

With workaround:

class ArbSynth : public Synthesiser
{
public:
    void updateGainValue(float newGainVal)
    {
        for (auto* v : voices)
        {
            if (auto voice = dynamic_cast<SynthVoice*>(v))
            {
                voice->gainValue = newGainVal;
            }
        }
    }

    void updateWaveType(int newWaveType)
    {
        for (auto* v : voices)
        {
            if (auto voice = dynamic_cast<SynthVoice*>(v))
            {
                voice->waveType = newWaveType;
            }
        }
    }
};

oh, I see…

well, at least if you’re doing this within the Synthesiser-inherited class, you can stay within the ranged-for syntax over the voices array and keep your processBlock looking simpler with synth.updateGainValue(newGainVal) ¯\_(ツ)_/¯

I suppose. Thanks for your help and quick responses.

I suspect the issue you’re having is because you need to add the output of each voice to the buffer in renderNextBlock instead of replacing the contents of the buffer.


How would I do that? Would it be something to do with the SynthVoice’s renderNextBlock, the Synthesiser’s renderNextBlock, or the PluginProcessor’s processBlock?

ah, good catch @alibarker1 !!

you could make an internal “synthesisBuffer” within your SynthVoice, do your processing on that internal buffer, then at the end of your renderNextBlock you do

for (int chan = 0; chan < outputBuffer.getNumChannels(); ++chan)
    outputBuffer.addFrom(chan, startSample, synthesisBuffer, 0, startSample, numSamples, 1.0f); 

the second startSample is the start-sample index in the source buffer you’re copying FROM, in this case your internal synthesisBuffer. So you could either start from sample 0 on every renderNextBlock and render your output into synthesisBuffer at indices 0 to numSamples - 1, in which case you’d always pass 0 for that second startSample argument. Or you can do it as written above, which assumes the output is written to indices startSample to startSample + numSamples - 1 on each renderNextBlock.
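For the first variant (always rendering into synthesisBuffer from index 0), that copy would look something like this; just a sketch, assuming a mono, 1-channel synthesisBuffer:

for (int chan = 0; chan < outputBuffer.getNumChannels(); ++chan)
    outputBuffer.addFrom(chan, startSample, synthesisBuffer, 0, 0, numSamples, 1.0f);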

^ this is all in your SynthVoice class, btw

and also be careful about which channel of the synthesisBuffer you read from; you might just make a mono, 1-channel synthesisBuffer whose output gets copied to every channel of the output buffer

I’m a bit lost on how to make the synthesisBuffer. Do I just declare an AudioBuffer like so?

AudioBuffer<float> synthesisBuffer;

And what scope should I declare it in? The renderNextBlock method, or the class?

The class. You never want to allocate memory during an audio rendering function, so declare synthesisBuffer just like that, as a private member of your SynthVoice class.

In your SynthVoice’s prepareToPlay function, you should call

synthesisBuffer.setSize(1, samplesPerBlock, true, true, true);
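
Putting the pieces from this thread together, the voice’s render path might end up looking something like this. It’s only a sketch, not a drop-in fix: it assumes a mono, 1-channel synthesisBuffer, renders into it from index 0 on every call, and adds (rather than replaces) into the shared output buffer.

void prepareToPlay(double sampleRate, int samplesPerBlock, int outputChannels)
{
    juce::dsp::ProcessSpec spec;
    spec.maximumBlockSize = samplesPerBlock;
    spec.sampleRate = sampleRate;
    spec.numChannels = 1; // the oscillators render into the mono synthesisBuffer

    sinOsc.prepare(spec);
    triangleOsc.prepare(spec);
    sawOsc.prepare(spec);
    squareOsc.prepare(spec);
    gain.prepare(spec);

    adsr.setSampleRate(sampleRate);
    synthesisBuffer.setSize(1, samplesPerBlock, true, true, true);
}

void renderNextBlock(AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
{
    if (!adsr.isActive())
        return;

    // render this voice into the internal buffer, starting at index 0 each call
    synthesisBuffer.clear();

    juce::dsp::AudioBlock<float> audioBlock{ synthesisBuffer };
    auto block = audioBlock.getSubBlock(0, (size_t) numSamples);

    if (waveType == sine)
        sinOsc.process(juce::dsp::ProcessContextReplacing<float>(block));
    else if (waveType == triangle)
        triangleOsc.process(juce::dsp::ProcessContextReplacing<float>(block));
    else if (waveType == saw)
        sawOsc.process(juce::dsp::ProcessContextReplacing<float>(block));
    else if (waveType == square)
        squareOsc.process(juce::dsp::ProcessContextReplacing<float>(block));

    gain.process(juce::dsp::ProcessContextReplacing<float>(block));
    adsr.applyEnvelopeToBuffer(synthesisBuffer, 0, numSamples);

    // add (don't replace) this voice's output into the shared buffer
    for (int chan = 0; chan < outputBuffer.getNumChannels(); ++chan)
        outputBuffer.addFrom(chan, startSample, synthesisBuffer, 0, 0, numSamples, 1.0f);
}

(synthesisBuffer here is the private AudioBuffer<float> member discussed above, and the spec uses 1 channel because the voice renders mono before being copied to every output channel.)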