dsp::Chorus not producing stereo information

Hi, in my plugin I’ve got a universal “width control” that affects a chorus at the end of a processor chain:

    void setWidth(float w)
    {
        auto rate =     juce::jmap(w, 0.f, 1.f, 0.5f, 3.f);
        auto depth =    juce::jmap(w, 0.f, 1.f, 0.f, 0.25f);
        auto delay =    juce::jmap(w, 0.f, 1.f, 5.f, 8.f);
        auto mix =      juce::jmap(w, 0.f, 1.f, 0.f, 0.7f);
        auto& chorus = chain.get<chorusIndex>();
        chorus.setRate(rate);
        chorus.setDepth(depth);
        chorus.setCentreDelay(delay);
        chorus.setMix(mix);
    }

However, when I test in my DAW with an M/S EQ, I see no stereo information being registered. I can hear the phasing and bouncing of the frequencies, but it is all coming out in the middle. Does anyone know if this is the intended output? I don’t see any documentation about specifically “allowing” stereo voices in synthesisers or stereo processors in processor chains, so I don’t think the signal is getting summed further down the chain.

This is the full ProcessorChain for a monophonic synth voice:

    // the harmonic oscillator has the chorus at the end
    juce::dsp::ProcessorChain<  HarmonicOscillator,
                                Envelope,
                                juce::dsp::Gain<float>,
                                juce::dsp::WaveShaper<float>,
                                juce::dsp::Gain<float>> processorChain;

The only other thing I can think of is the process block of the harmonic oscillator:

    void process (const ProcessContext& context) noexcept
    {
        auto outBlock = context.getOutputBlock();
        auto numSamples = outBlock.getNumSamples();
        
        mainChain.process(context);
        
        auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
        block.clear();
        juce::dsp::ProcessContextReplacing<float> ctx (block);
        harmonicChain.process (ctx);
        
        outBlock
            .getSubBlock ((size_t) 0, (size_t) numSamples)
            .add (tempBlock);
    }

I would assume that this `add()` operation retains stereo information, but I do not know at this point.

Any help would be much appreciated,
Cheers

The phaser and chorus don’t generate stereo: they modify left and right equally.
I personally use the phaser as 2 x mono, with a modified speed for the left and right versions. That makes it generate stereo.


You can implement modulation effects to be stereo by giving the LFO of the right channel a phase offset of up to 180°.

Ah I see, I guess that makes sense, but it’s a shame there’s no interface to modify this directly in the Chorus class.

Do you have some sort of code snippet you can show on how to do this?

I like that the class forces you to get a little creative. I wouldn’t want every new plugin release to sound the same.

Yeah, I get that, but I’m unsure how you’d actually split a dsp::Chorus into two mono dsp::Chorus instances and affect them differently, whether that’s giving the LFO a phase offset, a different rate, etc. Would you derive from dsp::Chorus and override the process function (I don’t think that’s possible)? Or copy and paste the source code and modify it from there? Just wondering about the options.

I just use two instances of the phaser with different settings to get it producing interesting stereo; the chorus will work the same way.
In the example, bbufL and bbufR are two AudioBuffer<float>s.
Each AudioBuffer has one channel.

    juce::dsp::AudioBlock<float> inBlockL (bbufL);
    juce::dsp::AudioBlock<float> inBlockR (bbufR);
    juce::dsp::ProcessContextReplacing<float> contextL (inBlockL);
    juce::dsp::ProcessContextReplacing<float> contextR (inBlockR);
    phaserL.process (contextL);
    phaserR.process (contextR);

I used the chorus before and had one channel set the depth to a negative amount. It throws an assertion, but it actually works fine.

In the future, though, I would probably copy-paste the chorus and make a more complex one out of it.


Here is the implementation I settled on if anyone is curious:

class Chorus
{
public:
    void prepare (const juce::dsp::ProcessSpec& spec)
    {
        reset();
        for (auto& ch : processors)
            ch.prepare({spec.sampleRate, spec.maximumBlockSize, 1});
    }
    
    template <typename ProcessContext>
    void process (const ProcessContext& context) noexcept
    {
        // ProcessContextReplacing's converting constructor copies the input
        // block into the output block when given a non-replacing context,
        // so both context types are handled here.
        juce::dsp::ProcessContextReplacing<float> ctx (context);
        auto leftBlock  = ctx.getOutputBlock().getSingleChannelBlock (0);
        auto rightBlock = ctx.getOutputBlock().getSingleChannelBlock (1);
        
        // One independent mono chorus per channel produces the stereo spread.
        processors[0].process (juce::dsp::ProcessContextReplacing<float> (leftBlock));
        processors[1].process (juce::dsp::ProcessContextReplacing<float> (rightBlock));
    }
    
    //==============================================================================
    void reset() noexcept
    {
        for (auto& ch : processors)
            ch.reset();
    }
    void setWidth (float w)
    {
        auto rate   = juce::jmap (w, 0.f, 1.f, 0.5f, 3.f);
        auto depth  = juce::jmap (w, 0.f, 1.f, 0.f, 0.25f);
        auto delayL = juce::jmap (w, 0.f, 1.f, 5.f, 7.f);
        auto delayR = juce::jmap (w, 0.f, 1.f, 5.5f, 9.f);
        auto mix    = juce::jmap (w, 0.f, 1.f, 0.f, 0.7f);

        for (auto& ch : processors)
        {
            ch.setRate (rate);
            ch.setDepth (depth);
            ch.setMix (mix);
        }

        // slightly different centre delays per channel widen the image further
        processors[0].setCentreDelay (delayL);
        processors[1].setCentreDelay (delayR);
    }
private:
    std::array<juce::dsp::Chorus<float>, 2> processors;
};