Hi, in my plugin I’ve got a universal “width control” that affects a chorus at the end of a processor chain:
void setWidth (float w)
{
    auto rate  = juce::jmap (w, 0.f, 1.f, 0.5f, 3.f);
    auto depth = juce::jmap (w, 0.f, 1.f, 0.f, 0.25f);
    auto delay = juce::jmap (w, 0.f, 1.f, 5.f, 8.f);
    auto mix   = juce::jmap (w, 0.f, 1.f, 0.f, 0.7f);

    auto& chorus = chain.get<chorusIndex>();
    chorus.setRate (rate);
    chorus.setDepth (depth);
    chorus.setCentreDelay (delay);
    chorus.setMix (mix);
}
However, when I test in my DAW with an M/S EQ, no stereo information registers: I can hear the phasing and the frequencies bouncing around, but it all comes out in the middle. Does anyone know if this is the intended output? I can't find any documentation about specifically "allowing" stereo voices in synthesisers, or stereo processors inside processor chains, so I don't think the signal is being summed to mono further down the chain.
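One quick way to rule things out is to check inside processBlock whether the two output channels are sample-identical (dual mono). A minimal, JUCE-free sketch (`isDualMono` is a hypothetical helper, not a library function):

```cpp
#include <cmath>
#include <cstddef>

// Returns true if the two channels are effectively identical (dual mono),
// i.e. there is no stereo difference between them.
// 'epsilon' allows for small floating-point discrepancies.
bool isDualMono (const float* left, const float* right,
                 std::size_t numSamples, float epsilon = 1.0e-6f)
{
    for (std::size_t i = 0; i < numSamples; ++i)
        if (std::abs (left[i] - right[i]) > epsilon)
            return false;

    return true;
}
```

Calling this with `buffer.getReadPointer (0)` and `buffer.getReadPointer (1)` after the chain has run would tell you definitively whether the chorus output is mono before it even reaches the DAW.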
This is the full ProcessorChain for a monophonic synth voice:
// the harmonic oscillator has the chorus at the end
juce::dsp::ProcessorChain<HarmonicOscillator,
                          Envelope,
                          juce::dsp::Gain<float>,
                          juce::dsp::WaveShaper<float>,
                          juce::dsp::Gain<float>> processorChain;
The only other thing I can think of is the process block of the harmonic oscillator:
void process (const ProcessContext& context) noexcept
{
    auto outBlock   = context.getOutputBlock();
    auto numSamples = outBlock.getNumSamples();

    // run the main chain in place on the incoming context
    mainChain.process (context);

    // render the harmonic chain into a scratch block, then mix it in
    auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
    block.clear();

    juce::dsp::ProcessContextReplacing<float> ctx (block);
    harmonicChain.process (ctx);

    outBlock.getSubBlock ((size_t) 0, (size_t) numSamples)
            .add (block); // add the sized sub-block, not the whole tempBlock
}
I would assume that this add() operation preserves per-channel (stereo) information, but I'm not sure at this point.
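For what it's worth, my understanding is that an AudioBlock add works channel by channel, so any left/right difference in the source block should survive in the destination. A plain-array sketch of that behaviour (`addBlock` is illustrative, not the actual JUCE implementation):

```cpp
#include <cstddef>

// Channel-wise accumulate: each destination channel gains the matching
// source channel, so left and right stay independent throughout.
void addBlock (float* const* dest, const float* const* src,
               std::size_t numChannels, std::size_t numSamples)
{
    for (std::size_t ch = 0; ch < numChannels; ++ch)
        for (std::size_t i = 0; i < numSamples; ++i)
            dest[ch][i] += src[ch][i];
}
```

Under that model, add() can't be what is collapsing the image; if both blocks are two channels wide, a stereo tempBlock stays stereo after the mix-in. The question is whether tempBlock was allocated with two channels in the first place.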
Any help would be much appreciated,
Cheers
