Newbie: delayed signal is distorted, I don't know why

Hello,
I am trying to create a chorus VST plugin as a school project and I have run into an issue.
This is what my processBlock currently looks like. The chorusEf parameters are in the range 0 to 1. I know there are a lot of mistakes and I will fix them later. My main problem is that, for reasons unknown to me, the delayed signal is heavily distorted. I have been trying to figure this out for a good 8 or so hours and my mind is about to explode.
Can anyone help me with this?

    float delayInSamples = getSampleRate();
    const int bufferSize = 44100*2; 
    static float circbuffer[bufferSize] = { 0 };
    static int writeIndex = 0;

    for (int channel = 0; channel < buffer.getNumChannels(); ++channel) {

        float* channelData = buffer.getWritePointer(channel);
       
        for (int i = 0; i < buffer.getNumSamples(); ++i) {
            float delayedSample = circbuffer[(int)(writeIndex + bufferSize - delayInSamples ) % bufferSize];
            float input = channelData[i] + chorusEf.feedbackParam * delayedSample;
            circbuffer[writeIndex] = input;
            writeIndex = (writeIndex + 1) % bufferSize;
            channelData[i] = chorusEf.mixParam * delayedSample + (1.0f - chorusEf.mixParam) * channelData[i];
          
        }
    }
    

You are using a single array of floats for multiple channels. Also, making it static won’t help.
Make your buffer a private member of your class, but instead of using a bare array of floats, use AudioBuffer or, if you want to stick with the STL, a simple array of two vectors.

Then, init them in prepareToPlay with the right number of channels (in case you use AudioBuffer) and the expected samples per block.
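
A minimal sketch of that setup, assuming a standard JUCE AudioProcessor; the member names (delayBuffer, writeIndex) and the two-second maximum are illustrative, not from the thread. Note the buffer is sized here by the longest delay the effect needs, since a delay line has to hold more than one block of history:

    // In the private section of the processor class:
    //   juce::AudioBuffer<float> delayBuffer;
    //   int writeIndex = 0;

    void prepareToPlay(double sampleRate, int samplesPerBlock) override
    {
        juce::ignoreUnused(samplesPerBlock);
        const int maxDelaySamples = (int)(sampleRate * 2.0); // room for up to 2 s of delay
        delayBuffer.setSize(getTotalNumOutputChannels(), maxDelaySamples);
        delayBuffer.clear(); // the memory is not guaranteed to start out silent
        writeIndex = 0;
    }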

I tried that with the AudioBuffer class. To get the samples I did circbuffer.getSample(channel, (int)(writeIndex + bufferSize - delayInSamples) % bufferSize) and to write into the buffer I did circbuffer.addSample(channel, writeIndex, input).
Same issue: the delayed signal is heavily distorted.

Don’t use addSample, that sums into what is already in your buffer; try setSample instead. Also remember to call .clear() on the buffer in prepareToPlay, the JUCE AudioBuffers are not initialised to silence automatically.
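
A minimal sketch of the read-then-overwrite pattern being suggested, reusing the variable names from the code above:

    // Read the delayed sample first...
    float delayedSample = circbuffer.getSample(channel,
        (writeIndex + bufferSize - (int)delayInSamples) % bufferSize);

    float input = channelData[i] + chorusEf.feedbackParam * delayedSample;

    // ...then overwrite the slot at the write position. setSample replaces the
    // stored value; addSample would keep summing into it block after block.
    circbuffer.setSample(channel, writeIndex, input);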

Still the same issue :/. What I found out is that the delayed signal is completely fine on the left channel, but on the right channel it is distorted.

To me, it still sounds like the description of the #1 most common programming mistake that we see on the forum, which has been mentioned before.

    float delayTime = chorusEf.feedbackParam; // in milliseconds

    float delayInSamples = (delayTime / 1000.0f) * getSampleRate();

    static int writeIndex = 0;

    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear(i, 0, buffer.getNumSamples());

    for (int channel = 0; channel < totalNumInputChannels; ++channel) {

        float* channelData = buffer.getWritePointer(channel);
       
        //DBG(&buffer.getWritePointer(channel));
       
        for (int i = 0; i < buffer.getNumSamples(); ++i) {
            
            float delayedSample = circbuffer[(writeIndex + bufferSize - static_cast<int>(delayInSamples)) % bufferSize][channel];

            float input = channelData[i] + 0*delayedSample;
            circbuffer[writeIndex][channel] = input;
            writeIndex = (writeIndex + 1) % bufferSize;

            channelData[i] = chorusEf.mixParam * delayedSample + (1.0f - chorusEf.mixParam) * channelData[i];
           // DBG("channel: " << channel << "data: " << channelData[i]);
        }
       
    }

This is how I changed the code. I am using a 2D circbuffer to separate the channels. It works really weirdly. The delay itself works: it delays the input signal properly, but the delayed signal is distorted. With some delay time values there is even more repetition of the delayed signal; it will play the input signal like 5 times in a row, etc. Could this have anything to do with aliasing?

Maybe you should try interpolating the delay next. Instead of casting delayInSamples to int, you would cast everything else to float or double, or whatever it is, and use this non-integer value to come up with samples in between the actual samples of the ring buffer, which is interpolation. Start simple with linear interpolation; it will have a significant impact on the sound quality. Then go on with a cubic Hermite spline for pristine sound quality.

Another issue I can see here is that delayInSamples is not a buffer but a single value. That means there can only be one delay time per block, so when you change the delay time there cannot be a smooth transition, only a steppy one, and if those steps come once per block they are extremely fast, which would explain the grainy nature of the artefacts you experienced. So this would also be something that needs to be addressed. If both of these things are implemented correctly, I think you should have something that already sounds really good.
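
A minimal sketch of both suggestions as a standalone helper; the names (readInterpolated, smoothedDelay) and the smoothing coefficient are illustrative, not from the thread:

    #include <cmath>
    #include <vector>

    // Linearly interpolated read from a ring buffer at a fractional delay.
    float readInterpolated(const std::vector<float>& ring, int writeIndex, float delayInSamples)
    {
        const int size = (int)ring.size();
        const float readPos = (float)writeIndex + (float)size - delayInSamples;
        const int i0 = (int)readPos % size;               // sample just before the read point
        const int i1 = (i0 + 1) % size;                   // the next sample, wrapping around
        const float frac = readPos - std::floor(readPos); // fractional position between them
        return ring[(size_t)i0] * (1.0f - frac) + ring[(size_t)i1] * frac;
    }

For the second point, smoothing the delay time per sample (a simple one-pole lowpass) makes the delay glide instead of jumping once per block:

    // Inside the per-sample loop, before reading from the ring buffer:
    smoothedDelay += 0.001f * (targetDelayInSamples - smoothedDelay);
    float delayedSample = readInterpolated(ring, writeIndex, smoothedDelay);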

This being static looks very odd to me.


Just to explain this to OP: DSP state should never be static (unless you are going for super glitched-out, unstable madness that is probably going to crash), because everything that is static is shared across all instances of the plugin in your host, or at least across all instances in the same process, which can depend on the DAW and where the plugins sit in the project, etc.

Short answer: just make your life easy right now and don’t use static.

In OP’s code the static writeIndex is already shared between the two stereo channels’ processing, so I don’t suppose it’s going to work, even before you get to the multiple-plugin-instances issue.
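
Putting the last two replies together, a minimal sketch where the state lives in per-instance members (no statics) and each channel keeps its own write position, so the right channel no longer resumes where the left channel stopped; the member names (circbuffer, writeIndices) are illustrative:

    // In the private section of the processor class:
    //   std::vector<std::vector<float>> circbuffer; // one ring buffer per channel
    //   std::vector<int> writeIndices;              // one write position per channel

    for (int channel = 0; channel < totalNumInputChannels; ++channel)
    {
        int wi = writeIndices[(size_t)channel]; // this channel's own position
        float* channelData = buffer.getWritePointer(channel);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            const int readIndex = (wi + bufferSize - (int)delayInSamples) % bufferSize;
            const float delayed = circbuffer[(size_t)channel][(size_t)readIndex];

            circbuffer[(size_t)channel][(size_t)wi] =
                channelData[i] + chorusEf.feedbackParam * delayed;

            channelData[i] = chorusEf.mixParam * delayed
                           + (1.0f - chorusEf.mixParam) * channelData[i];

            wi = (wi + 1) % bufferSize;
        }

        writeIndices[(size_t)channel] = wi; // store back for the next block
    }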
