How to remove digital artifacts in a simple delay plugin?


#1

I’ve made a simple delay plugin with a variable delay length (10-1000ms). I’ve noticed that adjusting delLength in real time produces some annoying distortion. I’m going to try adding some interpolation to move more smoothly between the adjusted delLength values, but I wanted to post my algorithm first to see if anyone has thoughts on why it might be distorting, or possible approaches to fix it:

void OutputCollinDelayAudioProcessor::applyDelay (AudioBuffer<float>& buffer, AudioBuffer<float>& delayBuffer)
{

    const int numSamples = buffer.getNumSamples();
    int dpr, dpw;

    int sampleRate = getSampleRate();
    
    const float feedback = *parameters.getRawParameterValue ("feedback");
    const float dryWetRatio = *parameters.getRawParameterValue ("wetDry");
    const float delLength = *parameters.getRawParameterValue ("time");

    
    delayReadPosition_ = (int)(delayWritePosition_ - (delLength * sampleRate) + delayBufferLen_) % delayBufferLen_;
    
    float dryMix = (1.0-dryWetRatio);
    float wetMix = dryWetRatio;
    
    for (int channel = 0; channel < getTotalNumOutputChannels(); ++channel)
    {
        
        dpr = delayReadPosition_;
        dpw = delayWritePosition_;
        
        float* const channelData = buffer.getWritePointer (channel);
        float* delayData = delayBuffer_.getWritePointer (jmin (channel, delayBuffer_.getNumChannels() - 1));
    

        for (int i = 0; i < numSamples; ++i)
        {
            const float in = channelData[i];
            float out = dryMix * in + wetMix * delayData[dpr];
            delayData[dpw] = in + (delayData[dpr] * feedback);
            ++dpr %= delayBufferLen_;
            ++dpw %= delayBufferLen_;
            channelData[i] = out;
        }
  
    }
    delayReadPosition_ = dpr;
    delayWritePosition_ = dpw;
}

#2

Looks ok to me.
When you change the delay length you get a discontinuity in the signal. So you might consider crossfading over a block between the old and the new read position whenever the delay length has changed for the new block.

A performance tip: see if you can rewrite the loop using AudioBuffer::copyFrom(…) etc.


#3

Yes, try using big vector functions to copy data. One way is to compute how much of your last delayData you need. I have 2 pages coming up in a book on this, as it’s tricky but perfectly doable for such a “small” plugin.
This will also get rid of the “%” operation, which can be worse than an if: the branch is taken seldom, so the predictor only makes a bad prediction the first time, and the rest of the time it should perform faster than a modulo.
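The branch-based wrap can be sketched like this (a standalone illustration, not the poster’s code; advanceIndex is a made-up name):

```cpp
#include <cassert>

// Wrap a circular-buffer index with a branch instead of "%". The branch
// is taken only once per trip around the buffer, so the branch predictor
// makes it nearly free; the integer modulo would run on every sample.
inline int advanceIndex (int pos, int bufferLen)
{
    ++pos;
    if (pos >= bufferLen)   // rarely taken
        pos = 0;
    return pos;
}
```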


#4

Point 1:
Slightly confused: do you mean that instead of setting the new read position to whatever the new slider value is, I should set it to the previous read position? I’m not sure how to crossfade them, since the algorithm already increments the read position as it moves along.

Point 2: I updated the algorithm a bit:

int dpr, dpw;
int sampleRate = getSampleRate();

const float feedback = *parameters.getRawParameterValue ("feedback");
const float dryWetRatio = *parameters.getRawParameterValue ("wetDry");
const float delLength = *parameters.getRawParameterValue ("time");

delayReadPosition_ = (int)(delayWritePosition_ - (delLength * sampleRate) + delayBufferLen_) % delayBufferLen_;

float dryMix = (1.0-dryWetRatio);
float wetMix = dryWetRatio;

dpr = delayReadPosition_;
dpw = delayWritePosition_;

for (int channel = 0; channel < getTotalNumOutputChannels(); ++channel)
{
    dpr = delayReadPosition_;
    dpw = delayWritePosition_;
    const float *channelData = buffer.getReadPointer (channel);
    
    float* delayData = delayBuffer_.getWritePointer (jmin (channel, delayBuffer_.getNumChannels() - 1));
    
    float* wetData = wetBuffer_.getWritePointer (channel);
    

    for (int i = 0; i < wetBuffer_.getNumSamples(); ++i)
    {
        delayData[dpw] = channelData[i] + (delayData[dpr] * feedback);
        wetData[i] = delayData[dpw];
        ++dpr %= delayBufferLen_;
        ++dpw %= delayBufferLen_;
    }
    buffer.applyGain(channel, 0, buffer.getNumSamples(), dryMix);
    wetBuffer_.applyGain(channel, 0, wetBuffer_.getNumSamples(), wetMix);
    buffer.addFrom(channel, 0, wetBuffer_, channel, 0, buffer.getNumSamples());
}
delayReadPosition_ = dpr;
delayWritePosition_ = dpw;

I managed to get those three operations out of my loop, but I’m not sure how to use the copyFrom function to replace the delay calculations. Wouldn’t I need to chain together a few more adds and applyGains to calculate the delay?


#5

Point 1:
You compute the read position:

delayReadPosition_ = (int)(delayWritePosition_ - (delLength * sampleRate) + delayBufferLen_) % delayBufferLen_;

You can detect whether the time has changed like this:

int newReadPosition = (int)(delayWritePosition_ - (delLength * sampleRate) + delayBufferLen_) % delayBufferLen_;
bool needCrossfade = newReadPosition != delayReadPosition_;

This way you can use needCrossfade later as an indicator that you have to process your buffer twice: once with delayReadPosition_ fading out, and a second time adding with newReadPosition fading in.
When you are finished, set delayReadPosition_ = newReadPosition;
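A minimal sketch of that two-pass crossfade, assuming a linear fade over the block (delayData, oldReadPos, newReadPos are placeholder names, not actual members):

```cpp
// Crossfade one block between two read positions in a circular delay
// buffer: the old tap fades out linearly while the new tap fades in.
void crossfadeBlock (float* out, const float* delayData, int delayBufferLen,
                     int oldReadPos, int newReadPos, int numSamples)
{
    for (int i = 0; i < numSamples; ++i)
    {
        const float fadeIn  = (float) i / (float) numSamples;
        const float fadeOut = 1.0f - fadeIn;

        out[i] = fadeOut * delayData[oldReadPos]
               + fadeIn  * delayData[newReadPos];

        if (++oldReadPos >= delayBufferLen) oldReadPos = 0;
        if (++newReadPos >= delayBufferLen) newReadPos = 0;
    }
}
```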

Point 2:
You can do everything blockwise:

const int numSamples = buffer.getNumSamples();
// copy to delayBuffer_
delayBuffer_.copyFrom (channel, dpw, buffer, channel, 0, numSamples);
// fetch from delay buffer
buffer.copyFrom (channel, 0, delayBuffer_, channel, dpr, numSamples);
// feedback
delayBuffer_.addFrom (channel, dpw, buffer, channel, 0, numSamples);

Obviously you have to check whether you read or write beyond the end of delayData. In that case you have to split into two copy calls:

const int beyond = dpw + numSamples - delayBuffer_.getNumSamples();
if (beyond > 0) {
    delayBuffer_.copyFrom (channel, dpw, buffer, channel, 0, numSamples-beyond);
    delayBuffer_.copyFrom (channel, 0, buffer, channel, numSamples-beyond, beyond);
}
else {
    delayBuffer_.copyFrom (channel, dpw, buffer, channel, 0, numSamples);    
}

and similar…
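For reference, the matching read-side split can be sketched with raw arrays and memcpy (readWrapped, dest, dpr are made-up names for illustration):

```cpp
#include <cstring>

// Read numSamples from a circular buffer starting at dpr. If the run
// would pass the end of the buffer, split it into two copies, the same
// way the write side is split above.
void readWrapped (float* dest, const float* delayData, int delayLen,
                  int dpr, int numSamples)
{
    const int beyond = dpr + numSamples - delayLen;
    if (beyond > 0)
    {
        std::memcpy (dest, delayData + dpr,
                     sizeof (float) * (size_t) (numSamples - beyond));
        std::memcpy (dest + (numSamples - beyond), delayData,
                     sizeof (float) * (size_t) beyond);
    }
    else
    {
        std::memcpy (dest, delayData + dpr,
                     sizeof (float) * (size_t) numSamples);
    }
}
```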

You can use the gain parameter to do the wet/dry ratio. I’ll leave that for you to figure out :wink:


#6

Have you tried using the LinearSmoothedValue class?

At the beginning of the process loop, set the value you want for the delay length, then update the value while you are looping over individual samples. I think you will need to update the read/write positions for each sample if you want to avoid those artifacts.


#7

Awesome! Using this class to smooth the parameter changes helped remove a lot of the digital artifacts. I should probably still implement some kind of interpolation but this helped a lot.


#8

I’ve implemented both smooth parameter changes and interpolation, and I am still getting distorted artifacts. Again, any time the delay modulates it starts distorting slightly; I can hear the flanging (interpolation) working, but there’s always some distortion mixed in. Am I missing something?

void DelayChain::applyDelay (AudioSampleBuffer& buffer,  const float delInSamples, int channel, const float dryMix, const float wetMix, const float feedback) {
    
    float fractionalDelay = delInSamples - (int)delInSamples;

    this->delayReadPosition_ = (int)(delayWritePosition_ - (delInSamples) + delayBufferLen_) % delayBufferLen_;
    
    const float* inputBuffer = buffer.getReadPointer (channel);
    float* outputBuffer = buffer.getWritePointer(channel);

    float* delayBuffer = this->delayBuffer_.getWritePointer (channel);

    for (int i = 0; i < wetBuffer_.getNumSamples(); ++i)
    {
        float xn = inputBuffer[i];
        float yn = delayBuffer[this->delayReadPosition_];
        int rIndex_1 = this->delayReadPosition_ - 1;
        if (rIndex_1 < 0) {
            rIndex_1 = delayBufferLen_ - 1;
        }
        float yn_1 = delayBuffer[rIndex_1];
        float interp = linInterp(0.0, 1.0, yn, yn_1, fractionalDelay);
        yn = interp;
        delayBuffer[this->delayWritePosition_] = xn + yn*feedback;
        outputBuffer[i] = wetMix*yn + dryMix*xn;
        ++this->delayReadPosition_ %= delayBufferLen_;
        ++this->delayWritePosition_ %= delayBufferLen_;
    }
}

void OutputCollinDelayAudioProcessor::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
    feedbackSmooth.setValue(*parameters.getRawParameterValue ("feedback"));
    delLenSmoothL.setValue(*parameters.getRawParameterValue ("timeL"));
    delLenSmoothR.setValue(*parameters.getRawParameterValue ("timeR"));
    dryWetMixSmooth.setValue(*parameters.getRawParameterValue ("wetDry"));
    
    const float dryMix = (1.0-dryWetMixSmooth.getNextValue());
    const float wetMix = dryWetMixSmooth.getNextValue();
    const bool isLinked = *parameters.getRawParameterValue ("link") < 0.5f ? false : true;
    
    const bool isSynched = *parameters.getRawParameterValue ("tempoSynch") < 0.5f ? false : true;
    const float feedback = feedbackSmooth.getNextValue();
    AudioPlayHead::CurrentPositionInfo posInfo;
    updateBPM(posInfo);

    if (isLinked) {
        delLenSmoothR = delLenSmoothL;
    }
    float delInSamplesL;
    float delInSamplesR;

    float currentTempDenom = 4.0;
    
    float beatsPerMilliSecond = 60000.0/currentBPM;
    float beatsPerMilliSecondNorm = beatsPerMilliSecond/1000.0;

    if (!isSynched) {
        delInSamplesL = delLenSmoothL.getNextValue() * getSampleRate();
        delInSamplesR = delLenSmoothR.getNextValue() * getSampleRate();

    } else {
        delInSamplesR = beatsPerMilliSecondNorm * getSampleRate();
        delInSamplesL = delInSamplesR;
    }
    
    for (int channel = 0; channel < getTotalNumOutputChannels(); ++channel)
    {
        if (channel == 0) {
            leftDelayChain.applyDelay(buffer, delInSamplesL, channel, dryMix, wetMix, feedback);
        } else if (channel == 1) {
            rightDelayChain.applyDelay(buffer, delInSamplesR, channel, dryMix, wetMix, feedback);
        }
    }
    meterSource.measureBlock (buffer);
}

#9

Ok - the code is a bit dense to read and it’s too early for me.

But it looks like you are only changing the delay time every block. So it probably skips around a bit. Try calling getNextValue() every sample instead?
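To illustrate the per-sample idea without pulling in JUCE, here is a rough stand-in for what LinearSmoothedValue does (a simplified sketch, not JUCE’s actual implementation); the point is that getNextValue() gets called once per sample inside the inner loop:

```cpp
// Minimal linear parameter smoother: ramps toward the target over a
// fixed number of steps. Call getNextValue() once per sample so the
// parameter never jumps by a whole block's worth at once.
class SmoothedValue
{
public:
    void reset (float initial, int rampLengthSamples)
    {
        current = target = initial;
        rampLength = rampLengthSamples;
        stepsLeft = 0;
    }

    void setValue (float newTarget)
    {
        if (newTarget == target) return;
        target = newTarget;
        stepsLeft = rampLength;
        step = (target - current) / (float) rampLength;
    }

    float getNextValue()
    {
        if (stepsLeft <= 0) return target;
        --stepsLeft;
        current += step;
        return current;
    }

private:
    float current = 0.0f, target = 0.0f, step = 0.0f;
    int rampLength = 0, stepsLeft = 0;
};
```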


#10

Incidentally this is the delay code I’ve been using for a few different purposes. All feedback welcome…

/** A basic delay abstraction, with linear interpolation for the reads. */
class SingleDelay
{
public:
	SingleDelay(int bsize)
		:
		buf(1, nextPowerOfTwo(bsize)),
		mask(nextPowerOfTwo(bsize) - 1),
		ptr(mask)
	{
		buf.clear();
		data = buf.getWritePointer(0);
	}

	~SingleDelay()
	{}

	/** Return the value for a fractional delay time in samples. */

	float geti(float delayTime) const
	{
		const int   dInt = delayTime;
		const float dFrac = delayTime - float(dInt);

		const int dlyIndex = (ptr + dInt) & mask;

		const float a = data[dlyIndex];
		const float b = data[(dlyIndex + 1) & mask];

		return a * (1.0f - dFrac) + b * dFrac;
	}
	/** Store the sample. Do this before returning delay taps
	for an accurate timing.

	@note ptr points to the sample just written. So when we read later
	with a delay time of zero we actually get a sample!
	*/
	void put(float inSignal)
	{
		data[mask & --ptr] = inSignal;
	}

private:
	float* data;
	AudioSampleBuffer buf;
	int mask;
	int ptr;
};

#11

I suppose the size of the array is a power of 2? It seems fine indeed for a variable delay line. For a fixed delay line, I would remove the interpolation, the difference is not high enough that a user would notice IMHO.


#12

Agreed, I think - though I keep meaning to do more testing … the interpolation is relatively expensive. Some plugins I have use a get(…) function rather than geti().

And yes- the buffer is a power of 2 - this is forced at the constructor


#13

I suppose the cast to int is the most fragile part; you need to trust that you get a floor rounding and not nearest (as you would get from lround). Too bad the standard doesn’t provide these :confused:
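A quick standalone demonstration of the difference (truncation vs. floor vs. nearest):

```cpp
#include <cmath>

// A float-to-int cast truncates toward zero. For the positive delay
// times used here that matches floor(), but lround() rounds to nearest
// and would shift the tap by up to half a sample, and for negative
// values truncation and floor disagree.
int  truncated = (int) 3.7f;                // cast: truncates
long floored   = (long) std::floor (3.7f);  // same result when positive
long nearest   = std::lround (3.7f);        // rounds to nearest instead
int  negTrunc  = (int) -0.5f;               // truncation, not floor!
```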


#14

Bingo! :slight_smile:


#15

Got it working then?


#16

I’m not sure there’s any advantage in using an AudioBuffer object in there either… probably some raw heap would do.


#17

Yes, got it working. Before, I was incrementing the read position by 1 after processing each block; now at the end of each block I update the read position by getting the next smoothed value and adjusting it relative to my write position (just as I do before the start of the loop). Works great now!

Incidentally, I agree about the AudioBuffer object; I switched it out for a float* last night.


#18

You’re also doing channel-by-channel processing. I suspect (untested) that it’s a lot easier to do sample-by-sample processing and then do the channels in an inner loop for each sample. That way you can reuse coefficients, e.g. after smoothing, for each channel, e.g.:

for (int sample = 0 ... numSamples)
{
  auto delayTime = delay.getNextValue()
  for (int channel = 0 ... numChannels)
  {
   .... delay code ... 
  }
}

#19

I was just thinking about that. Since I have been using independent delay times for the left and right channels it hasn’t come up, but that makes sense with regard to the feedback and dry/wet mix. Thanks for the suggestion!


#20

Yes and no. You should do it per parameter, per channel, and then per sample. For instance, if you are smoothing your parameters, use a window of 32 samples. This way you have a fixed smoothing time, and the parameter is constant within each window, which means vectorization, and this will speed up your code by quite a factor.
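One way to sketch the fixed-window idea (processSmoothed and applyGainBlock are made-up names; applyGainBlock stands in for the real per-channel delay processing):

```cpp
#include <algorithm>

// Hold each smoothed parameter constant over a 32-sample sub-block, so
// the inner loop has no per-sample parameter update and the compiler
// can vectorize it.
constexpr int kSmoothingWindow = 32;

void applyGainBlock (float* data, int n, float gain)
{
    for (int i = 0; i < n; ++i)   // trivially vectorizable: gain is constant
        data[i] *= gain;
}

void processSmoothed (float* data, int numSamples,
                      float& currentGain, float targetGain)
{
    for (int start = 0; start < numSamples; start += kSmoothingWindow)
    {
        const int n = std::min (kSmoothingWindow, numSamples - start);

        // one parameter update per sub-block instead of per sample
        currentGain += 0.5f * (targetGain - currentGain);

        applyGainBlock (data + start, n, currentGain);
    }
}
```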