Hello, I have been trying to make a variable delay line in JUCE, but for some reason I get artifacts when I change the delay time. I added an interpolation method and smoothed the incoming values, but that didn't seem to help. Please help!
// Initialisation, called before playback starts
void JaneDelay::init(float sampleRate)
{
    // Store the sample rate
    SampleRate = sampleRate;
    // Maximum delay size in samples
    SIZE = MAXDELAY * sampleRate;
    // Allocate the delay buffer (value-initialised to zero)
    delayBuffer = new float[SIZE]();
    setDelay(0.25f, 0.0f, 0.0f);
}
void JaneDelay::setDelay(float time, float width, float mod)
{
    // Convert time from seconds to samples
    time *= SampleRate;
    // Clamp so the delay never exceeds the buffer
    Time = time < SIZE ? time : SIZE - 1;
    // Convert width from seconds to samples
    width *= SampleRate;
    // Fractional component, used for interpolation
    frac = Time - (long)Time;
}
void JaneDelay::process(float *inbuffer, int numSamples)
{
    for (int i = 0; i < numSamples; i++)
    {
        // Write input plus feedback into the delay line
        delayBuffer[writePointer++] = inbuffer[i] + output * feedBack;
        // Set the read pointer relative to the write pointer
        readPointer = writePointer - Time;
        if (readPointer < 0)
            readPointer += SIZE;
        // Linear interpolation between adjacent samples
        float a = delayBuffer[readPointer];
        float b;
        if (readPointer + 1 >= SIZE) // wrap instead of reading past the end
            b = delayBuffer[0];
        else
            b = delayBuffer[readPointer + 1];
        output = a + (b - a) * frac;
        inbuffer[i] = output;
        // Wrap the write pointer
        if (writePointer >= SIZE)
            writePointer -= SIZE;
    }
}
Sorry for the wall of code; I just can't figure out why this isn't working. If you want me to post any more of the code, please let me know!
You get artefacts because you introduce a discontinuity in the signal whenever you change the delay time. I'm not sure what the best approach here is, but smoothing the changes to the delay time should help, although I fear it might only make things a little more bearable rather than offer a real 'fix'.
You could also quickly fade the delay output to zero just before you change the delay time. To be honest, I'm not sure how commercial plugins handle this.
There are two common approaches:
Jumping: if the time changed, first read with the old time and copy it with a fade-out, then read again with the new time and add it to the signal with a fade-in.
Resampling: you keep a smoothed value of the actual delay. When that value differs from the requested delay, you resample the stored samples so they fit the requested length. That way the audio is squeezed (pitched), like a tape delay when the read head is moving.
It depends on the use case which one to choose, or you can let the user decide.
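The "jumping" approach can be sketched roughly like this. This is my own minimal mock-up, not code from an actual plugin: names like `JumpingDelay` and the 64-sample fade length are assumptions, and it handles a single mono buffer only.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the "jumping" approach: when the delay time changes, keep
// reading at the old time with a fade-out while fading in reads at the
// new time, so there is no hard discontinuity in the output.
class JumpingDelay
{
public:
    explicit JumpingDelay (size_t maxSamples)
        : buffer (maxSamples, 0.0f) {}

    void setDelay (size_t newDelaySamples)
    {
        if (newDelaySamples == currentDelay)
            return;
        oldDelay = currentDelay;
        currentDelay = newDelaySamples;
        fadeCounter = fadeLength;          // restart the crossfade
    }

    float process (float input)
    {
        buffer[writePos] = input;

        float out = readAt (currentDelay);
        if (fadeCounter > 0)
        {
            // Linear crossfade from the old read position to the new one
            float t = (float) fadeCounter / (float) fadeLength; // 1 -> 0
            out = t * readAt (oldDelay) + (1.0f - t) * out;
            --fadeCounter;
        }

        writePos = (writePos + 1) % buffer.size();
        return out;
    }

private:
    float readAt (size_t delaySamples) const
    {
        size_t idx = (writePos + buffer.size() - delaySamples) % buffer.size();
        return buffer[idx];
    }

    std::vector<float> buffer;
    size_t writePos = 0;
    size_t currentDelay = 0, oldDelay = 0;
    int fadeCounter = 0;
    static constexpr int fadeLength = 64;  // roughly 1.5 ms at 44.1 kHz
};
```

A short fade (a millisecond or two) is usually enough to hide the jump without it reading as a volume dip.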
You just have to apply a really strong lowpass filter to Time before it goes into that line and you're done. Make sure Time and readPointer are float values, so that you can interpolate instead of reading directly from delayBuffer with an integer index.
Also, be careful with the line in your init function where you `new float[SIZE]` for the delayBuffer: prepareToPlay can be called many times, so that leaks memory. It's safer to just use a std::vector.
Or rather a juce::AudioBuffer<float>, which is designed for exactly that purpose and has all the JUCE-specific interfaces, which is not the case for the general-purpose vector.
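A minimal sketch of the std::vector version (the `prepare` signature here is made up for illustration): `assign` is safe to call on every prepareToPlay, since it reuses or regrows the storage and zeroes the contents in one call.

```cpp
#include <cstddef>
#include <vector>

// Delay buffer owned by a std::vector: no manual new/delete, and calling
// prepare again on the next prepareToPlay just reallocates safely.
std::vector<float> delayBuffer;

void prepare (double sampleRate, double maxDelaySeconds)
{
    auto size = (size_t) (maxDelaySeconds * sampleRate) + 1;
    delayBuffer.assign (size, 0.0f);   // allocate and clear in one call
}
```

With juce::AudioBuffer<float> the equivalent would be setSize followed by clear.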
I just tested this out of pure curiosity, and it works very well. There is a slight detuning of the signal, but it's far better than the glitchy zipper noise you get without it.
Thanks for the suggestion. I tried it and it worked really well. It also gave me an idea for improving the code: since the lowpass filter just smooths the value like a capacitor charging, I decided to implement a short function to smooth the values directly.
void JaneDelay::process(float *inbuffer, int numSamples)
{
    for (int i = 0; i < numSamples; i++)
    {
        // (Most of the function hasn't changed; I just added this and made
        // reading and writing to the delay its own function)
        // Local variable for the target delay time
        float localTargetTime = Time;
        // Slew towards the target when the delay time changes
        if (localTargetTime != currentTime)
        {
            float timeInc = (localTargetTime - currentTime) / SampleRate;
            currentTime += timeInc;
        }
        // Read from the delay line
        output = read(currentTime + lfo + 1.0f);
    }
}
This approach has also worked really well, and you don't need to fiddle with filter parameters to get it working.
The final thing I observed is that when I changed the interpolation method to use the current sample and the previous sample, it worked a lot better. I don't really know why; in theory it shouldn't have made a difference. Maybe there was something else I fixed along the way, but the code works now, so we take those lol.
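In case it helps anyone, interpolating backwards looks roughly like this (a sketch, not the exact code from my project):

```cpp
// Linear interpolation between the previous sample and the current one,
// instead of between the current one and the next. Sketch only.
float readInterpolated (const float* buf, int size, float position)
{
    int current = (int) position;              // integer part
    float frac  = position - (float) current;  // fractional part

    int previous = current - 1;
    if (previous < 0)
        previous += size;                      // wrap at the buffer start

    float a = buf[previous];
    float b = buf[current % size];
    return a + (b - a) * frac;
}
```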
As written in the documentation for juce::DelayLine:
Note: If you intend to change the delay in real time, you may want to smooth changes to the delay systematically using either a ramp or a low-pass filter.
But also the ‘jumping’ approach suggested by @daniel should work!
For @daniel: might I ask what you mean by this technique?
Is this just ramp/smoothing plus a lowpass on the delay time/samples variable? Or am I not understanding something?
Interesting. I always used some sort of smoothing for the read index plus Lagrange interpolation to read from the buffer, without resizing. I guess it depends on how often your delay is modulated which one performs better, resizing or resampling on the fly.
The choice of smoothing has a big impact on how the modulated delay will sound; e.g. I got nice analog-style results using a physics model with velocity and acceleration. If you imagine a tape delay, the choice of smoothing determines how the tape's read head speeds up and slows down when the delay time changes.
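A rough sketch of what I mean by a physics model. The constants are just example values picked to be approximately critically damped, not anything from a shipping product:

```cpp
// Smooth the delay time with a damped spring: the read position carries
// velocity and acceleration, like a tape head with inertia.
struct SpringSmoother
{
    float position  = 0.0f;
    float velocity  = 0.0f;
    float stiffness = 0.0001f;   // pulls towards the target
    float damping   = 0.02f;     // bleeds off velocity (~2*sqrt(stiffness))

    float process (float target)
    {
        float accel = stiffness * (target - position) - damping * velocity;
        velocity += accel;       // semi-implicit Euler step, per sample
        position += velocity;
        return position;         // use as the (fractional) read position
    }
};
```

With damping below 2*sqrt(stiffness) the head overshoots and oscillates, which gives a very tape-like wobble when you yank the delay time around.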
Resampling the whole buffer becomes practically impossible once you get into per-sample delay-length modulation and any decently sized delay line (source: my first crappy attempt to make an LFO-modulated delay).
As for the OP's original question: don't roll your own. Just use dsp::DelayLine with its pushSample and popSample methods and smooth the delay time parameter; it does exactly what you want.
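If you do later need to go without JUCE, the same push/pop shape can be mimicked in plain C++. A sketch only, with linear interpolation rather than the Lagrange interpolation the JUCE class offers, and without the channel argument of the real API:

```cpp
#include <cstddef>
#include <vector>

// Minimal stand-alone delay line with the same push/pop shape as
// juce::dsp::DelayLine (sketch: one channel, linear interpolation).
class SimpleDelayLine
{
public:
    explicit SimpleDelayLine (int maxSamples)
        : buffer ((size_t) maxSamples, 0.0f) {}

    void pushSample (float sample)
    {
        buffer[(size_t) writePos] = sample;
        writePos = (writePos + 1) % (int) buffer.size();
    }

    // delayInSamples may be fractional; smooth it upstream per sample.
    float popSample (float delayInSamples) const
    {
        float readPos = (float) writePos - 1.0f - delayInSamples;
        int   size    = (int) buffer.size();
        while (readPos < 0.0f)
            readPos += (float) size;            // wrap into the buffer

        int   i0   = (int) readPos;
        float frac = readPos - (float) i0;
        int   i1   = (i0 + 1) % size;
        return buffer[(size_t) i0]
             + (buffer[(size_t) i1] - buffer[(size_t) i0]) * frac;
    }

private:
    std::vector<float> buffer;
    int writePos = 0;
};
```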
I know there are far easier ways of doing it, but I'm really interested in not only learning DSP but also creating devices using embedded systems. I'm mostly using JUCE as a way to test these algorithms before putting them onto MCUs.