Delay that repitches the sound when the delay time is changed

Hi, this is my first question on this forum; I’ve been developing plugins with JUCE for a few months.
I’m currently developing a plugin that tries to emulate the behaviour of the vintage Korg SDD-1000 / Boss DE-200 delays. The peculiarity of these units is that they change the pitch of the repeats when the delay time is changed. To be clear, Ableton’s Delay in Repitch mode does the same thing.
So far I have written a delay that:

  • stores the incoming signal in a buffer as large as the maximum delay time
  • uses a write head to write into that buffer, and a read head that produces the output and is also fed back into the buffer (written at the write position)

The delay time is therefore given by the distance (in samples) between the read and write heads.
The problem is that I only get a clean repitch effect if the delay time is changed very slowly or with a very long smoothing time (which makes the control sluggish). Furthermore, while the delay time is changing, not all of the incoming samples get written into the buffer, so they are “lost” forever; when I go back to the original delay time the signal is no longer the same (on the units mentioned above, returning to the original delay time gives exactly the same signal).
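Roughly, the structure is this (a simplified mono sketch with plain linear interpolation as a placeholder; the class and variable names are just illustrative, not my exact code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Circular-buffer delay with separate write and read heads. The delay time is
// the distance in samples between the two heads; feedback is written back at
// the write position together with the input.
class RepitchDelaySketch
{
public:
    void prepare (double sampleRate, double maxDelaySeconds)
    {
        buffer.assign ((std::size_t) std::ceil (sampleRate * maxDelaySeconds) + 1, 0.0f);
        writePos = 0;
    }

    void setDelaySamples (float d)   { delaySamples = d; }
    void setFeedback     (float f)   { feedback = f; }

    float processSample (float input)
    {
        const auto size = buffer.size();

        // read head trails the write head by delaySamples (possibly fractional)
        float readPos = (float) writePos - delaySamples;
        while (readPos < 0.0f)
            readPos += (float) size;

        const auto i0    = (std::size_t) readPos;
        const auto i1    = (i0 + 1) % size;
        const float frac = readPos - (float) i0;
        const float out  = buffer[i0] + frac * (buffer[i1] - buffer[i0]);

        // write input plus feedback at the write position, then advance it
        buffer[writePos] = input + feedback * out;
        writePos = (writePos + 1) % size;

        return out;
    }

private:
    std::vector<float> buffer;
    std::size_t writePos = 0;
    float delaySamples = 1.0f, feedback = 0.0f;
};
```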
    I hope I was clear; if you need more information, just ask.
    Thanks in advance!

For example:
If we use a 250 ms delay, the playback head can pitch up by a factor of two (one octave), making the echo length 125 ms, with no data loss.
If we do the reverse and pitch down an octave, the echo length maxes out at 250 ms, using only half of the input samples.
Is that the problem?

Thank you for your reply!
Maybe I explained the “data loss” problem badly.
The problem happens when I change the delay time quickly (not even that quickly), or when the smoothing time is low: I think the pitch sweep also gets written into the feedback path, so the original data is lost, and this happens every time I change the delay time.
To avoid this I have to change the delay time VERY slowly or use a HIGH smoothing time (like 2 seconds), which is unacceptable.
The problem is less audible when I go from a higher to a lower delay time (e.g. from 256 to 128 ms), because the read head simply reads fewer samples (but also writes fewer samples into the feedback). When I go from a lower to a higher delay time (e.g. from 128 to 256 ms), the read head can’t read the “original” samples (the allpass interpolation doesn’t recreate all of the original samples).
Just to give an audio example:
What the delay should do: Korg Sdd 1000 Demo - YouTube (from 11:00)

In case you want to google for more resources: the structure you describe, where you read and write to a buffer and the read head constantly follows the write head, is called a ring buffer, or circular buffer.

Apart from that I can’t help you with your problem; what you describe sounds like it should work perfectly. Just update the delay time per sample rather than per block, with good interpolation (like a cubic Hermite spline, or oversampled with lerp), and it should sound smooth. Experiment with different ways to transition from one delay time to another: for example simple lerping, or a lowpass, or a steeper lowpass, or a fraction of a sine wave, etc.
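For example, a 4-point cubic Hermite (Catmull-Rom) read of the delay buffer could look roughly like this (just a sketch, the function name and details are made up, not from any particular library):

```cpp
#include <cstddef>
#include <vector>

// 4-point, 3rd-order Hermite (Catmull-Rom) read of a circular buffer at a
// fractional position. Call this once per sample with the (smoothed) read
// position, updated per sample rather than per block.
static float readHermite (const std::vector<float>& buf, float readPos)
{
    const auto  n  = buf.size();
    const auto  i1 = (std::size_t) readPos;       // integer part
    const float t  = readPos - (float) i1;        // fractional part in [0, 1)
    const auto  i0 = (i1 + n - 1) % n;
    const auto  i2 = (i1 + 1) % n;
    const auto  i3 = (i1 + 2) % n;

    const float y0 = buf[i0], y1 = buf[i1], y2 = buf[i2], y3 = buf[i3];
    const float c1 = 0.5f * (y2 - y0);
    const float c2 = y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    const float c3 = 0.5f * (y3 - y0) + 1.5f * (y1 - y2);
    return ((c3 * t + c2) * t + c1) * t + y1;
}
```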

Thank you so much! I will try to use different interpolation types and get back here.
Sorry for the basic question, but what do you mean by “oversampled with lerp”?

Oversampling means you upsample before the processing and downsample after it, in order to let the process run at a higher sample rate. In a vibrato (a delay with a constantly moving delay time) this reduces the potential for sidelobes in the spectrum; in practice it just sounds less grainy. Lerp means linear interpolation, and I meant lerp on the read head. Typically lerping the read head sounds bad, but I found that when oversampling a delay 4x, lerp and splines sound pretty much the same.

Oh great! Do you suggest using one of the JUCE classes for oversampling, or should I implement it myself?

JUCE’s oversampler works fine.
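For reference, wiring it up looks roughly like this (a sketch assuming a stereo processor, 4x oversampling and that the juce_dsp module is available; member names are illustrative and it is not a complete processor):

```cpp
// Members/methods inside your AudioProcessor subclass (sketch only).
// 2 channels, factor 2^2 = 4x, FIR equiripple half-band filters.
juce::dsp::Oversampling<float> oversampling { 2, 2,
    juce::dsp::Oversampling<float>::filterHalfBandFIREquiripple };

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
    juce::ignoreUnused (sampleRate);
    oversampling.initProcessing ((size_t) samplesPerBlock);
    setLatencySamples (juce::roundToInt (oversampling.getLatencyInSamples()));
    // the delay now runs at 4x the host rate, so delay times in samples
    // must be scaled by the same factor
}

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    juce::dsp::AudioBlock<float> block (buffer);
    auto upsampled = oversampling.processSamplesUp (block);   // runs at 4x

    // ... run the per-sample delay on 'upsampled' here ...

    oversampling.processSamplesDown (block);                  // back to host rate
}
```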

Hi! Your delay is effectively resampling the signal continuously at a variable rate. Changing the delay time means the sampling rate momentarily changes while you move the read head either closer towards the write head (downsampling) or away from it (upsampling). It’s the same as playing back a stream of samples at a different rate. To make this sound good, you can probably get good enough results with interpolation, like Catmull-Rom, if you stay within 0.5x and 2x. It gets more challenging beyond that interval, where you end up repeating or skipping samples.
There are different approaches to improve this:
Like suggested, you can simply oversample the whole process. This is expensive in terms of CPU load, and how much improvement you get depends on the oversampling factor. It is fairly easy to implement, though. I’d still suggest using a better interpolator than linear interpolation in that case too.
For better results that are closer to a real tape delay, you could try to implement a variable rate resampler that uses polyphase FIR filters (one for filtering, one for reconstruction). I don’t think JUCE has one out of the box, but you can find the building blocks.
In any case, there will always be a tradeoff between quality and the maximum “drift” speed of your read head.
From my personal experience, if your only goal is to avoid it sounding “scratchy” when the delay time is adjusted too fast, low-pass filtering the read head position (simple exponential smoothing) plus Catmull-Rom interpolation for reconstruction is sufficient for most material.
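A rough sketch of what I mean by exponentially smoothing the delay time / read head position (the coefficient and names are only illustrative):

```cpp
#include <cmath>

// One-pole (exponential) smoothing of the delay time, updated once per sample.
// A small coefficient gives a slow, tape-like drift of the read head; a large
// one reacts faster but can start to sound scratchy.
struct DelayTimeSmoother
{
    void prepare (double sampleRate, double timeConstantSeconds)
    {
        coeff = 1.0f - (float) std::exp (-1.0 / (timeConstantSeconds * sampleRate));
    }

    float process (float targetDelaySamples)
    {
        current += coeff * (targetDelaySamples - current);
        return current;   // use this to position the read head, then reconstruct
    }                     // the sample with Catmull-Rom interpolation

    float coeff = 0.0f, current = 0.0f;
};
```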

Thank you for your reply!
OK, I will try oversampling or different interpolation types and get back.
Anyway, just to let you know, I’m using ValueSmoothingTypes::Linear for the delay time parameter, and allpass interpolation for the fractional delay.
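Just for context, the allpass interpolation part looks roughly like this (a simplified sketch of the usual first-order allpass fractional delay, not my exact code):

```cpp
// First-order allpass interpolator for the fractional part of the delay.
// 'x' is the buffer sample at the integer read position, 'frac' is the
// fractional delay in samples. One instance of this state per delay line.
// (In practice frac is usually kept away from 0 so the coefficient doesn't
// approach 1.)
struct AllpassInterpolator
{
    float process (float x, float frac)
    {
        const float a = (1.0f - frac) / (1.0f + frac);
        const float y = a * (x - lastOut) + lastIn;   // y[n] = a*x[n] + x[n-1] - a*y[n-1]
        lastIn  = x;
        lastOut = y;
        return y;
    }

    float lastIn = 0.0f, lastOut = 0.0f;
};
```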
Here are sound examples of what it should do and what it actually does:
Sound Examples.zip (4.8 MB)