Modulating Delay Time Upwards

I have a delay module, but modulating the delay time upwards sounds kind of harsh.
Modulating the time down gives the characteristic increasing-pitch kind of sound.

The delay is a fractional delay line, where I read from it after [delay_time_in_samples] samples.
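For reference, such a fractional read could look roughly like this (a minimal sketch with made-up names, linearly interpolating between the two neighbouring samples):

```cpp
#include <vector>

// Minimal fractional delay read: linearly interpolate between the two
// samples that surround the non-integer read position.
float readFractional(const std::vector<float>& buffer, int writeIndex, float delayInSamples)
{
    const int size = (int) buffer.size();

    float readPos = (float) writeIndex - delayInSamples;
    while (readPos < 0.0f)
        readPos += (float) size;              // wrap around the ring buffer

    const int   i0   = (int) readPos;
    const int   i1   = (i0 + 1) % size;
    const float frac = readPos - (float) i0;

    return buffer[i0] + frac * (buffer[i1] - buffer[i0]);   // lerp
}
```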

Ok, this isn’t very surprising to me. When modulating up quickly, the read position moves backwards through the buffer faster than the write head advances, thereby “travelling back in time through the sound”, which results in absolute garbage.

Now I have a few ideas, none of them satisfying:

  1. Smooth the delay time
  2. Limit the increase of the delay time to 1 sample per sample (this shifts the sound by an exact octave, which feels super weird); a rough sketch of ideas 1 and 2 follows the list
  3. I thought about counting the samples I jumped over and averaging them (I haven’t implemented this yet).
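For illustration, ideas 1 and 2 boiled down to a minimal sketch (names made up, constants arbitrary):

```cpp
// One-pole smoothing of the delay time (idea 1) plus a per-sample slew
// limit (idea 2). 'smoothing' close to 1 means slower parameter changes;
// maxStepPerSample = 1.0f is the "exact octave" limit mentioned above.
struct DelayTimeFollower
{
    float current = 0.0f;   // smoothed/limited delay time in samples

    float next(float target, float smoothing = 0.999f, float maxStepPerSample = 1.0f)
    {
        const float smoothed = smoothing * current + (1.0f - smoothing) * target;  // idea 1

        float step = smoothed - current;
        if (step >  maxStepPerSample) step =  maxStepPerSample;                    // idea 2
        if (step < -maxStepPerSample) step = -maxStepPerSample;

        current += step;
        return current;
    }
};
```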

Is there a common approach to this problem? I tried to modulate the delay time on a commercial synth, and the result was still a bit shaky…

Thanks for listening to my TedTalk

You could implement a bucket-brigade. This is how many “analogue” delays work, but it’s kind of half-analog-half-digital and leads to the typical pitch effects when changing the delay time.

That’s a term I hear very often in combination with delays. What is it, exactly?
If possible, I would like to stick with the “pure digital” delay, so as not to completely rewrite the module (it has gotten quite big by now).

The bucket brigade is more or less a primitive sampler. Each bucket is one sample of memory; typical analog devices have ~2000 buckets. These can be written/read at different speeds/sampling rates. When the delay time increases, the “sampling rate” decreases and the pitch drops, and vice versa.

If you already have a digital delay then maybe you could implement this by writing/reading the data at different sample rates. Instead of changing the number of samples inside the buffer, you change the in/out sample-rates.

When doing it digitally it’s like a digital buffer of fixed size which is read/written at different rates. The technique does require a low pass filter to avoid heavy aliasing.

Whether it’s a good technique depends on the range of delay-time change you are expecting. The smaller the range, the better it works. It does work great for modulation-type effects.
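A very rough digital sketch of that idea, assuming a fixed number of buckets whose clock rate follows the delay time (all names hypothetical; the anti-aliasing/reconstruction filtering is left out):

```cpp
#include <vector>

// Fixed-size "bucket" buffer; the internal clock rate follows the delay time
// (clockRate = numBuckets / delayTimeSeconds), so pitch drops when the delay
// time grows and rises when it shrinks. No low-pass filtering here, so expect
// aliasing and zero-order-hold artefacts in this form.
struct BucketBrigadeSketch
{
    std::vector<float> buckets;
    size_t pos = 0;        // current bucket
    double phase = 0.0;    // progress towards the next bucket clock tick
    float  lastOut = 0.0f;

    explicit BucketBrigadeSketch(size_t numBuckets) : buckets(numBuckets, 0.0f) {}

    float process(float in, double delayTimeSeconds, double hostSampleRate)
    {
        const double clockRate = (double) buckets.size() / delayTimeSeconds;
        phase += clockRate / hostSampleRate;

        while (phase >= 1.0)               // one or more bucket ticks this host sample
        {
            phase -= 1.0;
            lastOut      = buckets[pos];   // oldest bucket falls out...
            buckets[pos] = in;             // ...and the current input goes in
            pos = (pos + 1) % buckets.size();
        }
        return lastOut;                    // hold between ticks (hence the LPF advice)
    }
};
```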


Looking at it from the other side, in terms of what the desired effect would be:

  • keeping the frequency constant:
    pushing the samples into a circular buffer and allowing the read head to jump. In this case you would have a certain number of samples crossfading when the delay time (i.e. the distance writeHead - readHead) changes

  • playing every sample (fractional delay):
    Squeezing in more samples when reading, to catch up with / fall behind the desired read position. That involves resampling and gives an effect similar to the tape delays from the 60s (e.g. the Echoplex).
    In my implementation I also set a maximum factor for the resampling, which defines how fast the read head reaches its target position (rough sketch of this below).

(which is the same thing @pflugshaupt mentioned, just viewed from a different perspective)
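A rough sketch of that second option, assuming the read head chases a target delay with a capped resampling factor (names and constants are placeholders):

```cpp
#include <algorithm>

// Tape-style behaviour: instead of jumping, the read head changes its own
// playback speed (i.e. resamples) and drifts towards the desired delay time.
// 'maxFactor' caps how far the speed may deviate from 1.0, which also caps
// the pitch shift heard while the delay time is moving.
struct ChasingReadHead
{
    double currentDelay = 0.0;   // actual delay in samples (writeHead - readHead)

    // Returns the read-head speed for this sample and updates the delay.
    double step(double targetDelay, double maxFactor = 2.0)
    {
        // speed > 1 shortens the delay, speed < 1 lengthens it;
        // 0.001 is an arbitrary chase coefficient
        double speed = 1.0 + (currentDelay - targetDelay) * 0.001;
        speed = std::clamp(speed, 1.0 / maxFactor, maxFactor);

        currentDelay += 1.0 - speed;   // write head moves 1, read head moves 'speed'
        return speed;
    }
};
```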


I think what you describe is the simulation of a tape delay which has some (mechanical) speed limits. The bucket brigade has no speed limit.

The problem with the bucket brigade is that it either lacks high-frequency content or wastes CPU when the input and output sampling rates differ a lot.

Ok, when you say “a certain number of samples crossfading”, you mean I go over the skipped samples and somehow smooth things out? (like I mentioned with averaging in the original post)
I tried to implement this earlier but soon realised it’s more complicated than I thought, because I’m using a ring buffer, so the previous sample position can be on the other side of the buffer. Furthermore, when modulating in the other direction, the reverse situation can happen, so there are several cases to handle. Definitely doable though.
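(The wrap-around bookkeeping usually reduces to doing the index arithmetic modulo the buffer length, e.g. something like:)

```cpp
// Distance from the previous read index to the new one, walking forwards
// through a ring buffer of length 'size'; also correct when the new index
// has wrapped past the end of the buffer.
int forwardDistance(int previousIndex, int newIndex, int size)
{
    return ((newIndex - previousIndex) % size + size) % size;
}
```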

As for the bucket-brigade / resampling approach @pflugshaupt, I’m not too keen on doing this, since the effect is basically done as it is, the only remaining issue being the topic at hand. And resampling might be a topic for another time for me. I appreciate the input and explanations though! Thanks :slightly_smiling_face:

The idea is not to change the read pointer whenever the changes arrive, but rather, in processBlock, to first read from the old read position and fade out, then set the new read position, go back to the start of the block and add the new signal while fading in. That way the old signal can finish smoothly and the new one starts smoothly, with no clicks.

It involves a ring buffer, whose length defines the maximum delay.
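In (hypothetical) code, the crossfade part of the block could look something like this, where readFromRing() stands in for whatever fractional ring-buffer read the delay already provides:

```cpp
#include <functional>

// Read the block twice: once from the old delay while fading out, once from
// the new delay while fading in, so the position change produces no click.
void crossfadeReadPositions(float* out, int numSamples,
                            float oldDelaySamples, float newDelaySamples,
                            const std::function<float(float delaySamples, int offsetInBlock)>& readFromRing)
{
    for (int i = 0; i < numSamples; ++i)
    {
        const float fadeIn  = (float) i / (float) numSamples;
        const float fadeOut = 1.0f - fadeIn;

        out[i] = fadeOut * readFromRing(oldDelaySamples, i)
               + fadeIn  * readFromRing(newDelaySamples, i);
    }
}
```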

Ok, this would make sense for a single change, like turning a knob. But what about continuous modulation with an LFO? I can’t be constantly fading in and out, or I would end up with a SuperChoppyDelayLine™?

Oh ok, for a modulated delay time it sounds like you want the resampling approach, and you probably want to adapt the resampling ratio not just once per block but rather every sample, or at least every 16 samples.

It should give you a vibrato-like effect.

I would start simple and lerp the samples. You could also use the LagrangeInterpolator (I used it once to simulate time delay in a geometric scene with a Doppler effect, and it worked pretty well).

This will lead to the bucket-brigade type of processing. Lerping is fine as long as the ratio between the sampling rates is no more than 2x; it acts like a simple low-pass filter. To get nice results things should be adjusted every sample, otherwise the modulation won’t be smooth.
