Fractional delay line

Hi @daniel

I am trying to work out the best way to introduce a predelay on a reverb plugin I am developing.

I see you have a buffer module which, if I'm not mistaken, is an implementation of a fractional delay here. Would you mind sharing an example of how to use it?

Please correct me if I am wrong, but it seems there would be a limitation on how long the delay could be, based on the number of samples passed into processBlock?

Thanks for sharing the open-source code.

Yes, it can be used as such, although the aim was to have a delay line that can adapt the delay time while keeping the signal continuous (by speeding up or slowing down). One use case was a 5.1 microphone simulation with moving sound sources; the other was an EchoPlex-style tape delay.

The maximum delay time is determined by calling setSize(), which sets the size of the circular buffer and the number of resamplers (one per channel).

Since all samples are handled, there is only a pushBlock and a pullBlock. The addToPushedBlock is a bit kludgy; it was meant to add the feedback back to the input at the end of processing.

When pulling a block, you give it a delay time (and here is a shortcoming in terms of fractional delay: it is currently a time in whole int samples, though it wouldn't need to be), and the signal will try to catch up or slow down within the limits given via setMaxResamplingFactor().
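Roughly, the pieces fit together like this. This is only a sketch of the intended usage from memory, so check the actual signatures in the repository; the instance name, the feedback buffer, and the 2-second maximum are made up for the example:

// Sketch only -- method names as described above, exact signatures may differ.

// once, e.g. in prepareToPlay(): the buffer size sets the maximum delay
delay.setSize (numChannels, (int) (2.0 * sampleRate));  // up to 2 seconds
delay.setMaxResamplingFactor (2.0);                     // limit catch-up/slow-down speed

// per block, e.g. in processBlock():
delay.pushBlock (buffer);                   // hand over all incoming samples
delay.pullBlock (buffer, delayInSamples);   // read back; glides towards the
                                            // new delay time by resampling
delay.addToPushedBlock (feedbackBuffer);    // optional feedback into the input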

Hope that works for your use case.

EDIT: I slightly regret having brought this up in this thread, because what the others are talking about is compensating for signal paths with half-sample latencies. The code I wrote is really aimed at artistic plugins and will work well for that, but if you are trying to optimise a DSP graph of parallel signals for artefacts at the noise floor, stick to the advice of the other users :slight_smile:

Hi Daniel,

I am trying to build a chorus based on your TapeDelay and the AudioProgrammer tutorial. How do I go about implementing interpolation using LagrangeInterpolator with block processing instead of sample-by-sample? From my understanding, the interpolation needs to happen while reading from the delay buffer, so the "new" read positions are determined before copying data from the main buffer. Is this the correct way of thinking?
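In block terms, I picture something like this (just a sketch: delayBuffer, readPos and wetSamples are placeholder names of mine, and the circular wrap-around is left out):

// Just a sketch -- placeholder names, wrap-around of the circular buffer omitted.
juce::LagrangeInterpolator interp;   // keep one instance per channel between blocks

// If the delay time moves from oldDelay to newDelay (in samples) over this
// block, the read head advances slower or faster than the write head:
const double speedRatio = 1.0 - (newDelay - oldDelay) / (double) numSamples;

// Produce numSamples output samples, consuming roughly speedRatio * numSamples
// input samples from the delay buffer at the current read position:
const int consumed = interp.process (speedRatio,
                                     delayBuffer.getReadPointer (channel) + readPos,
                                     wetSamples,
                                     numSamples);
readPos += consumed;                 // then wrap readPos into the buffer length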

Hey all! I wrote a RingBuffer class as part of my audio plugin Repitch which I hope some here will find useful. It uses high-quality sinc interpolation to get samples at fractional delays. Just be sure to include the juce::dsp module :+1:

struct RingBuffer : AudioSampleBuffer
{
  using AudioSampleBuffer::AudioSampleBuffer;

  // Write one sample at the current write index and advance it, wrapping
  // around the end of the buffer.
  void pushSample(int channel, float sample)
  {
    setSample(channel, writeIndex, sample);
    writeIndex++;
    writeIndex %= getNumSamples();
  }

  // Read the sample `delay` (possibly fractional) samples behind the write
  // index, using a 16-tap windowed sinc interpolation.
  float getSampleAtDelay(int channel, float delay)
  {
    int numSamples = getNumSamples();

    // fractional read position, wrapped into [0, numSamples)
    float t = std::fmod(writeIndex - delay, (float) numSamples);
    if (t<0) t += numSamples;

    const float* audio = getReadPointer(channel);

    // sinc interpolation
    // auto-vectorizes with -O3 and -ffast-math

    int t_0 = int(t);       // integer part of the read position
    float dt_0 = t - t_0;   // fractional part
    if (dt_0==0.f) return audio[t_0];  // exactly on a sample, nothing to interpolate
    float sum = 0.f;

    const int n = 16;       // number of sinc taps

    // The tap window [t_0-n/2, t_0+n/2) may straddle the buffer's wrap
    // point, so iterate over each "copy" of the buffer it touches.
    int i0 = t_0-n+n/2;
    if (i0<0) i0 = i0/numSamples-1;
    else i0 /= numSamples;

    int im = (t_0+n/2)/numSamples;

    for (int i=i0; i<=im; ++i)
    {
      int offset = t_0-i*numSamples;

      // clamp the tap range [jmin, jmax] to this copy of the buffer
      int jmax = offset+n/2;
      int jmin = jmax-n;

      if (jmin<0) jmin = 0;
      jmin -= offset;

      if (jmax >= numSamples-1) jmax = numSamples-1;
      jmax -= offset;

      // sin(pi*(dt_0-j)) == (-1)^j * sin(pi*dt_0), so only the alternating
      // sign stays inside the loop; the sin(pi*dt_0)/pi factor is applied
      // once at the end via the lookup table.
      for (int j=jmin; j<=jmax; ++j)
        sum += float(2*(j&1)-1) * audio[offset+j] / (j-dt_0);
    }

    return sum * sinpi[dt_0];
  }

private:
  int writeIndex = 0;

  // lookup table for sin(pi*x)/pi on [0, 1]
  dsp::LookupTableTransform<float> sinpi {[] (float x) { return std::sin (MathConstants<float>::pi * x) / MathConstants<float>::pi; }, 0, 1, 512};
};
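In processBlock() it could be used along these lines (a sketch, not taken from Repitch verbatim):

// Sketch: one RingBuffer per channel, since writeIndex advances on every
// pushSample() call. Assumes `ring` was sized beforehand, e.g. to one
// second via ring.setSize (1, (int) sampleRate), and that delayInSamples
// stays below the buffer length.
auto* samples = buffer.getWritePointer (channel);

for (int i = 0; i < buffer.getNumSamples(); ++i)
{
    ring.pushSample (0, samples[i]);
    samples[i] = ring.getSampleAtDelay (0, delayInSamples);
}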

A fractional delay line is available as part of JUCE 6.
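That is juce::dsp::DelayLine, templated on the interpolation type. A minimal usage sketch:

// juce::dsp::DelayLine, new in JUCE 6; maximum delay is passed to the constructor
juce::dsp::DelayLine<float, juce::dsp::DelayLineInterpolationTypes::Lagrange3rd> delayLine { 48000 };

// in prepareToPlay():
juce::dsp::ProcessSpec spec { sampleRate, (juce::uint32) samplesPerBlock,
                              (juce::uint32) getTotalNumOutputChannels() };
delayLine.prepare (spec);
delayLine.setDelay (441.5f);            // fractional delay in samples

// per sample, per channel:
delayLine.pushSample (channel, inputSample);
float delayedSample = delayLine.popSample (channel);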
