Circular Buffer and LagrangeInterpolator


#1

I am implementing a circular buffer and resampling the signal to feed it back into the buffer.
I can define how many samples I want to produce, but how can I limit the number of incoming samples that are read, to avoid over-reading?
I could try to converge on the right count using the return value of LagrangeInterpolator::process(), but that seems neither stable nor performant.
Because of the interpolator's internal state, I also cannot simply use numOutputSamplesToProduce * speedRatio.
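
For illustration, this is roughly what I mean (my own sketch, not the actual JUCE internals): the interpolator keeps a fractional read position between calls, and that fraction shifts the integer number of input samples a call actually consumes, so the plain product can be off by one.

// hypothetical helper, assuming access to the fractional read position
// (JUCE keeps subSamplePos private, so this is only meant to illustrate the problem)
int estimateInputSamplesNeeded (double speedRatio, int numOutputSamples, double subSamplePos)
{
    // each output sample advances the read position by speedRatio,
    // but only whole input samples are consumed from the source buffer
    return (int) std::floor (subSamplePos + numOutputSamples * speedRatio);   // needs <cmath>
}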

Any ideas? Thanks!


#2

Bumping, anybody?

I think without access to the private subSamplePos, I cannot predict this accurately. So when only a limited number of input samples is left (because I am reaching the end of my circular buffer), I cannot compute how many output samples I can create.
What would be the solution?

(Answers like “This is now much easier with the new dsp module, see xy…” would be accepted and appreciated too; I just need a solution :) )


#3

I’m not sure I understand your question. Maybe your main issue is that you use the Lagrange interpolator without adding some latency to your process? That is mandatory when your output needs some future input samples to be defined, and every interpolator with an order higher than one introduces some latency for this reason.


#4

Thanks Ivan, I’ll try to explain it better.
What I want to achieve is to push the incoming samples into an AudioBuffer used as a delay line, and then read continuously from it while moving the playhead (like an EchoPlex). Therefore I need to resample on the read side.

// member variables
int writePos;                    // index where to append the next incoming block
int readPos;                     // index where to read the next block from
LagrangeInterpolator resampler;  // in the real code: one instance per channel

// in processBlock
const int neededInputSamples = static_cast<int> (output.getNumSamples() * speedRatio);
if (readPos + neededInputSamples <= delay.getNumSamples()) {
    // enough contiguous input before the end of the delay buffer
    int used = resampler.process (speedRatio,
                                  delay.getReadPointer (channel, readPos),
                                  output.getWritePointer (channel),
                                  output.getNumSamples());
    jassert (used == neededInputSamples);  // problem: << this will probably fail depending on subSamplePosition?
    readPos += used;
}
else {
    // the read wraps around: split the output into two resampled chunks
    const int firstHalfAvailable = delay.getNumSamples() - readPos;
    const int firstHalfProduced  = static_cast<int> (firstHalfAvailable / speedRatio);
    int used = resampler.process (speedRatio,
                                  delay.getReadPointer (channel, readPos),
                                  output.getWritePointer (channel),
                                  firstHalfProduced);
    jassert (used == firstHalfAvailable);  // problem: << this will probably fail depending on subSamplePosition?
    resampler.process (speedRatio,
                       delay.getReadPointer (channel),
                       output.getWritePointer (channel, firstHalfProduced),
                       output.getNumSamples() - firstHalfProduced);
    readPos = neededInputSamples - firstHalfAvailable;
}

This may or may not work: depending on the subSamplePosition, firstHalfProduced will be inaccurate.

How can I solve this?

Thanks

EDIT: added some more details


#5

Hi daniel,

From your example it is not at all clear what is supposed to do what. Could you share what types resampler, readPos, etc. are? Ideally, the whole function (probably processBlock).
I did something a few years back that resembles a chorus + delay line, so that could give you some pointers: https://bitbucket.org/cascassette/chorusdelayline/src/.


#6

Hey Cas,
thanks for the link.
I have no problem if I keep the playback speed of the delayed signal constant.
When the delay time is changed, I want the signal to catch up or slow down to adapt to the new delay time. I omitted the code where the speedRatio is computed. It looks like this:

// distance between write and read position (accounting for wrap-around), minus one block
const int    currentDelay = (writePosition > readPosition ? writePosition - readPosition
                                                          : writePosition + buffer.getNumSamples() - readPosition)
                            - outputBuffer.getNumSamples();
// resampling factor, clamped to [1 / maxResamplingFactor, maxResamplingFactor]
const double factor       = std::min (std::max (static_cast<double> (delayTime - currentDelay) / outputBuffer.getNumSamples(),
                                                1.0 / maxResamplingFactor),
                                      maxResamplingFactor);

resampler is a LagrangeInterpolator instance (actually an array of them, one per channel).

readPos and writePos are simply indices pointing to where the next block continues.

I think I’ll do a workaround for now: just copy a fixed number of samples past the end of the buffer, so the interpolator always sees a contiguous block, and use the returned number of used samples.
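
A minimal sketch of that workaround, assuming scratch is a pre-allocated AudioBuffer<float> that is large enough for the worst-case speedRatio (all names besides delay, output, resampler and readPos are hypothetical):

// assemble a contiguous block in a scratch buffer so the wrap-around disappears
const int maxNeeded = static_cast<int> (std::ceil (output.getNumSamples() * speedRatio)) + 4; // a little headroom
const int tail      = std::min (maxNeeded, delay.getNumSamples() - readPos);

scratch.copyFrom (0, 0, delay, channel, readPos, tail);               // part up to the buffer end
if (tail < maxNeeded)
    scratch.copyFrom (0, tail, delay, channel, 0, maxNeeded - tail);  // wrapped part from the buffer start

const int used = resampler.process (speedRatio,
                                    scratch.getReadPointer (0),
                                    output.getWritePointer (channel),
                                    output.getNumSamples());
readPos = (readPos + used) % delay.getNumSamples();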

How do you handle changes of the delay time? Resampling, or crossfade and jump?

Thanks,
Daniel


#7

I’m confused by what you mean by “resample” here. Are you looking to change the sample rate of something or not? Can’t you just smooth the delay time changes with a first-order averaging filter?


#8

Hmm, seems like I am bad at explaining, sorry for that…

The delay time multiplied by the sample rate defines how far the readPos has to fall behind the writePos.
Normally I would push as many samples into the delay buffer as I pull out, like a normal FIFO.
When the musician changes the delay time, that distance has to change as well. But it won’t happen if I keep pulling the same number of samples out of the buffer as were pushed in before.

Let’s say one block is 1024 samples, and the player reduces the desired delay time by 1024 samples (by turning the time knob). The algorithm will then pull 2048 samples and resample them to fill one block. The readPos advances 1024 samples more than the writePos, hence the delay is now 1024 samples shorter, just as desired.
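
Just restating that arithmetic in code (hypothetical variable names, numbers taken from the example above):

const int    blockSize      = 1024;                                 // output samples per block
const int    delayReduction = 1024;                                 // the player shortened the delay by this much
const int    inputToConsume = blockSize + delayReduction;           // 2048 input samples for this block
const double speedRatio     = (double) inputToConsume / blockSize;  // 2.0 -> read twice as fast for one block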

Does that make sense?

My main problem is that the LagrangeInterpolator only lets me say how many samples to produce. But in my case there are situations where I have a limited number of input samples (because the buffer wraps around). I can only see from the return value how many it used (and whether it needed more than were available, which means it read invalid memory).

Can you explain that to me? I’m reading up on averaging filters now…


#9

Is this a purely buffer-based processing issue, and you’re trying to get sample-accurate delay changes without changing the delay time every sample period?

What I think you’re saying is that you’re going to read from the buffer at the first read index and the second read index at the same time, and your issue is deciding which buffer to use, or how to interpolate between the two to write back?

Can you explain that to me? I’m reading up on averaging filters now…

A very rudimentary averaging filter:

avg = (1.0 / N) * (input - avg) + avg;

Where avg is a filter state variable. It’s a one-pole IIR lowpass filter that approximates an N-point FIR moving average, also called an “exponential” moving average. It’s a dirt-cheap IIR that can be used to smooth parameter changes, like switching between a delay of 1024 samples and 0. What I was confused about is whether you were doing sample-based or buffer-based processing here. If you were doing sample-based processing, you’re always pulling one sample out of the FIFO/delay line as you push one in; when the parameter change happens, you just update the read index. The averaging is used to smooth out big parameter changes to get rid of pops.
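
A minimal per-sample sketch of that idea applied to a delay line (my own illustration; smoothedDelay, smoothingN, targetDelaySamples, writeIndex, bufferLength and buffer are hypothetical names):

// smooth the target delay (in samples) with the one-pole averager,
// then read the delay line at the smoothed, fractional position
smoothedDelay += (1.0 / smoothingN) * (targetDelaySamples - smoothedDelay);

double readIndex = writeIndex - smoothedDelay;
if (readIndex < 0.0)
    readIndex += bufferLength;

const int   i0   = (int) readIndex;
const int   i1   = (i0 + 1) % bufferLength;
const float frac = (float) (readIndex - i0);
const float out  = buffer[i0] + frac * (buffer[i1] - buffer[i0]);   // simple linear interpolation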


#10

My issue really is: I store a delay line in a circular buffer (like when using AbstractFifo), and I want to read back with a variable speedFactor using an interpolator. A physical device doing that is e.g. an EchoPlex, with a fixed record head and a movable play head that can move along the running tape. So this is not really a case for a convolution filter, but, as I said, rather a resampler.

I created a PR to allow the LagrangeInterpolator to limit the number of input samples:

I have no idea how to achieve this without altering the original class. The parameter “available” should be mandatory, because otherwise the algorithm is allowed to read an arbitrary number of samples (which can easily happen if speedFactor is too high). But obviously it can’t be mandatory, for backward compatibility.

Either way, I think this is a useful addition for several use cases, so @jules, please let me know if this can be added or if there is an alternative way.

Thank You all!


#11

Bump…
Does it make sense to limit the number of samples the interpolator can read? I understand that checking each sample is a performance penalty, so if that limit is not needed, maybe it is worth having both versions available in parallel (with different arguments).
Another option would be adding a method like
int getNumOutputForInput (double speedRatio, const int numInput) const

But somehow it should be possible to predict how far the method will read from *in.
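
For illustration, such a method might look roughly like this (a hypothetical sketch only; it needs the private subSamplePos, and the exact off-by-one behaviour depends on the interpolator’s internals):

// hypothetical addition to LagrangeInterpolator: how many output samples could be
// produced from numInput available input samples at the given speedRatio
int getNumOutputForInput (double speedRatio, const int numInput) const
{
    // roughly: how many steps of size speedRatio fit into the remaining input,
    // taking the fractional position left over from the previous call into account
    return static_cast<int> ((numInput - subSamplePos) / speedRatio);
}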

Thanks for feedback!


#12

Sorry, I didn’t have time to answer you properly; it’s on my to-do list to try to build a delay using one of the JUCE interpolators, and I’ll tell you what I get. Intuitively, I would say that having to limit the number of incoming samples doesn’t seem that clean. To me it’s just a matter of handling the internal latency of the interpolator, but I’ll be able to tell you more once I’ve done my homework.

In Spaceship Delay, what I did is either :

  1. At a delay change, I run a two-tap delay for a few ms, with the gain decreasing at the last delay position and increasing at the new delay position
  2. At a delay change, I move the reading position from the last delay position to the new delay position at a given speed, by filtering the current delay position value as if it were an LFO (first-order lowpass, where the speed is the inverse of the filter cutoff frequency)

And of course, I use fractional-delay interpolators, most of the time linear or Lagrange (a custom class), just to handle the fact that the exact delay value in samples has no reason to be an integer.

So basically, what I did there is almost the same thing as you, since I get the “tape/chorus” effect with (2) and an interpolator. But the main difference is that in my approach I do not process every sample in the circular buffer between the last position and the new position exactly once: sometimes samples are processed more than once, and sometimes some are not processed at all. With the resampling approach, I guess every sample would be processed exactly once.
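
Very roughly, (1) looks like this per sample (simplified, hypothetical names; fadePos ramps from 0 to 1 over a few ms after each delay change):

// two taps: the old delay position fades out while the new one fades in
const float oldTap = delayBuffer[(writePos - oldDelaySamples + bufferLength) % bufferLength];
const float newTap = delayBuffer[(writePos - newDelaySamples + bufferLength) % bufferLength];

fadePos = std::min (1.0f, fadePos + fadeIncrement);                 // advances once per sample
const float out = (1.0f - fadePos) * oldTap + fadePos * newTap;     // linear crossfade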


#13

Hi @jules and @IvanC,

I just updated the PR on GitHub to match the latest develop. It now contains a fast LagrangeInterpolator::processUnchecked() and a bounds-checking process() that can optionally wrap around a buffer length…
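
For anyone reading along, this is how I would use the new bounds-checking overload with my circular buffer (a sketch assuming the signature from the PR, where the two extra arguments are the number of valid input samples from the read pointer and the length of the region to wrap within):

int used = resampler.process (speedRatio,
                              delay.getReadPointer (channel, readPos),
                              output.getWritePointer (channel),
                              output.getNumSamples(),
                              delay.getNumSamples() - readPos,   // input samples available before the buffer end
                              delay.getNumSamples());            // wrap length: the whole circular buffer
readPos = (readPos + used) % delay.getNumSamples();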

Cheers!


#14

Thanks, we’ll take a look!


#15

Hey @jules, thanks for looking into it. Just pulled and recompiled, works like a charm.

(commit)