Hmm, seems like I am bad at explaining, sorry for that…
The delay time multiplied by the sampleRate defines how far the readPos has to fall behind the writePos.
Normally I would push as many samples into the delay buffer as I pull out, a normal FIFO.
When the musician changes the delay time, that distance has to change as well. But it won't happen if I keep pulling the same number of samples out of the buffer as were pushed in before.
Let's say one block is 1024 samples, and the player reduces the desired delay time by 1024 samples (by turning the time knob). The algorithm will then pull 2048 samples and resample them to fill one block. The readPos advances 1024 samples more than the writePos, so the delay is now 1024 samples shorter, just as desired.
Does that make sense?
My main problem is that the LagrangeInterpolator only lets me specify how many output samples to produce. But in my case there are situations where I have only a limited number of input samples available (because the buffer wraps around). The return value only tells me afterwards how many input samples it consumed, and if it needed more than were available, it has already read invalid memory.
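One workaround I can imagine (a sketch, not a tested solution) is to invert the question: instead of asking the interpolator for N output samples and hoping it doesn't overrun, compute beforehand the largest output count that is safe for the contiguous input remaining before the wrap point. The helper below is hypothetical; the kernel headroom of 4 samples assumes a cubic Lagrange interpolator that keeps a few history samples.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical helper: `available` is the number of contiguous input
// samples before the buffer wraps, `ratio` is input samples consumed
// per output sample (2.0 in the 2048-in / 1024-out example above), and
// `kernelSize` is headroom for the interpolation kernel. Returns the
// largest number of output samples that can be requested without the
// interpolator reading past the available input.
int maxSafeOutputSamples (int available, double ratio, int kernelSize = 4)
{
    const int usable = available - kernelSize;   // keep kernel headroom
    if (usable <= 0)
        return 0;
    return std::max (0, (int) std::floor (usable / ratio));
}
```

With this, the block can be produced in two (or more) calls: one up to the wrap point, then another starting at the beginning of the buffer, each sized so the interpolator never touches invalid memory.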
Can you explain that to me? I'm reading up on averaging filters now…