LagrangeInterpolator with fractional ratio

I am doing sample rate conversion on an audio stream. The input to the stream is at sample rate s1 and the output of the stream should be at sample rate s2. The resampling ratio is s2/s1 (so the speed ratio passed to the LagrangeInterpolator is its inverse, s1/s2, i.e. input samples per output sample). I am receiving a fixed number of samples periodically on the input. If the ratio is fractional, this will produce a varying number of samples on the output. But the LagrangeInterpolator requires that I specify the number of output samples, instead of the number of input samples. Does this mean I have to be stateful and manage the “extra” sample that should occur every once in a while? It seems like the LagrangeInterpolator is already stateful and should manage this?
Example: s1 = 44100, s2 = 48000, and I am getting, say, 100 samples on the input periodically. Shouldn't the interface be “here are 100 samples and an oversized output buffer; let me know how many output samples you actually produced”?

Yes, it is. If your stream is interrupted, you should call reset() to clear the last four samples, which make up the interpolation state.

The JUCE pipeline is a pull model, so it wants to know how many output samples are needed. The process method returns the number of input samples it consumed. Because you usually don't know the current inter-sample position, you cannot exactly predict how many input samples will be needed.

That's the reason why there is an overload where you can limit the number of available input samples. That way you avoid over-reading your input array: the missing input is filled up with silence, or, if you supply the wrapAround parameter, the read position jumps that many samples back, which allows feeding from a circular buffer.
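A minimal sketch of what a call to that overload can look like (resampleChunk is just a made-up helper, and the buffer types are placeholders):

    #include <vector>
    #include <juce_audio_basics/juce_audio_basics.h>

    // Resample one fixed-size chunk; returns how many input samples were consumed.
    int resampleChunk (juce::LagrangeInterpolator& interpolator,
                       const std::vector<float>& input,
                       std::vector<float>& output,
                       int numOutputSamplesToProduce)
    {
        // speedRatio = number of input samples consumed per output sample (44100 -> 48000 here)
        const double speedRatio = 44100.0 / 48000.0;

        // The limiting overload never reads past numInputSamplesAvailable. With
        // wrapAround == 0 the missing input is padded with silence; a non-zero
        // value jumps that many samples back, for feeding from a circular buffer.
        return interpolator.process (speedRatio,
                                     input.data(),
                                     output.data(),
                                     numOutputSamplesToProduce,
                                     (int) input.size(),   // numInputSamplesAvailable
                                     0);                   // wrapAround
    }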

Yes, it is. If your stream is interrupted, you should call reset() to clear the last four samples, which make up the interpolation state.

OK, I understand the reset function.

Let's expand the example. I have a continuous stream of audio coming in 100-sample chunks. Since I have to specify the number of output samples, I calculate that I need 100 * (48000 / 44100) ≈ 108.84 samples in the output buffer. I can ask for 108, and my input buffer of 100 should cover that. So I move on, asking for 108 every iteration, and I get my output samples. But now my actual output sample rate is 44100 * (108 / 100) = 47628 Hz, so that won't work. Similarly, I can't use 109 every time. So I have to sometimes ask for 108 and sometimes ask for 109, and when I ask for 109 I need to come up with another input sample, because 100 won't cover it. This is the problem.
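To make that bookkeeping concrete, this is roughly the state I would have to keep myself (plain arithmetic, nothing JUCE-specific; OutputCounter is just a made-up name):

    // Carries the fractional remainder across blocks, so the requested output
    // counts average out to exactly numInput * 48000 / 44100 per block.
    struct OutputCounter
    {
        double outputsPerInput = 48000.0 / 44100.0;
        double accumulator = 0.0;   // fractional remainder carried between blocks

        int next (int numInputSamples)
        {
            accumulator += numInputSamples * outputsPerInput;   // +108.84... for a 100-sample block
            auto numOutput = (int) accumulator;                 // 108 on most blocks, 109 now and then
            accumulator -= numOutput;                           // keep only the fraction
            return numOutput;
        }
    };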

I am using SRC today, and it takes care of the above problem inside its state. I am exploring whether I can instead use the JUCE LagrangeInterpolator, so maybe the answer is simply “no” for streams with a fractional ratio.
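For reference, that push-style interface looks roughly like this (assuming SRC here means libsamplerate, with placeholder block sizes and error handling omitted): you hand it a chunk of input plus an oversized output buffer, and it reports back both how much input it used and how much output it produced, while the state created once with src_new() carries the fractional position across calls.

    #include <samplerate.h>   // libsamplerate ("SRC")

    // One block through an already-created converter. The SRC_STATE lives across
    // blocks and tracks the fractional position, which is what makes the
    // 108-vs-109 problem go away for the caller.
    void resampleOneBlock (SRC_STATE* state, float* input, float* output)
    {
        SRC_DATA data = {};
        data.data_in       = input;
        data.input_frames  = 100;                 // one incoming block at 44100 Hz
        data.data_out      = output;
        data.output_frames = 128;                 // capacity of the oversized output buffer
        data.src_ratio     = 48000.0 / 44100.0;   // note: output rate / input rate here
        data.end_of_input  = 0;

        src_process (state, &data);

        // data.output_frames_gen: output samples actually produced (108 or 109)
        // data.input_frames_used: input samples consumed; leftovers stay with the caller
    }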

You are right: with the design of JUCE's LagrangeInterpolator you cannot predict how many samples will be consumed. There is always the +/- 1 uncertainty.
What you can do is put the 100 samples into a FIFO and allow one extra sample to remain in it, so it should always have enough samples to succeed (roughly like the sketch at the end of this post).

It is just a design that works from a consumer perspective, not from a producer perspective, so it's unlucky for your use case.
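Something like this, for example (an untested sketch; StreamResampler and its members are made-up names, and error handling is omitted):

    #include <cmath>
    #include <vector>
    #include <juce_audio_basics/juce_audio_basics.h>

    struct StreamResampler
    {
        juce::LagrangeInterpolator interpolator;
        std::vector<float> fifo;                  // leftover + newly arrived input samples
        double speedRatio = 44100.0 / 48000.0;    // input samples consumed per output sample

        // Call once per incoming block; appends the block and pulls out what it safely can.
        void process (const float* input, int numInput, std::vector<float>& output)
        {
            fifo.insert (fifo.end(), input, input + numInput);

            // Request only as many outputs as the buffered input covers, keeping
            // one sample of slack for the +/- 1 uncertainty.
            auto numOutput = (int) std::floor ((double) fifo.size() / speedRatio) - 1;
            output.resize ((size_t) juce::jmax (0, numOutput));

            if (numOutput <= 0)
                return;

            auto numConsumed = interpolator.process (speedRatio,
                                                     fifo.data(),
                                                     output.data(),
                                                     numOutput);

            // Keep the unconsumed tail around for the next block.
            fifo.erase (fifo.begin(), fifo.begin() + numConsumed);
        }
    };

The output block size then varies from call to call, which is unavoidable with a fractional ratio.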

I remember playing with this a while ago. I found that converting a smooth sine wave from 80000 to 48000 Hz caused the odd glitchy spike every block, even though I was NOT resetting it. I gave up and used something else; sorry I can't give details.

same here, did my own implementation in the end…

I've heard someone else say that too. I went for 2x oversampling and then a half-band filter, which actually made sense for my synth, plus it turned out much faster.

@Daniel Found this use of LagrangeInterpolator for resampling.

Apparently it works if it is applied to the whole file rather than to buffered chunks. In the real-time case, are you suggesting using an AbstractFifo?

Many thanks in advance.

Hey hey, I just spent a whole day implementing a Lagrange interpolator in my SynthesiserVoice subclass, only to find out that it is not working with a stream of buffered chunks. XD T_T

Might I ask how you guys solved this problem? Is there a way to make it work? Or what is a possible solution / alternative to this?

(I am really looking for a solution that keeps working with a stream, not a complete buffer known beforehand.)

Thank you!

EDIT: (I made it work!!!)
But it can be CPU-intensive (although I don't have any issues for now).

The trick is to preprocess the input buffer by shifting all the values (using the linear interpolation algorithm) by the fractional portion of the input-sample index:

    auto& data = *playingSound->data;
    const float* const inL = data.getReadPointer (0);
    const float* const inR = data.getNumChannels() > 1 ? data.getReadPointer (1) : nullptr;
    const int numSourceSamples = data.getNumSamples();

    const int recalculationBufferSize = numSamples * 8; // headroom so the interpolator can read up to 8x numSamples (large upward pitch ratios)
    AudioBuffer<float> recalculatedIn (2, recalculationBufferSize);
    recalculatedIn.clear();
    float* recalculatedInW_L = recalculatedIn.getWritePointer (0, 0);
    float* recalculatedInW_R = recalculatedIn.getWritePointer (1, 0);
    const float* recalculatedInR_L = recalculatedIn.getReadPointer (0);
    const float* recalculatedInR_R = recalculatedIn.getReadPointer (1);

    // Shift the input by the fractional part of the read position, so the
    // interpolator can always start exactly on an integer sample index.
    const auto pos = (int) std::floor (sourceSamplePosition);
    const auto alpha = (float) (sourceSamplePosition - pos);
    const auto invAlpha = 1.0f - alpha;

    for (int idx = 0; idx < recalculatedIn.getNumSamples(); ++idx)
    {
        // Clamp the read indices so we never run past the end of the source sample.
        const int i0 = jmin (pos + idx,     numSourceSamples - 1);
        const int i1 = jmin (pos + idx + 1, numSourceSamples - 1);

        const float l = inL[i0] * invAlpha + inL[i1] * alpha;
        const float r = (inR != nullptr) ? (inR[i0] * invAlpha + inR[i1] * alpha) : l;
        *recalculatedInW_L++ = l;
        *recalculatedInW_R++ = r;
    }

    // Temporary buffer for the resampled block.
    AudioBuffer<float> tempOutput (2, numSamples);
    tempOutput.clear();
    float* tempOutputL = tempOutput.getWritePointer (0, 0);
    float* tempOutputR = tempOutput.getWritePointer (1, 0);

    // Resample the pre-shifted input with the Lagrange interpolators
    // (pitchRatio = source samples consumed per output sample).
    lagrangeResamplerL.process (pitchRatio, recalculatedInR_L, tempOutputL, numSamples);
    lagrangeResamplerR.process (pitchRatio, recalculatedInR_R, tempOutputR, numSamples);

    // Mix the resampled block into the voice's output.
    outputBuffer.addFrom (0, startSample, tempOutput, 0, 0, numSamples);

    if (outputBuffer.getNumChannels() > 1)
        outputBuffer.addFrom (1, startSample, tempOutput, 1, 0, numSamples);

    sourceSamplePosition += numSamples * pitchRatio;

Nice work. You are more dedicated than I was :). There was a separate post about the CatmullRomInterpolator, where a guy had created one and it was eventually added to JUCE. There was talk in that thread about the push model being preferable over the pull model, because push works for both files and streams. It sounded like that was going to result in a push model for CatmullRomInterpolator, but it doesn't appear that way. I'm mostly a JUCE fanboy, but this is one of the few aspects that I'm salty about. When they added CatmullRomInterpolator to JUCE, they again set it up with a pull model that presumably requires extra effort for streams, though you seem to have an elegant solution to that extra effort.

If you are processing a real-time stream and CPU is any concern… I ended up using this, designed for that specific situation:


All 3rd party solutions that I have come across use a push model and work fine for streams.

Thank you! I was almost going to give up, took a nap and when I woke up I saw the solution before my eyes. Amazing feeling. Hahahaha.

I'm trying to understand the pull-versus-push difference. Would push mean the method would work like: “this is the size of my input buffer and I give you the ratio, so now give me the resampled output buffer (with an unknown size)”…? If that's the case, doesn't the problem of the fraction persist? The output buffer size will be fractionally offset, so in the end you would have to do the same recalculation of offset values again, but this time on the output buffer… Or maybe I'm missing a point hehe.

My CPU concern comes from the fact that I'm doing at least double the array calculations in renderNextBlock (or more, depending on how many octaves up I support) compared to just numSamples, as in the original SamplerVoice. But then again, I have not really seen any CPU issues on my 2020 MacBook Pro… Thanks for the link, I will check it out.

Also: I'm a JUCE fanboy as well; that's why I wanted to solve this, so I can make use of all the different JUCE interpolators. I will make a dropdown in my plugin where they are all available as options :slight_smile: (a simple switch-case around the previous code).