LagrangeInterpolator with a changing resample ratio

Hello all. Perhaps one of you can help guide me in the right direction here.

I’m attempting to integrate interpolation into a sample-by-sample rendering of a source file (WAV), but it seems the interpolator is meant to be used block by block. Rendering a whole block is problematic for me because I handle real-time pitch changes and may need to update the ratio one or more times within any given block.

My current implementation produces corrupted samples with very loud values.

void SampleLayer_v3::render(AudioBuffer<float>& hostBuffer, int blockPosition, juce::int64 playbackPosition, float gainAdjustment)
{
    auto resamplingRatio = (source->getSampleRate() / resampleRate) * pitchRatio.getNextValue();
    auto gainFactor = cc07Gain.getNextValue() * cc11Gain.getNextValue() * gainAdjustment;
    if (gainFactor <= 0)
    {
        return;
    }
    auto sourcePosition = startOffset + playbackPosition;
    auto samplesToInterpolate = static_cast<int>(std::trunc(resamplingRatio + 1.0));
    if (sourcePosition + samplesToInterpolate > source->getLength())
    {
        return;
    }
    AudioBuffer<float> inputBuffer(source->getNumChannels(), samplesToInterpolate);
    AudioBuffer<float> outputBuffer(hostBuffer.getNumChannels(), 1);
    source->read(&inputBuffer, 0, samplesToInterpolate, sourcePosition);
    for (int chIdx = 0; chIdx < std::min(NumChannels, hostBuffer.getNumChannels()); chIdx++)
    {
        auto input = inputBuffer.getReadPointer(chIdx);
        auto output = outputBuffer.getWritePointer(chIdx);
        auto& resampler = (chIdx == 0) ? resampler_L : resampler_R;
        resampler.processAdding(resamplingRatio, input, output, 1, gainFactor);
        hostBuffer.addSample(chIdx, blockPosition, outputBuffer.getSample(chIdx, 0));
    }
}

Any obvious flaws? Suggestions for how to do this by the sample instead of by the block? Or must I limit myself to one pitch update per block and cringe at the stair-stepping artifacts I presume I’ll encounter…?

Thanks in advance for any and all assistance.

Lagrange interpolation needs 4 values, so basically you need a small 3-sample history buffer and calculate the output on the fly. So you could drop the resampler class and write your own sample-by-sample algorithm using just the Lagrange interpolation.
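
For what it’s worth, here is a minimal sketch of that idea (not anything from JUCE or this thread; the struct and member names are made up for illustration). It keeps the last four source samples and evaluates the 4-point Lagrange polynomial at a fractional position between the two middle samples:

#include <array>

// Hypothetical per-sample 4-point (cubic) Lagrange interpolator.
struct FourPointLagrange
{
    std::array<float, 4> history { 0.0f, 0.0f, 0.0f, 0.0f };   // oldest sample first

    // Call once for every source sample consumed.
    void push (float newSample)
    {
        history[0] = history[1];
        history[1] = history[2];
        history[2] = history[3];
        history[3] = newSample;
    }

    // Evaluate between history[1] and history[2] at fractional position t in [0, 1).
    float read (float t) const
    {
        const float ym1 = history[0], y0 = history[1], y1 = history[2], y2 = history[3];
        return ym1 * (t * (t - 1.0f) * (t - 2.0f) / -6.0f)
             + y0  * ((t + 1.0f) * (t - 1.0f) * (t - 2.0f) /  2.0f)
             + y1  * ((t + 1.0f) *  t         * (t - 2.0f) / -2.0f)
             + y2  * ((t + 1.0f) *  t         * (t - 1.0f) /  6.0f);
    }
};

Per output sample you would then advance a fractional read position by your ratio, call push() for each new source sample the integer part crosses, and call read() with the remaining fraction.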

Thanks for your advice! I’ll dig into the implementation details of the Interpolator class and do as you suggest.

If you’re just doing the typical slowdown/speedup style of pitch shifting, then all you really want to do is scrub through the samples at different speeds and interpolate from your current position to get your output samples.

So, to read the sample at its original pitch, you advance phase += 1.0 as you scrub through it; to play it 12 semitones down, you advance phase += 0.5 instead.

Then you can use:

#include <cmath>

float semitone_to_phase_delta(float inSemitoneValue)
{
    // 2^(semitones / 12): +12 -> 2.0 (octave up), -12 -> 0.5 (octave down)
    return std::pow(2.0f, inSemitoneValue / 12.0f);
}

to get the phase delta for a given semitone shift you’d like to apply.

Then use any type of interpolation around your current (fractional) position to generate your output.
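
As a rough sketch of that read loop (mine, not @jakemumu’s; the mono source buffer and function names are assumptions):

#include <cmath>
#include <vector>

// Scrub through a mono source at a fractional read position, linearly
// interpolating between the two neighbouring samples.
void repitchBlock (const std::vector<float>& sourceSamples,
                   double& phase,            // persistent fractional read position
                   float semitoneShift,
                   float* out, int numOut)
{
    const double phaseDelta = std::pow (2.0, semitoneShift / 12.0);

    for (int i = 0; i < numOut; ++i)
    {
        const auto pos = static_cast<size_t> (phase);

        if (pos + 1 >= sourceSamples.size())    // ran off the end of the source
        {
            out[i] = 0.0f;
            continue;
        }

        const auto frac = static_cast<float> (phase - (double) pos);
        out[i] = sourceSamples[pos] * (1.0f - frac) + sourceSamples[pos + 1] * frac;

        phase += phaseDelta;                    // +1.0 = original pitch, +0.5 = octave down
    }
}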

That gets you correctly tuned audio at the sample rate of the sample itself; then you can use the interpolator across the entire block you’ve generated to resample it to the host’s sample rate.
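
Since the source-to-host rate ratio is fixed, this second stage can stay block-based with the stock juce::LagrangeInterpolator. A sketch of what that might look like (assuming repitchedBuffer already holds the tuned audio at the source’s rate; names are mine, not from this thread):

#include <juce_audio_basics/juce_audio_basics.h>

juce::LagrangeInterpolator hostResampler[2];    // one per channel, kept alive between blocks

void resampleToHostRate (const juce::AudioBuffer<float>& repitchedBuffer,
                         juce::AudioBuffer<float>& hostBuffer,
                         double sourceSampleRate, double hostSampleRate)
{
    const double speedRatio = sourceSampleRate / hostSampleRate;

    for (int ch = 0; ch < juce::jmin (2, hostBuffer.getNumChannels()); ++ch)
    {
        // process() returns how many input samples were consumed; in real code you
        // would advance the source read position by that amount, not by numSamples.
        const auto numInputUsed = hostResampler[ch].process (speedRatio,
                                                             repitchedBuffer.getReadPointer (ch),
                                                             hostBuffer.getWritePointer (ch),
                                                             hostBuffer.getNumSamples());
        juce::ignoreUnused (numInputUsed);
    }
}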

You probably want to apply oversampling first to avoid aliasing too :wink:
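
In case it helps, a rough sketch of wiring that up with juce::dsp::Oversampling (the renderUpsampled stub stands in for the re-pitching loop run at the oversampled rate; it is an assumption, not code from this thread):

#include <juce_dsp/juce_dsp.h>

// Hypothetical stub: run the phase/interpolation loop into the oversampled block.
void renderUpsampled (juce::dsp::AudioBlock<float>& upsampledBlock) { juce::ignoreUnused (upsampledBlock); }

// 2 channels, 2^2 = 4x oversampling, polyphase IIR half-band filters.
juce::dsp::Oversampling<float> oversampling { 2, 2,
    juce::dsp::Oversampling<float>::filterHalfBandPolyphaseIIR };

void prepare (int maxBlockSize)
{
    oversampling.initProcessing ((size_t) maxBlockSize);    // allocate internal buffers up front
}

void renderBlock (juce::AudioBuffer<float>& hostBuffer)
{
    juce::dsp::AudioBlock<float> block (hostBuffer);
    auto upBlock = oversampling.processSamplesUp (block);   // oversampled block to render into
    renderUpsampled (upBlock);
    oversampling.processSamplesDown (block);                // low-pass and decimate back to the host rate
}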

Thanks, guys. I had originally tried linear interpolation, as per one of the JUCE examples, which uses the formula suggested by @jakemumu to re-pitch the sample. It had not occurred to me to separate the host-versus-source sample-rate conversion from the pitch-shifting process, so thanks for pointing that out! Also thanks to @gustav-scholda for the anti-aliasing suggestion. The reason I went down the path of exploring the interpolation classes in the first place was a harsh-sounding output, which I now suspect is the result of aliasing.

Back to the drawing board! But this time, armed with helpful hints… :slight_smile:

So, here is my updated render method. For now, there is no oversampling. I’m still trying to get the re-pitch working correctly, using linear interpolation (as I understood it from examples) in conjunction with a smoothed value member called pitchRatio.

void SampleLayer_v3::renderBatch(AudioBuffer<float>& hostBuffer, int blockPosition, juce::int64 playbackPosition, int numSamples, float gainAdjustment)
{
    gainFactor = cc07Gain.getCurrentValue() * cc11Gain.getCurrentValue() * gainAdjustment;

    if (gainFactor <= 0)
    {
        return;
    }
    
    auto processWithLinearInterpolation = [](double currentPosition, SampleSource_v3* source, float currentGain, float* sampleToRender) 
    {
        auto pos = static_cast<int>(currentPosition);
        auto nextPos = pos + 1;
        float sourceCurrent[NumChannels];
        float sourceNext[NumChannels];
        jassert(source != nullptr);
        source->getSample(pos, sourceCurrent);
        source->getSample(nextPos, sourceNext);
        auto alpha = (float)(currentPosition - pos);
        auto invAlpha = 1.0f - alpha;
        auto L = currentGain * ((sourceCurrent[0] * invAlpha) + (sourceNext[0] * alpha));
        auto R = currentGain * ((sourceCurrent[1] * invAlpha) + (sourceNext[1] * alpha));
        jassert(sizeof(sampleToRender) == sizeof(float) * NumChannels);
        sampleToRender[0] = L;
        sampleToRender[1] = R;
    };

    auto sourcePosition = startOffset + playbackPosition;
    auto readLimit = static_cast<int>(source->getLength() - sourcePosition) - 1;
    auto interpolationThreshold = static_cast<int>(std::round(pitchRatio.getCurrentValue() * numSamples));
    auto samplesToRead = std::min(readLimit, std::max(interpolationThreshold, numSamples));
    
    AudioBuffer<float> processingBuffer;
    processingBuffer.setSize(source->getNumChannels(), samplesToRead);
    processingBuffer.clear();

    auto currentPosition = static_cast<double>(sourcePosition);
    for (int bufferIdx = 0; bufferIdx < samplesToRead; bufferIdx++)
    {
        gainFactor = cc07Gain.getNextValue() * cc11Gain.getNextValue() * gainAdjustment;
        float sampleToRender[NumChannels] = {};
        processWithLinearInterpolation(currentPosition, source, gainFactor, sampleToRender);
        processingBuffer.addSample(0, bufferIdx, sampleToRender[0]);
        processingBuffer.addSample(1, bufferIdx, sampleToRender[1]);
        currentPosition += pitchRatio.getNextValue();
    }

    for (int chIdx = 0; chIdx < std::min(NumChannels, hostBuffer.getNumChannels()); chIdx++)
    {
        hostBuffer.addFrom(chIdx, blockPosition, processingBuffer.getReadPointer(chIdx), numSamples);
    }
}

So far, I seem to get back my original (source) pitch, even when pitchRatio.getNextValue() returns a value greater than 1.0 in the debugger. I’ve tried “batch” sizes of 4, 8, and 16 samples. The higher I try to shift the pitch (MIDI note up to one octave above the source root note), the more strange artifacts I hear, but the fundamental pitch remains the same.

What am I missing here?

UPDATE: It turns out I wasn’t returning the number of source samples actually read back to the player, so it couldn’t update its playback position accordingly. Thus, I wasn’t actually advancing through the source at the speed I thought I was. After making that tweak, I get the correct pitch change. Now I need to look into oversampling to get rid of that harsh aliasing sound. Thanks again to all who took the time to read and/or respond.
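
For anyone landing here later, the fix amounts to something like this sketch (assuming renderBatch is changed to return the number of source samples it consumed; the surrounding player code is hypothetical):

// In the player's per-block loop:
auto samplesConsumed = layer->renderBatch (hostBuffer, blockPosition,
                                           playbackPosition, numSamples, gainAdjustment);
playbackPosition += samplesConsumed;   // advance by source samples actually read, not by numSamples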