Small optimization in ResamplingAudioSource


The canonical formula for linear interpolation usually goes like this:

v = t * v1 + (1 - t) * v0

where v0 is the starting value, v1 the target value, and t the interpolation parameter in [0, 1].

A simple reorganization of terms can eliminate one multiplication:

v = v0 + t * (v1 - v0)

This can be applied to ResamplingAudioSource:


const float alpha = (float) subSampleOffset;

for (int channel = 0; channel < channelsToProcess; ++channel)
    *destBuffers[channel]++ = srcBuffers[channel][bufferPos] +
                              alpha * (srcBuffers[channel][nextPos] - srcBuffers[channel][bufferPos]);

This also works for pixel colour component calculations, but juce_PixelFormats.h already incorporates this formula.

It may also be desirable to swap the inner and outer loops: loop first over channels, then over the samples within each channel, since this could give better cache-line behaviour. Such a rewrite could also reduce register pressure, and eliminate the multiplication implicit in the two-dimensional array access ([channel][bufferPos]) if a local variable held the current position within the channel data (it could simply be incremented instead of re-indexing the array).


Thanks, yes, that’s a slightly neater way of writing it, though I doubt if it’d make much measurable difference.

As for swapping the inner and outer loops… My instinct would be that it’d make things worse, because it’d run quite a lot more code that way, so would need to have a big caching improvement to make up for that extra work. And I’d guess that for normal sized buffers, the whole thing would probably be in the cache anyway… (Of course there’s no way to tell for sure these days without running a test and measuring it)