The quest for better Sampler Interpolation



Hello Everyone!

My first month with JUCE has been great so far. I have a question: I’m building the basics of a little sampler engine in JUCE. These days I’m trying to improve the linear interpolation function in the SamplerVoice class. First, it can be improved by using some of the curves in this paper:

This gives great results for pitch-shifting a note down (usually 4 points are fine).
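For reference, a common 4-point curve from that family is 3rd-order Hermite (Catmull-Rom) interpolation; this is just a sketch of the general idea and may not match the exact curve the paper recommends:

```cpp
#include <cmath>

// 4-point, 3rd-order Hermite (Catmull-Rom) interpolation.
// xm1, x0, x1, x2 are four consecutive samples; t in [0, 1) is the
// fractional read position between x0 and x1.
float hermite4 (float xm1, float x0, float x1, float x2, float t)
{
    const float c0 = x0;
    const float c1 = 0.5f * (x1 - xm1);
    const float c2 = xm1 - 2.5f * x0 + 2.0f * x1 - 0.5f * x2;
    const float c3 = 0.5f * (x2 - xm1) + 1.5f * (x0 - x1);
    return ((c3 * t + c2) * t + c1) * t + c0;   // Horner evaluation
}
```

At t = 0 it returns x0 exactly, and on linear input it reproduces plain linear interpolation, so it can drop in where the linear version currently sits.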

For pitch-shifting up, the problem is different: thanks to the increasing resolution, linear interpolation can be fine, but any content in the wav file that exceeds the Nyquist frequency (samplerate/2) will produce the most horrible artifacts, because the harmonics fold back and reappear as aliasing. Can a pitch-dependent FIR filter on the interpolated output solve all these problems by itself? And is it not too heavy (considering that a sampler could play dozens of voices)?
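As a sketch of the pitch-dependent FIR idea: design a linear-phase windowed-sinc low-pass whose cutoff tracks the pitch ratio, so content that would land above Nyquist after the shift is attenuated first. The tap count and Hann window here are arbitrary illustrative choices, not from any particular source:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// Design a windowed-sinc low-pass FIR whose cutoff scales with pitchRatio.
// For upward shifts (ratio > 1) the cutoff drops to (fs/2) / ratio.
std::vector<float> makeAntiAliasFir (double pitchRatio, int numTaps = 31)
{
    const double pi = 3.14159265358979323846;
    const double cutoff = 0.5 / std::max (1.0, pitchRatio);   // normalised: 1.0 == fs
    std::vector<float> taps ((size_t) numTaps);
    const int mid = numTaps / 2;
    double sum = 0.0;

    for (int n = 0; n < numTaps; ++n)
    {
        const double k = n - mid;
        const double sinc = (k == 0.0) ? 2.0 * cutoff
                                       : std::sin (2.0 * pi * cutoff * k) / (pi * k);
        const double hann = 0.5 - 0.5 * std::cos (2.0 * pi * n / (numTaps - 1));
        taps[(size_t) n] = (float) (sinc * hann);
        sum += taps[(size_t) n];
    }

    for (auto& t : taps)   // normalise DC gain to 1
        t = (float) (t / sum);

    return taps;
}
```

The obvious real-time caveat: redesigning taps per voice per note costs CPU, so a practical version would precompute a small bank of filters for quantised pitch ratios.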

Sinc interpolation is usually seen as the best way to achieve perfect-quality interpolation. From what I’ve seen, it seems best suited to offline resampling, which is why there’s plenty of material on that use case. I also know that any resampling algorithm could be adapted to this problem, but to me it feels like using a helicopter to buy some groceries at the store next door.
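For completeness, the "helicopter" option, direct windowed-sinc interpolation evaluated per output sample, could look roughly like this (halfWidth and the window are illustrative choices; real implementations use precomputed tables):

```cpp
#include <cmath>
#include <vector>

// Direct (table-free) windowed-sinc interpolation: evaluate a Hann-windowed
// sinc around the fractional read position 'pos'. 'halfWidth' taps per side
// trade CPU for quality; out-of-range input is treated as silence.
float sincInterpolate (const std::vector<float>& x, double pos, int halfWidth = 8)
{
    const double pi = 3.14159265358979323846;
    const int centre = (int) std::floor (pos);
    float out = 0.0f;

    for (int k = centre - halfWidth + 1; k <= centre + halfWidth; ++k)
    {
        if (k < 0 || k >= (int) x.size())
            continue;

        const double d = pos - k;           // distance from this tap
        const double sinc = (d == 0.0) ? 1.0 : std::sin (pi * d) / (pi * d);
        const double win  = 0.5 + 0.5 * std::cos (pi * d / halfWidth);  // Hann taper
        out += (float) (x[(size_t) k] * sinc * win);
    }
    return out;
}
```

At integer positions the sinc collapses to a single tap, so the original samples pass through untouched; the cost is 2·halfWidth multiplies per output sample per voice, which is exactly why it feels like overkill here.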

Has anyone solved this issue, or does anyone know a resource for this specific problem? (maybe code :stuck_out_tongue:)


I am using the resampler class from Cockos’ WDL to do my resampling; it has various algorithms with tweakable parameters included:

You don’t need to include/compile the whole library to be able to use the resampler, the following files are enough:


For JUCE-based code, it has the downside that it uses interleaved audio buffers for stereo/multichannel, while JUCE uses split buffers. (It also defaults to 64-bit floating-point doubles, but that can be changed to 32-bit floats with a preprocessor define.)
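A minimal sketch of the split ↔ interleaved conversion this implies, using plain std::vector instead of the actual JUCE/WDL buffer types for clarity:

```cpp
#include <vector>
#include <cstddef>

// Pack JUCE-style split channel buffers into the interleaved layout
// (L R L R ...) that WDL's resampler expects.
void interleave (const std::vector<std::vector<float>>& split, std::vector<float>& out)
{
    const size_t numCh = split.size();
    const size_t numSamples = numCh > 0 ? split[0].size() : 0;
    out.resize (numCh * numSamples);
    for (size_t ch = 0; ch < numCh; ++ch)
        for (size_t i = 0; i < numSamples; ++i)
            out[i * numCh + ch] = split[ch][i];
}

// Unpack interleaved output back into per-channel buffers.
void deinterleave (const std::vector<float>& in, size_t numCh,
                   std::vector<std::vector<float>>& split)
{
    const size_t numSamples = numCh > 0 ? in.size() / numCh : 0;
    split.assign (numCh, std::vector<float> (numSamples));
    for (size_t ch = 0; ch < numCh; ++ch)
        for (size_t i = 0; i < numSamples; ++i)
            split[ch][i] = in[i * numCh + ch];
}
```

In a real voice you would pre-size these scratch buffers in prepareToPlay so no allocation happens on the audio thread.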


Thank you so much! This seems like a great library! I’ll post my results when I have them. It’s crazy: yesterday I spent the whole day checking everything I could find on GitHub, but I didn’t come across this library!


I have successfully integrated the WDL resampler into my SamplerVoice code. I process the needed samples from the file and get resampled output that perfectly fits the process block. Then I can scan the buffer a second time to apply any kind of volume modulation. For now, to process the left and right buffers I simply use two resamplers, one per channel. Everything is fine except for two things:

  1. A quick scan through the class code shows that the WDL resampler does a lot of memory operations, which is a big NO on the audio thread.
  2. Since the resampling works through a FIR or IIR filter, and since I’m basically flushing the state at each block, the first samples are delayed/smeared by the filter’s impulse response. The result is a much lower volume than the original interpolation, and possible wobbling of the output at larger buffer sizes.
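To illustrate point 2: a filter whose delay line persists across process() calls avoids the smeared, low-volume onset entirely. This toy FIR (not WDL’s actual code) shows the principle:

```cpp
#include <vector>
#include <cstddef>

// A block-based FIR whose state survives between blocks. If the delay line
// were flushed at every block, the first (taps - 1) samples of each block
// would be convolved against zeros, causing the volume drop / smearing
// described above.
class StatefulFir
{
public:
    explicit StatefulFir (std::vector<float> tapsIn)
        : taps (std::move (tapsIn)), state (taps.size(), 0.0f) {}

    void process (const float* in, float* out, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
        {
            // Shift the delay line and insert the new sample. (Fine for a
            // sketch; a real-time version would use a circular buffer.)
            for (size_t k = state.size() - 1; k > 0; --k)
                state[k] = state[k - 1];
            state[0] = in[i];

            float acc = 0.0f;
            for (size_t k = 0; k < taps.size(); ++k)
                acc += taps[k] * state[k];
            out[i] = acc;
        }
    }

private:
    std::vector<float> taps;
    std::vector<float> state;   // persists between blocks — never flushed
};
```

Only the very first block of the note ramps in; every later block continues seamlessly from the carried-over state, which is exactly what flushing per block throws away.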

That’s why I’ll try to implement Laurent de Soras’ resampler.

I could prepare the MipMaps in my custom SamplerSound class and try its resampler to see what I get. Alternatively, the Zita algorithm could be nice to implement.
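A rough sketch of what building octave mipmaps could look like: each level is low-passed and decimated by two, so upward shifts can read from a level whose content already fits below Nyquist. The 3-tap kernel here is a crude placeholder, not de Soras’ actual half-band filters:

```cpp
#include <vector>
#include <cstddef>

// Build octave "mipmap" levels of a sample. Level 0 is the original;
// each later level is low-passed and decimated by 2.
std::vector<std::vector<float>> buildMipMaps (std::vector<float> level0, int numLevels)
{
    std::vector<std::vector<float>> levels { std::move (level0) };

    for (int l = 1; l < numLevels; ++l)
    {
        const auto& prev = levels.back();
        std::vector<float> next (prev.size() / 2);

        for (size_t i = 0; i < next.size(); ++i)
        {
            // Clamp out-of-range reads to silence at the edges.
            auto at = [&] (long j)
            {
                return (j < 0 || j >= (long) prev.size()) ? 0.0f : prev[(size_t) j];
            };
            const long c = (long) (2 * i);
            // Crude [0.25, 0.5, 0.25] low-pass before decimation (placeholder
            // for a proper half-band filter).
            next[i] = 0.25f * at (c - 1) + 0.5f * at (c) + 0.25f * at (c + 1);
        }
        levels.push_back (std::move (next));
    }
    return levels;
}
```

At play time a voice shifting up by one octave would read level 1 at the original rate, leaving only a fractional ratio for the interpolator, which is the whole point of the mipmap scheme.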

Fun thing: while scanning the whole internet for a solution, I ended up on a KVR Audio thread where past me (paoling) replied to the very problem I have now :smiley:

I still think that, even if a bit naive, my old solution is not so unreasonable, and it’s probably very fast.