Should I use ResamplingAudioSource for my sampler? [Solved]

Hi,

I want to implement better resampling for my sampler than the linear interpolation used in the default JUCE Sampler (the one implemented in SamplerVoice::renderNextBlock(…)).

The library I’m currently working with is recorded at 44.1 kHz, and playing back at a multiple of that rate (88.2 kHz, for example) sounds fine with linear interpolation. But switching to 48 kHz, 96 kHz, etc. crunches the sound quite badly.

I saw ResamplingAudioSource, but I’m unsure whether it’s what I should be using (wrapping the sampler in an AudioSource and feeding that into a ResamplingAudioSource)?

What confuses me is this: why is linear interpolation needed inside SamplerVoice::renderNextBlock if ultimately I need to use ResamplingAudioSource anyway?

Thanks in advance.

I’ve tried CatmullRomInterpolator and LagrangeInterpolator, with one per audio channel in every voice (two per voice for stereo), and I call reset() on all of a voice’s interpolators in startNote (since stopNote may tail off).

There is a very noticeable drop in quality.

// this is correct, right? (speedRatio = input samples consumed per output sample)
const double speedRatio = sampleFileSampleRate / hostSampleRate;
// (transposing is disabled, so I don't care about changing the speed of a sample)
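
For reference, here’s roughly the per-voice setup (a simplified sketch - everything except the JUCE class is my own naming):

struct MyVoice   // sketch only, not the real voice class
{
    juce::CatmullRomInterpolator interpolatorL, interpolatorR;  // one per channel

    void startNote()
    {
        // stopNote may tail off, so the interpolators are reset when a
        // note (re)starts, not when it stops
        interpolatorL.reset();
        interpolatorR.reset();
    }
};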

Update:
Increasing the host’s buffer size improves the quality.

The quality drop is still perceptible even with the largest buffer size on my machine (2016 samples). And even if it were perfect, keeping the buffer that high wouldn’t be a real solution.

What am I doing wrong?

The only thing I can think of now is that the number of input samples I feed in differs from the number of output samples I ask the interpolator to produce - I pass in pitchRatio * blockSize input samples, where (this code is from the voice):

pitchRatio = std::pow (2.0, (midiNoteNumber - sound->midiRootNote) / 12.0)
             * sound->sourceSampleRate / getSampleRate();
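
To sanity-check that with concrete numbers (a made-up case): for a note one octave above the root, with a 44.1 kHz sample in a 48 kHz host,

pitchRatio = 2.0 * 44100.0 / 48000.0 = 1.8375

so a 512-sample output block should consume about 1.8375 * 512 ≈ 941 input samples.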

Update2:
I found an error in my calculations that was accumulating faster at lower buffer sizes - so changing the buffer size no longer produces any (perceptible) difference in the audio.
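
For anyone hitting something similar: per the JUCE docs, process() returns the number of input samples it actually consumed, so the read position should be advanced by that return value rather than by a freshly computed pitchRatio * blockSize - otherwise per-block rounding can accumulate. A sketch, with my own variable names:

// interp is the voice's per-channel CatmullRomInterpolator;
// sourceReadPos is this voice's read position into the sample data
const int consumed = interp.process (pitchRatio,
                                     sourceData + sourceReadPos,
                                     outputData,
                                     blockSize);
sourceReadPos += consumed;   // advance by what was actually used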

Quality remains quite badly affected (it is now consistent across buffer size settings, but still unsatisfying).

I am not switching to ResamplingAudioSource. Even though no one has responded yet, I am 90% sure I shouldn’t be using it in my case: I would still need resampling when playing at a changed pitch (speed), and that has to happen at the voice-rendering level. So it doesn’t make sense to have resampling in two places with two different qualities.

Update3:
OK, I’ve been digging around the forum history and found this: LagrangeAudioSource?

This makes me think I should have resampling in both places - a simpler one (like the Lagrange interpolator) for speed changes inside the sampler voice, and a more sophisticated one (ResamplingAudioSource) fed with the entire rendered output of the sampler.

Is this what I’m supposed to do? Anyone…


Solved - I am resampling both at voice level (to handle transposing, i.e. playing the recorded sample at a different speed) and at Sampler level.

At voice level I used one CatmullRomInterpolator per rendered channel.

At Sampler level I used ResamplingAudioSource. I wrapped the sampler in a SamplerAudioSource and connected that to the RAS - here is how I’ve done it: How do I wrap a Sampler/Synthesiser inside of an AudioSource?
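
In case it helps, the wiring looks roughly like this (a sketch only - SamplerAudioSource, inputRate and outputRate are my own placeholder names, not necessarily identical to the linked post):

// An AudioSource that just renders the Synthesiser into each block
class SamplerAudioSource : public juce::AudioSource
{
public:
    SamplerAudioSource (juce::Synthesiser& s, juce::MidiBuffer& m)
        : synth (s), midi (m) {}

    void prepareToPlay (int /*samplesPerBlock*/, double sampleRate) override
    {
        synth.setCurrentPlaybackSampleRate (sampleRate);
    }

    void releaseResources() override {}

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& info) override
    {
        info.clearActiveBufferRegion();
        synth.renderNextBlock (*info.buffer, midi, info.startSample, info.numSamples);
    }

private:
    juce::Synthesiser& synth;
    juce::MidiBuffer& midi;
};

// ... and then the sampler-level resampling on top of it:
SamplerAudioSource samplerSource (synth, midiBuffer);
juce::ResamplingAudioSource resampler (&samplerSource, false, 2); // false = don't delete input
resampler.setResamplingRatio (inputRate / outputRate);            // input samples per output sample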

I haven’t measured performance, but I assume the Catmull-Rom interpolation is quite a bit faster than ResamplingAudioSource. This is my reasoning for resampling in two places: it would be too expensive to run the heavier RAS on every voice, so there I do the simpler, lighter interpolation, while at sampler level the higher-quality resampling is affordable (and critically needed).

I’m only unsure how acceptable the quality of the Catmull-Rom interpolation is, since in my current scenario I don’t care about transposing… I guess I’ll find out.

I hope this is helpful to someone.

P.S.

  • I’ve noticed that my last few topics got no responses, and I suspect I might be asking very basic questions… If that’s the case, please accept my apology for my ignorance. It’s probably not an excuse, but I’ve only been studying DSP (and C++, JUCE, etc.) for something like six months, so I’m still pretty new to all of this.

I don’t think there is any need to apologise, and I certainly doubt your questions are too basic.

If anything, they’re not basic enough.

Basically, only someone with good experience writing sample-based engines will have much idea… and actually, most plugins are not sample-based. So you’ve probably just been unlucky that nobody with that experience saw your posts.
