[SOLVED] Oversampling class - what's wrong here?

I’m trying to use the juce::Oversampling class, but I can’t get my head around why it isn’t working. I’ve looked at the example in DSPModulePluginDemo and various ones online.

My problem is that when I use the downsample function, everything becomes a big noisy mess.

void WavetableMorphEngine::processBlock(dsp::AudioBlock<float> &p_audio_block) {
    auto upsampled = m_oversampling->processSamplesUp(p_audio_block);

    // write sinewave onto upsampled block
    for (int sample = 0; sample < DSP_BLOCK_SIZE * m_oversampling_factor; ++sample) {
        upsampled.setSample(0, sample, OSCILLATOR_DEVELOPMENT_GAIN * sin(m_wt_pos * 2 * M_PI));
    }

    // upsampled buffer is good
    // downsampled buffer is garbage...
    m_oversampling->processSamplesDown(p_audio_block);
}

I set up the Oversampling like this:

void WavetableMorphEngine::setSamplerateAndOversamplingExponent(float p_samplerate, int p_exp) {
    m_oversampling.reset(new dsp::Oversampling<float>(
        2, p_exp, dsp::Oversampling<float>::FilterType::filterHalfBandPolyphaseIIR, USE_MAX_OVERSAMPLING_QUALITY));
}

Anyone seeing what I’m doing wrong?

Ok, I just figured it out! I had miscalculated m_oversampling_factor: the value I stored was actually the oversampling exponent, not the factor. I expected the factor to be 4, but it was 2, which led to only half the buffer being filled with the sinewave.

It's probably safer to query the number of samples from the upsampled block directly, like
for (size_t sample = 0; sample < upsampled.getNumSamples(); ++sample)

DSP_BLOCK_SIZE looks like a constant to me. I guess you ensure a constant block size somewhere in your processing? I'd try to write the processing code so that it doesn't depend on such constants. You can still build a mechanism around your processing code to guarantee a fixed size, but always taking the number of samples as reported by the buffer makes your code safer and more re-usable in the common case where you can't guarantee a constant block size.

I am trying to write AVX code, hence I am using a block size of 16 samples for my processing. Right now I just choose the block size to be divisible by 16, but in the long run I plan on using a FIFO to guarantee my 16 samples (as discussed in another thread here).

I see :slight_smile: You should however keep in mind that dsp classes as implemented in JUCE tend to perform more efficiently with larger block sizes, so calling the whole graph with 16-sample blocks could actually perform worse for the non-AVX-optimised dsp functions.

Just as a thought: depending on what your signal chain looks like, it might or might not make sense to use the FIFO only around your custom dsp functions. In case they all work on the upsampled signal, you could e.g. always feed the whole block into the oversampler and slice the oversampled block into constant-sized pieces. However, there are many factors and no one-size-fits-all answer, so happy AVX coding, this is always fun :wink:


Yeah, those are certainly considerations to keep in mind. The big idea behind the 16 samples right now is modulation speed (it's gonna be a synth). In my last project I rendered everything sample by sample, which of course is a big no-no performance-wise. But at the same time, it allowed for per-sample modulation, so you could do things like FM via the modulation matrix. By rendering an entire buffer, these things would either not be possible or have a rather big delay.

So I’m trying to process everything based on the aforementioned DSP_BLOCK_SIZE, which I can then vary to hopefully find a good tradeoff between processing speed and quick modulation.

I hope the JUCE team renames this constructor argument, since it is really the exponent rather than the factor. Renaming an argument won’t break any code…