Before rendering your synth, you first need to get an "up-sampled" audio block… render into it, then call processSamplesDown to return to the original block size (the filtering happens on the way back down).
Remember that the frequency the synth renders at needs to be divided by the oversampling factor, since it will effectively get multiplied back up on down-sampling.
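To make that frequency adjustment concrete, here's a minimal standalone sketch (the function name is mine, not from the post): computing an oscillator's per-sample phase increment at the oversampled rate is equivalent to dividing the frequency by the oversampling factor at the original rate.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: an oscillator's per-sample phase increment.
// Rendering into a 4x oversampled block means the effective sample rate
// is four times higher, which gives the same increment as dividing the
// frequency by the oversampling factor at the original rate.
inline double phaseIncrement (double freqHz, double sampleRate)
{
    return 2.0 * 3.14159265358979323846 * freqHz / sampleRate;
}
```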
When rendering audio (rather than processing incoming audio data), it's really unnecessary to up-sample an empty buffer. Up-sampling is usually done by calling processSamplesUp on the oversampling class, but for my own purposes I added a new function to the JUCE Oversampling class that just returns the empty up-sampled buffer (sized according to the oversampling rate). It can probably be optimised further, but it works and avoids the extra processing that processSamplesUp does:
```cpp
template <typename SampleType>
dsp::AudioBlock<SampleType> Oversampling<SampleType>::getUnprocessedUpsampleBlock (const dsp::AudioBlock<SampleType>& inputBlock) noexcept
{
    jassert (! stages.isEmpty());

    if (! isReady)
        return {};

    auto audioBlock = inputBlock;

    for (auto* stage : stages)
        audioBlock = stage->getProcessedSamples (audioBlock.getNumSamples() * stage->factor);

    return audioBlock;
}
```
Before calling those functions for up/down-sampling, you need to set up the oversampling class instance, e.g. here setting up 4x oversampling:
```cpp
oversampling.reset (new dsp::Oversampling<float> (numChannels, 2, dsp::Oversampling<float>::filterHalfBandFIREquiripple, false));
```
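Note that the second constructor argument is not the oversampling factor itself but the number of 2x oversampling stages, so the resulting factor is 2^stages (2 stages → 4x). A small sketch of that relationship (the helper function is mine, for illustration):

```cpp
#include <cassert>
#include <cstddef>

// Sketch: dsp::Oversampling's "factor" argument is the number of 2x
// stages, so the actual oversampling factor is 2^numStages.
inline std::size_t oversamplingFactorForStages (std::size_t numStages)
{
    return static_cast<std::size_t> (1) << numStages;
}
```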
and the oversampling instance needs to be initialised when the host calls prepareToPlay:
```cpp
void prepareToPlay (int sampleRate, int samplesPerBlock, int channels)
{
    // Use this method as the place to do any pre-playback
    // initialisation that you need..
    // Note: the ProcessSpec isn't actually used by Oversampling::initProcessing;
    // it's only needed if you also prepare other dsp modules here.
    dsp::ProcessSpec spec;
    spec.sampleRate = sampleRate;
    spec.maximumBlockSize = static_cast<uint32> (samplesPerBlock);
    spec.numChannels = static_cast<uint32> (channels);

    oversampling->reset();
    oversampling->initProcessing (static_cast<size_t> (samplesPerBlock));
}
```
In terms of converting an AudioBuffer to a dsp::AudioBlock, here's how I did it
(where mTargetBuffer is defined as AudioBuffer<float>& mTargetBuffer;, and mWTsettings is a variable in my synth code that holds the target buffer pointers for generated-sound insertion):
```cpp
numSamples = targetNumSamples * DSP_OVERSAMPLING_RATE;

dsp::AudioBlock<float> targetBlock = dsp::AudioBlock<float> (mTargetBuffer);
dsp::AudioBlock<float> blockOut = overSampling->getOverSampleBuffer (targetBlock);

mWTsettings.bufferL = blockOut.getChannelPointer (0);
mWTsettings.bufferR = blockOut.getChannelPointer (1);

commonSoundGen (numSamples);

overSampling->downSample (targetBlock);
```
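The size bookkeeping above is the important part: the up-sampled block holds DSP_OVERSAMPLING_RATE times more samples, and down-sampling brings it back to the target block size. A deliberately naive standalone sketch of just that relationship (JUCE's processSamplesDown low-pass filters with its halfband stages before decimating; this version keeps every Nth sample purely to show the sizes):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the size bookkeeping only: down-sampling an oversampled
// buffer by `factor` returns a buffer of the original block size.
// Real down-sampling (e.g. JUCE's) filters before decimating; this
// naive version just keeps every Nth sample.
std::vector<float> naiveDecimate (const std::vector<float>& oversampled, std::size_t factor)
{
    std::vector<float> out;
    out.reserve (oversampled.size() / factor);

    for (std::size_t i = 0; i < oversampled.size(); i += factor)
        out.push_back (oversampled[i]);

    return out;
}
```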
I created a simple helper/wrapper class to simplify things for myself:
```cpp
#ifndef DspOversampling_hpp
#define DspOversampling_hpp

#include "…/JuceLibraryCode/JuceHeader.h"

#define DSP_OVERSAMPLING_RATE 4

using namespace dsp;

class DspOverSampling
{
public:
    DspOverSampling (int numChannels)
    {
        // Default of 4x oversampling: the second argument is the number
        // of 2x stages, so 2 stages give 2^2 = 4x.
        oversampling.reset (new dsp::Oversampling<float> (numChannels, 2, dsp::Oversampling<float>::filterHalfBandFIREquiripple, false));
    }

    ~DspOverSampling() {}

    void prepareToPlay (int sampleRate, int samplesPerBlock, int channels)
    {
        // Note: the ProcessSpec isn't actually used by Oversampling::initProcessing;
        // it's only needed if you also prepare other dsp modules here.
        dsp::ProcessSpec spec;
        spec.sampleRate = sampleRate;
        spec.maximumBlockSize = static_cast<uint32> (samplesPerBlock);
        spec.numChannels = static_cast<uint32> (channels);

        oversampling->reset();
        oversampling->initProcessing (static_cast<size_t> (samplesPerBlock));
    }

    inline dsp::AudioBlock<float> getOverSampleBuffer (dsp::ProcessContextReplacing<float> context)
    {
        // Avoid any up-sampling processing -- just get the up-sampled block
        return oversampling->getUnprocessedUpsampleBlock (context.getInputBlock());
    }

    inline void downSample (dsp::ProcessContextReplacing<float> context)
    {
        oversampling->processSamplesDown (context.getOutputBlock());
    }

private:
    std::unique_ptr<dsp::Oversampling<float>> oversampling;
};

#endif /* DspOversampling_hpp */
```