Using dsp::Oversampling on juce::Synthesiser


#1

For my project I’d like to oversample the processing done by a JUCE Synthesiser using the dsp::Oversampling class, in order to reduce slight aliasing artifacts from my polyBLEP oscillators.
The problem is that the Synthesiser class only takes AudioBuffers, not AudioBlocks. Is there a way around this problem without having to rewrite the JUCE base classes?


#2

Why do you need it to take AudioBlocks?

Don’t you start with an AudioBuffer, then create an AudioBlock that you run through the Oversampler?

I haven’t used dsp::Oversampling, but any changes it makes should actually end up in the original buffer, so you can then just use the buffer, can’t you?

Sorry if I have missed the point, here.

P.S.

  • perhaps you can share a simple example of the problem you are facing if I did not understand what the issue is?
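To illustrate what I mean by the changes landing in the original buffer: an AudioBlock is essentially a non-owning view over the AudioBuffer’s channel data. Here is a standalone sketch of that aliasing idea in plain C++, without JUCE — `BlockView` and `applyGain` are made-up names for illustration only:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stand-in for the AudioBlock idea: a non-owning view that
// aliases externally owned sample data (here, a std::vector).
struct BlockView
{
    float* data;
    std::size_t numSamples;
};

// Processing through the view writes straight into the original storage,
// because the view never copied the samples.
void applyGain (BlockView block, float gain)
{
    for (std::size_t i = 0; i < block.numSamples; ++i)
        block.data[i] *= gain;
}
```

After `applyGain` runs on a view of a buffer, the buffer itself holds the processed samples — which is the behaviour I was assuming above.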

#3

Obvious point. I’ll try that out!

Looking at the DspDemo from JUCE, my understanding was that a new block must be created. That oversampled block is then processed, then downsampled.
But looking at my own code, I simply downsample the original block and it seems to work… D’oh!

template <typename FloatType>
void MonosynthPluginAudioProcessor::applyFilter (AudioBuffer<FloatType>& buffer, std::unique_ptr<LadderFilterBase> filter[], ScopedPointer<dsp::Oversampling<FloatType>>& oversamp)
{
    dsp::AudioBlock<FloatType> block (buffer);
    dsp::AudioBlock<FloatType> oversampledBlock;

    oversampledBlock = oversamp->processSamplesUp (block);

    FloatType* channelDataLeft  = oversampledBlock.getChannelPointer (0);
    FloatType* channelDataRight = oversampledBlock.getChannelPointer (1);

    const int numSamples = static_cast<int> (oversampledBlock.getNumSamples());
    const int stepSize   = jmin (16, numSamples);

    int samplesLeftOver = numSamples;

    for (int step = 0; step < numSamples; step += stepSize)
    {
        // <---SOME PARAMETER UPDATING FOR FILTERS--->

        // clamp the last step so we never run past the end of the block
        const int samplesThisStep = jmin (stepSize, samplesLeftOver);

        filter[0]->Process (channelDataLeft,  samplesThisStep);
        filter[1]->Process (channelDataRight, samplesThisStep);

        samplesLeftOver  -= samplesThisStep;
        channelDataLeft  += samplesThisStep;
        channelDataRight += samplesThisStep;
    }

    oversamp->processSamplesDown (block);
}
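One subtlety in the step loop above: when the (oversampled) block length is not a multiple of the step size, the final step must be clamped, or the filters would read and write past the end of the block. The pattern can be sketched standalone in plain C++ — `processInSteps` is an illustrative name, and halving the samples stands in for the filter processing:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Process a buffer in fixed-size steps (e.g. to update parameters every
// 16 samples), clamping the final step so we never run past the end.
void processInSteps (float* data, int numSamples, int stepSize)
{
    for (int pos = 0; pos < numSamples; pos += stepSize)
    {
        // ... per-step parameter updates would go here ...
        const int samplesThisStep = std::min (stepSize, numSamples - pos);

        for (int i = 0; i < samplesThisStep; ++i)
            data[pos + i] *= 0.5f;   // stand-in for filter->Process()
    }
}
```

With a 37-sample buffer and a step size of 16, the steps process 16, 16 and then 5 samples, covering the whole buffer exactly once.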

#4

Ok, I somehow can’t simply do:

dsp::AudioBlock<FloatType> block (buffer);
oversamp->processSamplesUp(block);
    
    
FloatType* channelDataLeft  = block.getChannelPointer(0);
FloatType* channelDataRight = block.getChannelPointer(1);
    
int numSamples = static_cast<int> (block.getNumSamples());

<---MORE CODE--->

filter[0]->Process(channelDataLeft, stepSize);
filter[1]->Process(channelDataRight, stepSize);

<---MORE CODE--->

oversamp->processSamplesDown(block);

Because then, my signal isn’t processed by the filter.

I also can’t do:

dsp::AudioBlock<FloatType> block (buffer);
oversamp->processSamplesUp(block);
    
    
FloatType* channelDataLeft  = buffer.getWritePointer(0);
FloatType* channelDataRight = buffer.getWritePointer(1);
    
int numSamples = static_cast<int> (buffer.getNumSamples());

<---MORE CODE--->

filter[0]->Process(channelDataLeft, stepSize);
filter[1]->Process(channelDataRight, stepSize);

<---MORE CODE--->

oversamp->processSamplesDown(block);

I get the same, unfiltered signal. Presumably because the pointee’s address has changed by then.

So I’m back at my original question: I have to process the new, oversampled block in my Synthesiser, but the base class doesn’t provide overloads to do that.
I either have to A) rewrite the Synthesiser class, or B) rewrite the Oversampling class. Neither of which I’m looking forward to, nor willing to do…


#5

OK, so my assumption that the Oversampling applies its changes to the AudioBlock it starts with was wrong. It only writes the changes back to the original buffer when you pass its AudioBlock to processSamplesDown (and that’s perfectly logical; sorry for misleading you). Until you want to resample back to the original sample rate, you should work with the AudioBlock returned by processSamplesUp, which is larger.

So, this piece of code that you first posted would be my approach, too:

template <typename FloatType>
void MonosynthPluginAudioProcessor::applyFilter (AudioBuffer<FloatType>& buffer, std::unique_ptr<LadderFilterBase> filter[], ScopedPointer<dsp::Oversampling<FloatType>>& oversamp)
{
    dsp::AudioBlock<FloatType> block (buffer);
    dsp::AudioBlock<FloatType> oversampledBlock;

    oversampledBlock = oversamp->processSamplesUp (block);

    FloatType* channelDataLeft  = oversampledBlock.getChannelPointer (0);
    FloatType* channelDataRight = oversampledBlock.getChannelPointer (1);

    const int numSamples = static_cast<int> (oversampledBlock.getNumSamples());
    const int stepSize   = jmin (16, numSamples);

    int samplesLeftOver = numSamples;

    for (int step = 0; step < numSamples; step += stepSize)
    {
        // <---SOME PARAMETER UPDATING FOR FILTERS--->

        // clamp the last step so we never run past the end of the block
        const int samplesThisStep = jmin (stepSize, samplesLeftOver);

        filter[0]->Process (channelDataLeft,  samplesThisStep);
        filter[1]->Process (channelDataRight, samplesThisStep);

        samplesLeftOver  -= samplesThisStep;
        channelDataLeft  += samplesThisStep;
        channelDataRight += samplesThisStep;
    }

    oversamp->processSamplesDown (block);
}

This does work as expected for you, right?

So if I understood you correctly, the above works as expected, but you are trying to use the same approach and are just missing the API accepting AudioBlock elsewhere (I guess in SynthesiserVoice)?

If that is the issue: can you create an AudioSampleBuffer from the channel pointers extracted from the oversampled AudioBlock, and use it in the part of the processing chain that only takes buffers? AudioBuffer has a constructor for wrapping existing data, but you have to be careful that this buffer doesn’t get resized in any way, or it will reallocate its memory and this will likely mess things up on downsampling.

/** Creates a buffer using a pre-allocated block of memory.

    Note that if the buffer is resized or its number of channels is changed, it
    will re-allocate memory internally and copy the existing data to this new area,
    so it will then stop directly addressing this memory.

    @param dataToReferTo    a pre-allocated array containing pointers to the data
                            for each channel that should be used by this buffer. The
                            buffer will only refer to this memory, it won't try to delete
                            it when the buffer is deleted or resized.
    @param numChannelsToUse the number of channels to use - this must correspond to the
                            number of elements in the array passed in
    @param numSamples       the number of samples to use - this must correspond to the
                            size of the arrays passed in
*/
AudioBuffer (Type* const* dataToReferTo,
             int numChannelsToUse,
             int numSamples)
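The wrapping idea can be sketched without JUCE: the constructor above just stores the channel pointers, so writes through the wrapper land in the memory those pointers refer to. A minimal standalone analogue in plain C++ — `NonOwningBuffer` and `fillWith` are made-up illustrative names, not JUCE API:

```cpp
#include <cassert>
#include <vector>

// Mimics AudioBuffer's "refer to pre-allocated data" constructor: the
// wrapper stores the channel pointers and never copies the samples.
struct NonOwningBuffer
{
    float* const* channels;   // one pointer per channel, owned elsewhere
    int numChannels;
    int numSamples;

    float* getWritePointer (int ch) { return channels[ch]; }
};

// Writing through the wrapper mutates the memory it refers to; this is
// also why such a buffer must never be resized (a resize reallocates,
// breaking the aliasing).
void fillWith (NonOwningBuffer& buf, float value)
{
    for (int ch = 0; ch < buf.numChannels; ++ch)
        for (int i = 0; i < buf.numSamples; ++i)
            buf.getWritePointer (ch)[i] = value;
}
```

In the oversampling case, the "owned elsewhere" memory would be the oversampled block’s channel data, so anything rendered into the wrapper ends up in the block and gets downsampled with it.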

@IvanC wouldn’t it be reasonable for AudioBlock to have an API for getting a pointer to an AudioBuffer? That way, if something that doesn’t accept an AudioBlock (which I assume is @meneervermeer’s current problem) needs to operate on its data, it can safely do so on a buffer, even if it needs to resize it?


#6

The idea of AudioBlock, as I understand it, is that it is safe to create one on the fly in processBlock (i.e. on the audio thread). That’s the reason why the AudioBlock doesn’t allocate memory.

Loophole: creating an AudioBlock with a HeapBlock, which allocates in the HeapBlock. That shouldn’t be allowed, or it should at least try to recycle already-allocated memory if you give it a big enough preallocated HeapBlock. The current implementation deallocates and reallocates the needed size (at least it did a few weeks back when I checked).

So for an AudioBlock to be created, there must be memory already being allocated, which is done using AudioBuffer.

There might be room for some convenience methods though…
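The safe way around that loophole is the usual prepare/process split: allocate the worst-case scratch once, outside the audio thread, and only reuse it while processing. A standalone sketch of the pattern in plain C++ — `OversampleScratch` is an illustrative name, not a JUCE class:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// "Prepare once, reuse on the audio thread": all allocation happens up
// front, so the real-time code path never touches the allocator.
class OversampleScratch
{
public:
    // Called from prepareToPlay (message thread): allocation is allowed.
    void prepare (int maxBlockSize, int factor)
    {
        scratch.resize (static_cast<std::size_t> (maxBlockSize) * factor);
    }

    // Called from processBlock (audio thread): must not allocate, only
    // hand out the preallocated storage.
    float* getScratch (int numSamples, int factor)
    {
        assert (static_cast<std::size_t> (numSamples) * factor <= scratch.size());
        return scratch.data();
    }

private:
    std::vector<float> scratch;
};
```

dsp::Oversampling follows the same shape with its initProcessing call, which is why creating its blocks inside processBlock stays allocation-free.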


#7

I tried to create an intermediate buffer, so to speak, using copyFrom, but this led to numerous crashes…
I’ll try your suggestion, but I think things will get messy.


#8

I’ve managed to oversample the Synthesiser class, but it’s a bit ugly. For testing I’m oversampling both the synth stage and the filter stage, so I have to use two oversampling engines; otherwise the oversamplers’ processing gets glitchy.

This is how I did it.

dsp::AudioBlock<FloatType> block (buffer);
dsp::AudioBlock<FloatType> osBlock;

osBlock = oversamplingSynth->processSamplesUp (block);

// the channel pointers have to live in a named array; a compound-literal
// cast like (FloatType*[]) { ... } is not valid C++
FloatType* osChannels[] = { osBlock.getChannelPointer (0),
                            osBlock.getChannelPointer (1) };

AudioBuffer<FloatType> osBuffer (osChannels, 2,
                                 static_cast<int> (osBlock.getNumSamples()));

// adjust sample rates to the oversampled rate
const double f = static_cast<double> (oversamplingSynth->getOversamplingFactor());
synth.setCurrentPlaybackSampleRate (sampleRate * f);
ampEnvelope->setSampleRate (sampleRate * f);

synth.renderNextBlock (osBuffer, midiMessages, 0, static_cast<int> (osBlock.getNumSamples()));

oversamplingSynth->processSamplesDown (block);
osBlock.clear();

I will now try to just oversample once, process both stages and then downsample.
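Conceptually, that means a single up/down pair wrapping both stages instead of one pair per stage. A deliberately naive standalone illustration of that shape in plain C++ — zero-order-hold upsampling and bare decimation, with none of the anti-aliasing filtering that dsp::Oversampling actually performs, so it only shows the data flow, not the audio quality:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Naive 2x upsample by sample repetition (a real oversampler would
// interpolate and low-pass filter instead).
std::vector<float> upsample2x (const std::vector<float>& in)
{
    std::vector<float> out;
    out.reserve (in.size() * 2);
    for (float s : in) { out.push_back (s); out.push_back (s); }
    return out;
}

// Naive 2x decimation by dropping every other sample (again, a real
// implementation low-pass filters before discarding samples).
std::vector<float> downsample2x (const std::vector<float>& in)
{
    std::vector<float> out;
    out.reserve (in.size() / 2);
    for (std::size_t i = 0; i < in.size(); i += 2)
        out.push_back (in[i]);
    return out;
}
```

With one up/down pair, everything between `upsample2x` and `downsample2x` — synth rendering and filtering alike — runs at the higher rate, which is exactly the "oversample once, process both stages" plan.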