dsp::Oversampling internal buffer size issue

As I understand it, dsp::Oversampling is meant to be used like this:

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
    oversampling->initProcessing ((size_t) samplesPerBlock);
}

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    dsp::AudioBlock<float> block (buffer);
    auto oversampledBlock = oversampling->processSamplesUp (block);
    // ... processing on oversampledBlock
    oversampling->processSamplesDown (block);
}

But if the buffer given to processBlock() has more samples than the samplesPerBlock previously passed to prepareToPlay(), the internal buffer is not resized and we get crashes.

I suggest removing the assertions like:

jassert (inputBlock.getNumChannels() <= static_cast<size_t> (ParentType::buffer.getNumChannels()));
jassert (inputBlock.getNumSamples() * ParentType::factor <= static_cast<size_t> (ParentType::buffer.getNumSamples()));

and replacing them with something like:

ParentType::buffer.setSize (inputBlock.getNumChannels(), inputBlock.getNumSamples() * ParentType::factor, false, false, true);
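
(If I read AudioBuffer::setSize() correctly, those last three arguments are keepExistingContent, clearExtraSpace and avoidReallocating, so a call like that should leave the existing allocation untouched whenever the internal buffer is already big enough.)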

samplesPerBlock is actually the maximum number of samples per block, and you are not supposed to receive a block larger than samplesPerBlock

no, you can’t make that assumption

from the doc :

The maximumExpectedSamplesPerBlock value is a strong hint about the maximum
number of samples that will be provided in each block. You may want to use
this value to resize internal buffers. You should program defensively in
case a buggy host exceeds this value. The actual block sizes that the host
uses may be different each time the callback happens: completely variable
block sizes can be expected from some hosts.

That’s why some of the tests done by pluginval deliberately call prepareToPlay() with a smaller samplesPerBlock than the one used in processBlock().

Note that buffer.setSize (numChannels, numSamples, false, false, true) will not reallocate if the buffer is already big enough.

This is a stupid requirement imho.
Those hosts should be banned.

Yes. Here is another thread about that: BlockSize and BufferSize

I think that a ‘defensive’ check (splitting the buffer into smaller chunks if necessary) should be added directly within JUCE, so that we can just stop thinking about it :slight_smile:
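
Just to illustrate (this is only a sketch, not actual JUCE code), a defensive chunked processBlock() could look something like this, assuming a hypothetical maxBlockSize member cached in prepareToPlay() and the oversampling object from the first post:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    const int numSamples = buffer.getNumSamples();
    dsp::AudioBlock<float> fullBlock (buffer);

    // Process the incoming buffer in chunks no larger than the prepared size
    for (int start = 0; start < numSamples; start += maxBlockSize)
    {
        const auto length = (size_t) jmin (maxBlockSize, numSamples - start);

        // Wrap a sub-section of the buffer without copying
        auto block = fullBlock.getSubBlock ((size_t) start, length);

        auto oversampledBlock = oversampling->processSamplesUp (block);
        // ... processing on oversampledBlock
        oversampling->processSamplesDown (block);
    }
}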


+1 for the suggestion to add this to JUCE directly.

In practice, almost all hosts will not call you with a larger block size than the one passed to prepare. However, pluginval validates against the juce::AudioProcessor API, which very explicitly states that this can happen. That’s why the test is there.

The use case may vary but I think this behaviour can be seen in the wild if you have an AudioProcessor instance connected directly to the audio callback (for example via an AudioProcessorPlayer) on some phones…

I agree, it would be nice if JUCE handled this internally. I’d even be happy with simply bailing out of the process block if the size is larger than prepared (that’s what we do in our plugins).
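
For what it’s worth, the bail-out version is only a couple of lines; a sketch, assuming the prepared size has been cached in a hypothetical preparedBlockSize member:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    // Defensive check: skip processing if the host exceeds the prepared size
    if (buffer.getNumSamples() > preparedBlockSize)
        return;

    // ... normal processing
}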

However, whilst this is still in the AudioProcessor API, you have to account for that and that means testing…

If a host is really sending more samples in processBlock than what it announced in prepareToPlay, then it’s buggy. It’s already enough work to deal with the “normal” behaviour of DAWs (bypass state impossible to detect in some DAWs, Live ignoring the “not automated” state of parameters, Logic and no sidechaining = self sidechaining, FL Studio and buffer size changes at every processing call etc.); I would never accept adding code to deal with their bugs. It would be more efficient to tell them to solve their issues. Moreover, to me the official SDKs’ documentation says that “maximumExpectedSamplesPerBlock” is supposed to be strictly the maximum, and not something else. It would be a lot of useless pain to deal with this in every single algorithm, and we would need to reallocate stuff all the time in the audio thread.

Devs who decided it’s not a problem to have more samples in the processBlock function in their DAW should go to a hell where they would have to code for all eternity a new DAW from scratch which deals with a majority of hellish plug-ins having memory leaks, CPU spikes, thread safety issues, wrong minus signs, memory allocation on the audio thread, mono IIR filter objects being used in stereo, no anti-aliasing on font rendering at very small font sizes, and global variables all around the original code, all written in plain C instead of modern C++. And they would have to make their DAW stable enough that it would never crash, otherwise a little devil would burn their asses at every crash for 30 minutes.


like DspModulePluginDemo? :wink:

DspModulePluginDemo fails PluginVal tests

+1
At the moment any plugin using dsp::Oversampling will fail those tests, unless we call initProcessing() in processBlock() when the buffer size gets bigger, but the algorithm isn’t designed to do so.
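
For the record, that workaround would be something along these lines (preparedBlockSize being a hypothetical member set in prepareToPlay()), but initProcessing() allocates, so it means memory allocation on the audio thread:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
    // Workaround: re-initialise the oversampler if the host exceeds the prepared size.
    // Warning: initProcessing() allocates memory, so this is not real-time safe.
    if (buffer.getNumSamples() > preparedBlockSize)
    {
        preparedBlockSize = buffer.getNumSamples();
        oversampling->initProcessing ((size_t) preparedBlockSize);
    }

    dsp::AudioBlock<float> block (buffer);
    auto oversampledBlock = oversampling->processSamplesUp (block);
    // ... processing on oversampledBlock
    oversampling->processSamplesDown (block);
}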

Again, it seems to me that the documentation in the SDKs (not in JUCE) has always stated that this maximum size is supposed to be strict, and that no DAW can do differently without infringing the rules of the plug-in / SDK standard. If we accept that, then why not take into account sample rate changes in processBlock as well?

About the memory allocation in the plugin demo, it’s probably the Convolution algorithm, which performs some memory allocation once on the audio thread when the impulse response changes. That could still be improved of course. I already know about some of its real-world issues, and they will be solved in the JUCE base code soon.

Really? In reality, this is what programming plugins is largely about: debugging esoteric and undocumented behaviour and then providing workarounds for it. All plugin developers that wish to be compatible with at least the “big” hosts do this to some extent or another.
In addition to your examples, there was a thread not so long ago about Cubase (and now Bitwig I’m told) calling setProgramNum immediately after setStateInformation and all the possible workarounds, which everyone (including us) has had to implement.*

This depends hugely on the specific company. I’ve had some tickets open with plugin developers for multiple years over what is obviously a bug in their code which has simply never been fixed. Do I simply never support these plugins and be labelled as an “incompatible, buggy host”, or do I add workarounds for these plugins the same as every other host does?

Yes, as I’ve said, this would be true if the comment wasn’t there for juce::AudioProcessor, but it is, and this has been seen in production.
My preferred way of dealing with this would be as follows:
• The JUCE plugin wrappers detect any calls larger than the “preparedSize” and either split these up into smaller chunks or (and this is fine for me too) simply avoid processing in these cases
• If this were to happen, we could remove the comment from juce::AudioProcessor::processBlock and then I could remove the test for it in pluginval
• Further, the comment in AudioProcessor should be changed to explicitly state that it is a contract violation to call processBlock with a bigger buffer size than it was prepared with (as a bonus, this could also be a C++20 Contract!)

It’s worth pointing out that whilst memory allocation (or deallocation) should be avoided in real-time threads for real-time-guarantees, it’s not actually a plugin SDK or JUCE API violation. It’s just bad practice. This is something that pluginval tests for in order to help developers where they may have introduced these by accident, need some help in finding them or simply want to improve the performance of their code. This is also why this happens on levels greater than 5. Level 5 should be seen as “plugin host conformance” and up to 10 as “implements all the best practices and stress tests your plugin looking for possible issues”.


* As an aside, I find this behaviour so baffling I can’t figure out how it has arisen. There must be something I’m missing, or maybe this is a mis-communication bug in the JUCE wrapper. However, it’s been there for many months now and to my knowledge no one is saying “I’m just not going to support Cubase or Bitwig”.


Hello !

Oh I hear your pain, I can’t begin to imagine how frustrating something like that must be on the DAW dev side, with customers complaining that their favorite (but buggy) plug-ins are not supported. I also have some experience with the Bitwig devs: they were able to reproduce a serious bug I reported after 5-6 mails with videos, and they told me they would do something about it, but the issue was still there and annoying me as a user 2 years later (I bought Ableton Live at that point lol)

I would support the JUCE team if they added a detector which skips any processing when the problem happens.