[DSP module discussion] New Oversampling class

Thanks for making this available.

So, has anyone tried the new dsp::Oversampling class yet?

I have not yet, but I plan to later.

Is there online documentation for the dsp::Oversampling class? I can't find it by searching for “versampl” on this page: https://www.juce.com/doc/classes.

Is there a demo project utilizing the dsp::Oversampling class?

How do I install the dsp::Oversampling class? I find no trace of it in my installation from early August (but maybe it would suffice to download the latest version from here: https://www.juce.com/get-juce/download)?


Hello!

The class is on the develop branch for now; that’s why the documentation is not available on https://www.juce.com/doc/classes yet. You’ll find the new class there, along with an updated version of the DSP module plug-in demo showing how to use the dsp::Oversampling class :wink:


OT: Might be worth adding a www.juce.com/doc-devel/classes to encourage people to use new features, especially given the fixing policy towards the master branch: sticking with master only makes sense for legacy/released projects, not for projects currently in development (and that’s when you need documentation most…)

I didn’t do any deeper testing, but I think it would be better if Oversampling::getLatencyInSamples() returned an integer latency and, in case a fractional delay is introduced, an internal delay were applied automatically before downsampling.

Otherwise, every dev would need to add this to every project. I might be wrong, but I don’t see a use case where a fractional delay would be desired.

Ideally, an extra delay could be added to compensate for any delay in the upsampled data (e.g. at 4x upsampling, processing with a 2-sample delay will introduce a delay of 0.5 samples in the downsampled data).

Well, I disagree here. The Oversampling class is doing exactly what its name says: oversampling.

The reason why Oversampling::getLatencyInSamples() doesn’t return an integer value is that most of the filters that can be used inside introduce a non-integer latency, and it’s a way of saying that it’s up to the developer to deal with it. It’s also a reminder that using IIR filters (the polyphase allpass filters) does introduce latency, contrary to a myth caused by the massive use of well-known C++ files that are still available on music-dsp.com, even if doing so is called the “minimum-phase” approach. And even if all the filters are linear-phase FIRs, the resulting additional latency can be non-integer as well, because of the division by the oversampling factor.

So, what should be done now that we know the latency is non-integer? Obviously, a dsp::Delay class would be useful: not only to compensate for the extra fractional delay, since DAWs expect integer latencies in their APIs and since any parallel processing needs it to work properly, but also to implement an internal dry/wet mix in a plug-in if needed. It doesn’t exist yet, so it’s up to you to code one in the meantime.

Another thing which could be a temporary solution is to make your own custom Oversampling class, which inherits from the original, so you can change the filter initialisation in the constructor and choose designs which produce an integer additional latency, by tweaking the filter design method arguments until (std::abs(latency - round(latency)) < epsilon).

One other thing I would like to say is that dealing with fractional delays properly is not as easy as it sounds. If you don’t want your code to introduce additional lowpass filtering, which can be very noticeable, you have to forget about FIR polynomial interpolators (linear and Lagrange) and use IIR ones instead (Catmull-Rom, allpass, Thiran, etc.). Polynomial interpolation is a very interesting and not that well-known area. Moreover, most of the documentation about it is quite confusing, or even misleading (Dattor… cough).

So, what I can tell you now is that the JUCE team is well aware of these issues, and I already have some classes in my code base to deal with all of that, but there were limits to what could be included in the first iteration of the DSP module, otherwise the deadline would never have been met. And you’ll see other iterations in the future for sure! We already have a lot of ideas, but I’m not in a position to speak for the JUCE team about that, since I’m a freelancer :wink:

And before that, I’m afraid you have to rely on your own delay code or on custom Oversampling classes to deal with all these issues. I think most of the people interested in the oversampling class already have their own delay class anyway. And as I said, simply adding a fractional delay inside the oversampling class would only be a partial solution, since some delay is necessary anyway when parallel paths exist in the plug-in itself. What I can say for now is that the choice of designs in the original oversampling class is not random, and for most uses it’s fine to use it as-is and to add a round(latency) where you need to report latency to the host. To do a better job, the only solution I would find acceptable would be a proper dsp::Delay class.


Yes please!

I’m adding my own, but the DSP module really should have one!


The class is on the develop branch for now; that’s why the documentation is not available on https://www.juce.com/doc/classes yet. You’ll find the new class there, along with an updated version of the DSP module plug-in demo showing how to use the dsp::Oversampling class :wink:

Thank you.

What’s the safest way to change the oversampling factor on the fly?

I saw this line in the updated demo:

processor.stereoParam->operator= (index);

Any particular reason for using that notation to assign the index?

In general, can you guys add waaaaay more documentation/commentary to that demo project’s source code, explaining why you’re doing the things you’re doing?

Hey :wink:

One thing I did in a project of mine was to create three Oversampling objects, because I had three “quality” options. I have an index variable that selects the current one. When the quality parameter changes, I update the index variable and then update everything by calling prepareToPlay(getSampleRate(), getBlockSize()) in my updateProcessing function, to set the new sample rate with the new oversampling factor.


I guess it’s in the plug-in editor :wink:

The thing is, the AudioProcessorParameter classes provide two values for a given parameter: a normalised float between 0 and 1, and another value which can be a boolean, an integer, a float or a double. So, basically, the developer has to deal with both a normalised value and a value with a given mapping.

And the AudioProcessorParameter classes in JUCE, available since JUCE 4.1 or 4.2, don’t give very explicit ways to access these different values. That’s why I used this notation: to be sure of what I was doing there (setting the mapped value, not the normalised one).

Anyway, these parameter classes are not meant to be used as-is; they are just examples of what should be done to handle parameters in a plug-in. Since JUCE 4.2, users are expected to create their own AudioProcessorParameter classes, inspired by the ones that are already there, mostly to find a way of handling parameter changes with a lambda function, for example.

Hey, is the oversampling class limited to a maximum of 16x oversampling, or can it use any power-of-two factor?

Hello!

As said in the documentation, the class is suitable for 2x, 4x, 8x and 16x oversampling only, mainly because it uses a multi-stage approach (2x for every stage) instead of a single stage, which would be less efficient for a lot of reasons.

The thing is, when I initialize the class with higher oversampling (32x, 64x or even 128x), it doesn’t appear to cause any problems, and seems to remove aliasing even better. What exactly makes it unsuitable for more than 16x?

Edit: at least when using the polyphase IIR.

Well, you should hit an assertion in debug mode :slight_smile:

Anyway, I’m not sure it really makes sense, whatever you are doing, to oversample 32x or higher in an audio effect! Are you still getting some aliasing below 32x? Have you set the “high quality” boolean in the Oversampling constructor arguments?

Yes, I do get (very subtle) amounts of aliasing below 32x, where higher factors remove it further; obviously, I’m doing some extreme testing. I do not get any exception when debugging with the polyphase IIR up to 256x oversampling. However, I do get an array access violation exception when using the FIR, but only when it reaches 128x oversampling; it’s fine before that.

Just to be clear, when I initialize the oversampling class using something like this:

oversampling = new dsp::Oversampling<float>(2, 4, dsp::Oversampling<float>::filterHalfBandPolyphaseIIR, false);

That means it’s using 2^4 = 16x oversampling, right?

High quality doesn’t seem to make much difference from my testing.


Also, I don’t know if this is possible, but is there some way to do intersample peak (ISP) detection with this while processing at the same time? The problem is that the downsampling filtering usually introduces new sample peaks. Before, I was using single-stage downsampling, so I could simply read the peak just after the anti-aliasing filter but before the decimation. However, I don’t know how to get the resulting intersample peak when there are multiple downsampling stages; do I have no option but to do a second round of oversampling just to get the ISP?

This access violation stuff is strange, but anyway, the jassert should do something, at least in debug mode, if you try to oversample 32 times or more!

When my PlotComponentDemo app is done, I will include something to test the oversampling functions as well, so I should be able to track down anything suspicious.

For the ISP detection, I’m afraid you have to do a second round of oversampling, or you could run a custom version of my class with the detection embedded inside…

You’re right, I do get an assertion; it turns out I was “debugging” a release build rather than a debug build. Still, if I just comment out that assertion, everything works fine with the IIR, and fine up to 64x with the FIR. Is there any particular reason I shouldn’t be doing this?

Also, it seems your oversampling class might not be suitable for ISP detection, or at least it seems to report different peak magnitudes than the intersample peaks detected with K-Meter.

I am already using a custom version of your class, but I’m not sure how (or whether it’s even possible) to embed detection while also accounting for the new peaks introduced during the multiple stages of downsampling. I’m not sure they can be detected without upsampling again after the downsampling.

Well, the thing is, I didn’t really try to optimize the way the filter coefficients are generated when the oversampling factor is high. And the most important reason I didn’t want people to use the class that way is that, to me, more than 16x oversampling is always overkill for solving any specific aliasing issue. Oversampling is not just about filtering away aliasing like hell; it also multiplies the CPU load of the embedded processing by the oversampling factor!

So, if you still have too much aliasing in a specific context even at 16x, maybe what you really need is a custom class inheriting from Oversampling, so you can customize the filter design functions in the constructor and make them more powerful.

About the ISP detection, your concern is more than relevant, and after some additional thought, I think I can say that embedding it inside the processing oversampling might not be the best thing to do in general, since multi-stage oversampling is what is widely used for 4x oversampling and above, for obvious reasons.

The ideal situation would have been to have a single-stage oversampling algorithm, with the detection made after the downsampling filtering but before the decimation itself, to take into account the effect of the filtering on the ISP.

But in general, I think most audio developers use multi-stage oversampling for obvious reasons, so the only practical and general solution is to do additional oversampling afterwards for the detection. That’s what happens, for example, if your meter is in a dedicated plug-in. Then, the question is how to perform that oversampling without the filtering making the result irrelevant… I guess I could find some bibliography on this specific issue; right now, unfortunately, I don’t have the answer…
