How to optimise

Hello, over the last couple of days I’ve been making an oversampler so that I can apply non-linear effects. When testing, the spectrum analyser in Studio One shows that it’s working as intended. However, Studio One reports that I’m using 10% CPU for this process, which strikes me as rather too much. How do people optimise their code?

My process is: copy the input buffer into a longer buffer, inserting zeros for the extra samples; filter to Nyquist using IIRFilter; apply the non-linear process; filter to Nyquist using IIRFilter again; then copy every original sample location back into the input buffer.
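In code, that pipeline looks roughly like this (heavily simplified: a single biquad low-pass stands in for my actual IIRFilter, tanh stands in for the non-linear process, and the filters would really be persistent members rather than locals):

```cpp
#include <cmath>
#include <vector>

// Simplified stand-in for the real filters: a single RBJ biquad low-pass.
// (In practice the filters must be persistent so their state carries across
// blocks; they're local below only to keep the sketch short.)
struct Biquad
{
    float b0 = 1, b1 = 0, b2 = 0, a1 = 0, a2 = 0;
    float z1 = 0, z2 = 0;

    static Biquad lowPass (double sampleRate, double cutoff)
    {
        constexpr double pi = 3.141592653589793;
        const double q     = 0.70710678;                    // Butterworth Q
        const double w     = 2.0 * pi * cutoff / sampleRate;
        const double alpha = std::sin (w) / (2.0 * q);
        const double cosw  = std::cos (w);
        const double a0    = 1.0 + alpha;

        Biquad f;
        f.b0 = float ((1.0 - cosw) * 0.5 / a0);
        f.b1 = float ((1.0 - cosw) / a0);
        f.b2 = f.b0;
        f.a1 = float (-2.0 * cosw / a0);
        f.a2 = float ((1.0 - alpha) / a0);
        return f;
    }

    float process (float x) noexcept   // transposed direct form II
    {
        const float y = b0 * x + z1;
        z1 = b1 * x - a1 * y + z2;
        z2 = b2 * x - a2 * y;
        return y;
    }
};

void oversampledProcess (float* buffer, int numSamples, double sampleRate, int factor)
{
    const double cutoff = sampleRate * 0.49;   // just below the original Nyquist
    Biquad up   = Biquad::lowPass (sampleRate * factor, cutoff);
    Biquad down = Biquad::lowPass (sampleRate * factor, cutoff);

    std::vector<float> oversampled ((size_t) (numSamples * factor), 0.0f);

    // Zero-stuff: copy each input sample, leaving zeros in between. Scaling
    // by 'factor' restores the passband gain lost by inserting the zeros.
    for (int i = 0; i < numSamples; ++i)
        oversampled[(size_t) (i * factor)] = buffer[i] * (float) factor;

    for (auto& s : oversampled)
    {
        s = up.process (s);     // remove the images above the original Nyquist
        s = std::tanh (s);      // the non-linear process (tanh as a placeholder)
        s = down.process (s);   // remove the harmonics the shaper added up there
    }

    // Copy every original sample location back into the input buffer.
    for (int i = 0; i < numSamples; ++i)
        buffer[i] = oversampled[(size_t) (i * factor)];
}
```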

Am I doing anything horribly wrong here?

Edit: I’m upsampling by 8x; is this too much for a plugin? I shaved it down to 7% with 4x (but that still seems like a lot, seeing as the most intensive plugins run at under 4% on my machine).

Stupid question: are you sure it’s the oversampler itself that takes too much CPU, and not your non-linear process, which is now 8 times more CPU-hungry? To me the first obvious thing to do would be to optimize what you do inside the oversampler :wink:

What I do is profile the plugin itself by wrapping the DSP code in Python and running it through any profiler.

Well, I’ve used that process without oversampling (along with an extra filter) before and it reads at 0%, so although I’m not completely sure, I doubt that’s the issue here.

Thanks, I’ll look into Python as I have never used it before.

If you can have a standalone app that just runs your processing on some data, then that works as well.
I just use Python because of all the scientific tools that are available and that can ease your pain (and the simple wrapping as well!).
For instance, all of ATK has a Python layer, so I can assemble and test my pipeline in Python and profile it there before releasing a C++ version.
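A standalone harness really doesn’t need much. Something along these lines (with processAudio() standing in for whatever DSP you want to measure) is enough to run under Instruments, perf or the VS profiler:

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

// Stand-in for the real DSP under test; replace with your processing call.
void processAudio (float* buffer, int numSamples)
{
    for (int i = 0; i < numSamples; ++i)
        buffer[i] = std::tanh (buffer[i]);
}

int main()
{
    constexpr int    blockSize  = 512;
    constexpr int    numBlocks  = 100000;   // enough work for the profiler to see
    constexpr double sampleRate = 44100.0;

    std::vector<float> block ((size_t) blockSize, 0.1f);

    const auto start = std::chrono::steady_clock::now();

    for (int i = 0; i < numBlocks; ++i)
        processAudio (block.data(), blockSize);

    const std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;

    // Rough CPU figure: processing time vs. the real time that much audio covers.
    const double audioSeconds = blockSize * double (numBlocks) / sampleRate;
    std::printf ("%.2f%% of real time\n", 100.0 * elapsed.count() / audioSeconds);
}
```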

Well, for profiling you can use the standard debugger + profiler in Xcode and VS as well; I do that all the time.

Definitely profile. It will tell you more than anything else. But make sure you profile a release build: using audio sample buffers in debug builds adds a bunch of overhead, so it’s always a good rule of thumb to profile on release. You can set things up in Xcode / VS so that you still get symbols in release builds as well.

Is it possible in JUCE to have different methods for when playback is happening versus when the track is being rendered? I’ve got 4x oversampling down to 3% CPU. I can live with this for now, but if I could use only 2x during playback and larger factors when rendering audio offline, that would be ideal for keeping the detail only where it really matters.

AudioProcessor::isNonRealtime()?
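For example (a sketch rather than a complete plugin; processWithOversampling() is just an illustrative name for however the processing is wired up):

```cpp
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    // Pick a cheaper factor for live playback, a higher one for offline bounces.
    const int factor = isNonRealtime() ? 8 : 2;
    processWithOversampling (buffer, factor);
}
```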

Thanks, that looks perfect.

Be careful with this one. In my experience barely any of the DAWs actually set the flag properly :unamused:

Probably because no one calls process offline anymore?
Having two different ways of computing something (online and offline) may lead to audibly different results, so I’m not sure doing that without the user knowing is a good idea.

A lot of people making assets for games use offline processing, particularly with batch export tools. But yeah, I agree that you don’t really want the two results to sound different.

The main thing this is used for is streaming samples. In real time it’s not OK to block, so it’s often better to just drop a few samples if they haven’t been cached yet. When rendering, however, you can’t do this: the render will tear along as fast as the CPU allows, and you need to block if your hard disk hasn’t caught up and loaded the samples you need.
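In (hypothetical) code the pattern is something like this, with SampleStream standing in for whatever does the disk I/O:

```cpp
#include <algorithm>

// Hypothetical streaming source, just to make the pattern concrete.
struct SampleStream
{
    bool isReady (int numSamples) const;    // are the next samples cached?
    void waitUntilReady (int numSamples);   // block until the disk catches up
    void read (float* dest, int numSamples);
};

void fillFromStream (SampleStream& stream, float* dest, int numSamples, bool isOfflineRender)
{
    if (! stream.isReady (numSamples))
    {
        if (isOfflineRender)
        {
            stream.waitUntilReady (numSamples);   // rendering: blocking is fine, and required
        }
        else
        {
            std::fill (dest, dest + numSamples, 0.0f);   // real time: drop, never block
            return;
        }
    }

    stream.read (dest, numSamples);
}
```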

I don’t know how many hosts set this flag, but we’ve been contacted by sample-heavy plugin developers to make sure Waveform is setting it correctly, so I assume they’ve done the same with other DAWs.