AudioRecordingDemo Paint Optimization


#1

The AudioRecordingDemo takes about 50% CPU on my modern MacBook Pro just to draw the live audio waveform, before any actual recording has started. Are there any steps that can be taken to optimize this?


#2

A couple of key questions…
Are you building release or debug?
Did you use a profiler to determine that figure? (If not, you should.)
If you did, did you determine where most of the time was being spent?


#3

Release. Yes, I used the Time Profiler built into Xcode. 82% of the DemoRunner app’s total CPU (which is 50-60% of system CPU according to Xcode and Activity Monitor) is spent in juce::AudioVisualiserComponent::paintChannel, which calls CoreGraphicsContext::fillPath, which calls CGContextDrawPath.


#4

Any ideas? Seems like JUCE is using best practices from Apple, so not sure what else to try. Maybe this is as good as it gets?
https://developer.apple.com/library/content/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html#//apple_ref/doc/uid/TP40010156-CH14-SW4


#5

Is the painting throttled at all? Maybe it’s just trying to update the audio waveform as often as it can.


#6

This is JUCE’s demo code, not mine, but it looks like it uses a juce::AudioVisualiserComponent, which repaints on a Timer at 60fps. Of course I tried lowering this value, but a smooth waveform needs at least 30fps, and that doesn’t save much CPU.
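
For reference, these are roughly the knobs I tried on AudioVisualiserComponent (the class name and values in this sketch are just illustrative, not the demo’s actual code):

```cpp
#include <juce_audio_utils/juce_audio_utils.h>

// Illustrative only: the settings that control how much work the visualiser does per frame.
struct TunedDisplay : public juce::AudioVisualiserComponent
{
    TunedDisplay() : juce::AudioVisualiserComponent (1)   // one input channel
    {
        setRepaintRate (30);       // repaint timer in Hz (the demo runs at 60)
        setBufferSize (128);       // number of averaged blocks kept in the scrolling history
        setSamplesPerBlock (256);  // incoming samples folded into each of those blocks
    }
};
```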

Is anyone from JUCE concerned about the CPU usage of this code? I’m wondering how, for instance, a DAW would simultaneously display 8+ of these waveforms during real-time recording. Perhaps this JUCE example is not a solution for that, and I need to use shaders or some other means?


#7

The demo was not written with performance in mind. The bottleneck is that it uses 2D paths to draw the waveform, which are always slow to render. You should consider rendering the waveform into a bitmap instead (filling in the pixels yourself with direct memory writes, for example) or, even better but much more complex, writing a GL shader to do the rendering for you.
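
Very roughly, the bitmap idea looks something like this (a minimal sketch, not taken from the demo: the class name, colours and the wrap-around column behaviour are made up, and a real version would feed the values from an audio-thread FIFO and repaint only the dirty region):

```cpp
#include <juce_gui_basics/juce_gui_basics.h>

// Draws a waveform by writing pixels directly into a juce::Image,
// then blitting that image in paint() instead of filling a Path.
class SimpleWaveformView : public juce::Component
{
public:
    SimpleWaveformView() { setOpaque (true); }

    void resized() override
    {
        // Recreate the backing image at the new size.
        image = juce::Image (juce::Image::RGB,
                             juce::jmax (1, getWidth()),
                             juce::jmax (1, getHeight()), true);
        nextColumn = 0;
    }

    // Call from the message thread with the min/max of the latest audio block.
    void pushColumn (float minSample, float maxSample)
    {
        if (! image.isValid())
            return;

        const int h = image.getHeight();
        juce::Image::BitmapData data (image, juce::Image::BitmapData::writeOnly);

        // Map [-1, 1] to pixel rows (top = +1, bottom = -1).
        const int top    = juce::jlimit (0, h - 1, (int) ((1.0f - maxSample) * 0.5f * (float) h));
        const int bottom = juce::jlimit (0, h - 1, (int) ((1.0f - minSample) * 0.5f * (float) h));

        for (int y = 0; y < h; ++y)
            data.setPixelColour (nextColumn, y,
                                 (y >= top && y <= bottom) ? juce::Colours::lightgreen
                                                           : juce::Colours::black);

        nextColumn = (nextColumn + 1) % image.getWidth();
        repaint();
    }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::black);
        g.drawImageAt (image, 0, 0);   // a single blit instead of filling a path
    }

private:
    juce::Image image;
    int nextColumn = 0;
};
```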


#8

Thank you. I’ve also discovered that JUCE_COREGRAPHICS_RENDER_WITH_MULTIPLE_PAINT_CALLS help greatly. With a single opaque AudioVisualiserComponent the entire parent component won’t be drawn… but that macro is necessary if multiple opaque AudioVisualiserComponent’s are used and it’s still not desired to have the parent repainted continuously.