Computing decibel values in an OpenGL shader - a good idea?

I’m currently building some OpenGL-accelerated real-time data plot components for various kinds of plots. Some of them should display dB-scaled results, but the source data is linearly scaled. Now I’m wondering whether computing the dB conversion in the vertex shader would yield a significant performance gain compared to computing the dB values on the CPU. I’m talking about approx. 1,000 to 30,000 dB values per rendered frame.

On the one hand, I’m pretty sure that std::log10 takes a significant number of CPU cycles, and I imagine that parallelizing this computation on a per-vertex basis could improve performance. On the other hand, I have no idea how well a GPU performs when computing logarithms, and above all whether there is even a general answer across the whole range of GPUs out there (from smartphones to desktop computers…)

So I’d like to pass this question to the forum - could this make sense?
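For concreteness, here is a minimal sketch of the two variants being weighed up (all names here are mine, not from the question). Note that GLSL has no built-in log10, so a shader would typically derive it from log2:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// CPU-side variant: convert a linear amplitude to dB before upload.
// The clamp avoids log10(0) == -inf for silent samples.
inline float toDecibels (float linear)
{
    const float minLinear = 1.0e-6f; // roughly a -120 dB floor
    return 20.0f * std::log10 (std::max (linear, minLinear));
}

// GPU-side variant: upload the raw linear values as a vertex attribute
// and do the same conversion per vertex. GLSL has no log10(), so it is
// built from log2(). This string is only illustrative, not a full shader.
static const char* vertexShaderSource = R"(
    attribute float linearValue;
    uniform float minLinear;

    float toDecibels (float v)
    {
        return 20.0 * (log2 (max (v, minLinear)) / log2 (10.0));
    }

    void main()
    {
        float db = toDecibels (linearValue);
        // ... map db to a y coordinate and set gl_Position
    }
)";
```

Either way the formula is the same 20 · log10(x); the only question is which processor runs it and how the data gets there.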


Try it and see?

lol let us know what you find :grin:

The only thing I can say for sure is that GPUs nowadays are very powerful, since they can run 3D games. But as a JUCE developer, most of the time I only display 2D information with them, so I do think that every single JUCE developer significantly underuses the GPU in general. At the same time, CPU optimization is a serious issue, and every single % of power you free up will be welcomed, for reasons musicians and sound engineers can talk about for hours.

So I would say that doing the dB calculation in the shader is probably not risky at all; instead, it could be a very good optimization, parallelizing the work on your GPU. So have a try, and tell us if you saved CPU cycles in a visible way :wink:

However, my own tests suggest that doing some math operations 60 times a second on the CPU is never really a big deal compared to processing 44,100 samples a second. The places where I really needed optimization in visualization algorithms in the past were most of the time in the painting function itself.

When you do these kinds of tests, knowing how to use a profiler properly is mandatory, much more so than when you optimize DSP algorithms on the audio thread, since the effect of changes there can be monitored easily, even with just the CPU meter of your DAW. For painting, you need to monitor the CPU power used by the whole process, including all its threads, and find where in your code the CPU is actually slowing down.


… and that is the most challenging part of it for me. As what I’m building at the moment only works in a non-trivial whole-application context, I find it hard to set up measurements that produce really meaningful and comparable results, especially since I have no experience with profiling shader performance.


I did some measurements with a GUI application that processes a 2048-point FFT on the real-time audio stream and displays an OpenGL-accelerated oscilloscope and spectrum analyzer, with the spectrum converted to dB.

My results from the Xcode Instruments profiler show that CPU usage drops by approx. 3% when computing the dB values on the GPU; those 3% were consumed by calls to log10f when computing the decibels on the CPU. On the other hand, I can see the average time for glDrawArrays in the OpenGL Profiler go up from 17.9 µs to 20.0 µs when computing them in the shader. It seems to me that nothing got worse, but I have to admit I’m not sure how meaningful those measurements really are :wink:

However, I think I’ll keep computing the dB values on the GPU for the more complex 3D plot I’m about to build next, as the number of data points to plot there will likely increase by a factor of 30.


Great to see that it is possible to calculate it on the GPU.

There are also ways to approximate the logarithm and compute four values in parallel with SSE2, for example. I don’t think you need a 100% accurate result for visualisation.
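For anyone curious what such an approximation looks like, here is a minimal scalar sketch of the classic bit-twiddling log2 estimate (an illustration of the general idea, not code from this thread). The float’s exponent field gives the integer part of log2 and the mantissa a linear guess at the fractional part, accurate to roughly ±0.5 dB, and the same trick maps directly onto SSE2 to process four floats at once:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

// Rough log2 via the IEEE 754 bit layout. Assumes x is a positive,
// normal float (no zero, denormals, or negatives). Exact at powers
// of two; maximum error of the linear mantissa term is about 0.086.
inline float fastLog2 (float x)
{
    std::uint32_t bits;
    std::memcpy (&bits, &x, sizeof (bits)); // safe type pun

    const float exponent = (float) ((std::int32_t) (bits >> 23) - 127);
    const float mantissa = (float) (bits & 0x7fffff) / (float) (1 << 23);
    return exponent + mantissa; // linearised log2(1 + m) ≈ m
}

inline float fastDecibels (float linear)
{
    // 20 * log10(x) = 20 * log2(x) / log2(10), and 1/log2(10) ≈ 0.30103
    return 20.0f * fastLog2 (linear) * 0.30103f;
}
```

The resulting error stays well under 1 dB, which is below what a plot a few hundred pixels tall can even display.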


Good point, I haven’t looked into log approximation strategies so far. Maybe I could even use those approaches on the GPU. You are definitely right that there is no need for maximum accuracy when it comes to visualisation.


Shader calculations are a great idea in theory. Sadly, it totally depends on the target GPU and its drivers. This might work really well on your machine but not on somebody else’s (e.g. a GTX 1080 versus an Intel HD Graphics 5000).

It’s important to test this on a few low-end devices to be sure it’s kosher.