I’m currently building some OpenGL-accelerated real-time-data plot components for various kinds of plots. Some of them should display dB-scaled results, but the source data is linearly scaled. Now I’m wondering whether computing the dB conversion in the vertex shader would yield a significant performance gain compared to computing the dB values on the CPU. I’m talking about approx. 1000 to 30000 dB values per rendered frame.
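For illustration, the per-vertex conversion I have in mind would look roughly like the sketch below (GLSL ES 3.0, since mobile GPUs are part of the question; the attribute/uniform names are placeholders, and I’m assuming amplitude data, i.e. dB = 20·log10(x), so the factor would be 10 for power quantities instead). Since GLSL has no built-in log10, it’s derived from log2:

```glsl
#version 300 es
precision highp float;

// Sketch only: attribute/uniform names are placeholders.
in float a_index;   // sample index (x axis)
in float a_value;   // linear amplitude sample (y axis)

uniform mat4  u_transform; // maps (index, dB) into clip space
uniform float u_floorDb;   // lower clamp for the plot's dB floor

// dB = 20 * log10(x); GLSL lacks log10, but log10(x) = log2(x) * log10(2)
const float DB_PER_LOG2 = 6.0205999; // 20 * log10(2)

void main() {
    // Guard against log2(0) = -inf before converting to dB
    float db = max(DB_PER_LOG2 * log2(max(a_value, 1e-30)), u_floorDb);
    gl_Position = u_transform * vec4(a_index, db, 0.0, 1.0);
}
```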
On the one hand, I’m fairly sure that std::log10 takes a significant number of CPU cycles, and I imagine that parallelizing this computation on a per-vertex basis could yield a performance gain. On the other hand, I have no idea how well a GPU performs when computing logarithms, and above all whether there is even a general answer that holds for the whole range of GPUs out there (from smartphones to desktop computers…).
So I’d like to put this question to the forum: could this make sense?