What is the best way to draw a Spectrogram using JUCE painting calls?

Hi! I am planning to add a rolling spectrogram to one of my plugins. To make it clear, it is something like the image below, copied from a paper.

I know how to transfer the data from the audio thread to a background thread and how to compute the forward FFT. The problem is how to choose an approach that keeps the load on the message thread low:

  • should I draw each horizontal line as hundreds of small rectangles, or as a single gradient?
  • should I cache the image and only update the newest line?

In general, I would assume a structure like this:

std::mutex mutex;

void backgroundThreadRun()
{
    fft.forward();

    const std::scoped_lock lock (mutex);
    // update the shared drawing state: images / gradients / rectangle lists
}

void paint (juce::Graphics& g) override
{
    // try to lock without blocking the message thread
    const std::unique_lock lock (mutex, std::try_to_lock);

    if (lock.owns_lock())
    {
        // the lock was acquired, so do the actual drawing here
    }
}

I notice that dRowAudio_Sonogram.cpp (module/dRowAudio/gui in drowaudio/drowaudio on GitHub, commit 1d6e9ef) uses the cached image + rectangles approach. There are also some discussions here: Best method for sonogram display?

If you are using OpenGL you could use a 2D texture, successively update single lines of that texture (wrapping the write position modulo the texture width) and render the texture so that the newest line is always at the right edge of your frame. It works nicely and requires very little CPU/GPU.


Thanks. I would rather stay with JUCE painting calls, because there are several other elements that would take me forever to translate to OpenGL. Your idea seems close to the cached image + gradient approach. I might try it first :slight_smile:

You could just render the colormap to a juce::Image in memory (at one pixel per time-frequency bin; of course it makes sense to cache the already colour-mapped frames) and then draw it with a transform, which will interpolate for you (you basically get the gradients for free). But you should check on your target platforms whether it behaves as expected. The actual interpolation will (I think) be done by CoreGraphics / Direct2D and thus be pretty fast. I guess it might even be more efficient than drawing hundreds of gradients, even with a software renderer.
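For reference, the cached-image-plus-transform drawing could look roughly like this. This is only a sketch: `spectrogramImage` is an assumed member holding the cached one-pixel-per-bin juce::Image, and whether the high-quality resampling path is actually taken depends on the renderer in use.

```cpp
void paint (juce::Graphics& g) override
{
    // Let the renderer interpolate the small image up to the component size.
    g.setImageResamplingQuality (juce::Graphics::highResamplingQuality);

    g.drawImageTransformed (spectrogramImage,
                            juce::AffineTransform::scale (
                                (float) getWidth()  / (float) spectrogramImage.getWidth(),
                                (float) getHeight() / (float) spectrogramImage.getHeight()));
}
```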

Plan some extra time to test this on Pro Tools (macOS). It does some weird things that I’ve seen no other host do.

YMMV, I used this trick to some extent and it works well enough for those cases, but I haven’t fully investigated the ins and outs of this approach.


Sure. I will try several approaches to see which is more reliable and consumes less CPU. I will definitely use a cached juce::Image. One pixel per bin + a transform seems like a good idea, but it might not work well for a log-scale frequency axis? Technically I could also reduce the number of gradients using some explicit interpolation/smoothing methods. I will report back once I have a practical implementation.

And thanks for the suggestions regarding Pro Tools. Somehow I messed up the Pro Tools installation on my M-chip MacBook :slight_smile: I should definitely contact customer support sooner rather than later.

Updating an image in real time might not be very efficient in Direct2D though, as the data has to go back and forth between RAM and VRAM. Maybe drawing image strips of a given width would be more efficient?


That’s a good point! I need to be more careful about this. Perhaps I could use a FIFO to pass image strips from a background thread to paint()? I will try that.
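One way to sketch that hand-off, assuming a single background producer and paint() as the single consumer: pass only strip *indices* through a lock-free FIFO, with the strip images themselves living in pre-allocated slots. All names here are made up, and in JUCE itself juce::AbstractFifo provides this kind of index bookkeeping ready-made.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Single-producer/single-consumer FIFO of strip indices. The background
// thread renders into strip slot `i` and then pushes `i`; paint() pops
// indices and blits those strips into the cached image.
template <std::size_t Capacity>
class StripIndexFifo
{
public:
    bool push (int stripIndex)            // called on the background thread
    {
        const auto w    = writePos.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;

        if (next == readPos.load (std::memory_order_acquire))
            return false;                 // full: caller decides to drop or retry

        slots[w] = stripIndex;
        writePos.store (next, std::memory_order_release);
        return true;
    }

    bool pop (int& stripIndex)            // called from paint()
    {
        const auto r = readPos.load (std::memory_order_relaxed);

        if (r == writePos.load (std::memory_order_acquire))
            return false;                 // empty

        stripIndex = slots[r];
        readPos.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<int, Capacity> slots {};
    std::atomic<std::size_t> writePos { 0 }, readPos { 0 };
};
```
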

There’s a saying going around about premature optimization. My advice would be: build something simple that works first, then test and measure to find out whether you actually have a problem that needs solving.

The trick I mentioned doesn’t work with log scaling. For that, it may be a good solution to pre-warp the spectral data before turning it into an image.
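The pre-warp could be as simple as a lookup table mapping each output row to an FFT bin, built once and reused per frame, so the cached image can still be drawn with a plain linear scale transform afterwards. A self-contained sketch; the function and parameter names are my own, and the nearest-bin choice is an assumption (interpolating between the two neighbouring bins would look smoother):

```cpp
#include <cmath>
#include <vector>

// Build a lookup table that maps each output image row to the FFT bin whose
// centre frequency is nearest to the log-spaced target frequency of that row.
std::vector<int> makeLogBinLookup (int numOutputRows, int fftSize,
                                   double sampleRate, double minHz, double maxHz)
{
    std::vector<int> lookup ((size_t) numOutputRows);
    const double binWidth = sampleRate / fftSize;   // Hz per FFT bin

    for (int row = 0; row < numOutputRows; ++row)
    {
        // Interpolate exponentially between minHz and maxHz...
        const double t  = numOutputRows > 1 ? (double) row / (numOutputRows - 1) : 0.0;
        const double hz = minHz * std::pow (maxHz / minHz, t);

        // ...and pick the nearest FFT bin.
        lookup[(size_t) row] = (int) std::lround (hz / binWidth);
    }

    return lookup;
}
```
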


I isolated and simplified my OpenGL spectrum display; maybe it’s useful:
