Update GUI quicker than audio thread (Oscilloscope)

Hi! I’m trying to build a simple oscilloscope. An issue was already opened about this, but I didn’t find a proper answer.

I just don’t understand how it is possible to make an accurate oscilloscope using JUCE, as the audio thread runs faster than the GUI thread.

I use juce::Timer to repaint my oscilloscope at 60 Hz.

I understand that the main idea is for the GUI to fetch audio data (samples) from the audioProcessor and then draw, but my GUI will always miss some of that data because it isn’t drawing fast enough. I end up with an unstable and buggy waveform…

I found some unsuccessful solutions:

  • Create really big buffers to store the audio data
    It kind of works, but my computer drops to 10 fps.

  • Skip over some audio data to reach the end of the buffer quicker
    It kind of works if I skip a lot of data, but then my waveform is not accurate at all.

Should I use OpenGL to draw the data quicker?

Could someone please explain the correct method to display a stable and accurate waveform with JUCE?

Thank you! :slight_smile:

It is neither necessary nor possible to display every sample. Audio consists of periodically repeating waveforms.

A real oscilloscope has a “trigger”, which is usually set to a zero crossing with a rising signal. The electron beam waits until the trigger condition is met and only then draws.

In a GUI application we want to draw immediately, we don’t want to wait.

One possible solution is to put the samples into a circular buffer.
When the paint call happens, you go backwards from the write head by as many samples as fit on the screen. Then, to make it more stable, you move even further backwards until you find your trigger position, i.e. a zero crossing with a rising slope (or, since you are looking backwards, a descending slope).

Now you have found the start sample to draw.

If you want to improve it further, you can compare the found starting point with the previous drawing (e.g. via cross correlation). If the waveform to draw is too different, go further backwards to see if there is a trigger point that fits better.
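Here is a minimal sketch of that backward search, assuming a power-of-two circular buffer and a monotonically increasing write position; the names (ScopeBuffer, findTriggerStart) are made up for illustration:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Circular buffer written by the audio thread. The size must be a power of
// two so that masking with (size - 1) maps an absolute index into the buffer.
struct ScopeBuffer
{
    std::vector<float> data = std::vector<float> (1 << 15);
    std::atomic<std::size_t> writePos { 0 }; // total samples written so far

    float at (std::size_t absoluteIndex) const
    {
        return data[absoluteIndex & (data.size() - 1)];
    }
};

// Search backwards from the write head for a rising zero crossing.
// Returns the absolute index of the first sample to draw. Assumes the buffer
// has been filled at least once, and that maxSearch plus numSamplesToDraw
// still fit inside the buffer.
std::size_t findTriggerStart (const ScopeBuffer& buf,
                              std::size_t numSamplesToDraw,
                              std::size_t maxSearch)
{
    const auto head  = buf.writePos.load (std::memory_order_acquire);
    auto       start = head - numSamplesToDraw; // newest window that fits on screen

    for (std::size_t i = 0; i < maxSearch && start > 0; ++i, --start)
        if (buf.at (start - 1) < 0.0f && buf.at (start) >= 0.0f) // rising slope
            return start;

    return head - numSamplesToDraw; // no trigger found: draw the newest window
}
```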


Drawing more quickly doesn’t help, as you can’t see more quickly anyway. The trick is to condition the audio in a way that makes sense to look at. That’s basically what the trigger does.

I think a very basic approach would be this:

  • Allocate a buffer with enough space for as many samples as you want to draw
  • In the audio process, you need a trigger mechanism, which is initialized in an “armed” state, in which it waits for a trigger condition (e.g. a rising edge above some threshold). When the edge is detected, the trigger switches to a “recording” state and starts recording the audio into the buffer.
  • When the buffer is full, set the trigger to a “waiting” state, in which the audio thread does nothing
  • Have a timer that periodically checks the trigger’s current state (which should be an atomic). When it’s in the “waiting” state, trigger a repaint.
  • In the paint method, read what’s in the trigger’s buffer and draw it. Afterwards, reset the trigger state back to “armed”.

That’s how I would approach it, then see if that’s enough. Typically you’ll want to add some more details to the trigger mechanism, but in essence you’ll always have a buffer to record into and draw from, and some algorithm that decides when to start overwriting the buffer again.
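A minimal sketch of that state machine, using plain std::atomic; the names (CaptureScope, pushSample) and the threshold value are illustrative:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

enum class TriggerState { armed, recording, waiting };

struct CaptureScope
{
    std::vector<float> capture = std::vector<float> (1024); // one screenful
    std::size_t writeIndex = 0;
    std::atomic<TriggerState> state { TriggerState::armed };
    float threshold  = 0.1f;
    float lastSample = 0.0f;

    // Called from the audio thread for every sample.
    void pushSample (float s)
    {
        switch (state.load (std::memory_order_acquire))
        {
            case TriggerState::armed:
                // Rising edge through the threshold starts a capture.
                if (lastSample < threshold && s >= threshold)
                {
                    writeIndex = 0;
                    capture[writeIndex++] = s;
                    state.store (TriggerState::recording, std::memory_order_release);
                }
                break;

            case TriggerState::recording:
                capture[writeIndex++] = s;
                if (writeIndex == capture.size())
                    state.store (TriggerState::waiting, std::memory_order_release);
                break;

            case TriggerState::waiting:
                break; // the UI owns the buffer until it re-arms the trigger
        }
        lastSample = s;
    }
};

// GUI side: a timer callback checks for the waiting state and calls repaint();
// paint() reads `capture`, then stores TriggerState::armed to re-arm.
```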

If you want to add some more “analog” feel, you can draw to a memory image with some exponential decay as soon as the buffer is full: multiply all pixels of the image by some number below 1.0, and add the new pixels to that decayed picture. But you’ll likely have to do some careful thinking to get the timing and the process right.
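For the decay idea, here’s a rough sketch on a plain float framebuffer (how you blit that into a juce::Image is up to you; all names here are made up):

```cpp
#include <cstddef>
#include <vector>

struct PhosphorDisplay
{
    int width = 512, height = 256;
    std::vector<float> pixels = std::vector<float> ((std::size_t) width * height, 0.0f);
    float decay = 0.85f; // below 1.0; controls how quickly old traces fade

    void newFrame()
    {
        for (auto& p : pixels) // fade everything drawn so far
            p *= decay;
    }

    void plot (int x, int y, float intensity)
    {
        pixels[(std::size_t) y * width + x] += intensity; // new trace on top of the fading image
    }
};
```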


The drawback of that approach is that you are writing to the trigger without knowing whether a draw will occur or not.
In addition, you need to make sure you are never writing to the buffer while reading from it.

In contrast, the circular buffer approach I outlined above is thread safe, as long as it is large enough to hold the amount you need to draw (plus a little extra for the backward search) plus the amount you are going to add while reading for the drawing. Just make it generous; RAM is cheap and it is not much.

If you are wary of the backward search (the cache would prefer you to search forward), you can store trigger points in an additional circular buffer and use those.
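As a sketch, the audio thread can record the rising zero crossings while it writes, so the UI never has to scan the sample buffer; names and sizes are illustrative, and both buffers should be generously sized powers of two:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

struct TriggeredRing
{
    std::vector<float>       samples  = std::vector<float> (1 << 16);
    std::vector<std::size_t> triggers = std::vector<std::size_t> (256);
    std::atomic<std::size_t> writePos    { 0 };
    std::atomic<std::size_t> numTriggers { 0 };
    float lastSample = 0.0f;

    // Audio thread: store the sample and remember rising zero crossings.
    void push (float s)
    {
        const auto pos = writePos.load (std::memory_order_relaxed);
        samples[pos & (samples.size() - 1)] = s;

        if (lastSample < 0.0f && s >= 0.0f)
        {
            const auto t = numTriggers.load (std::memory_order_relaxed);
            triggers[t & (triggers.size() - 1)] = pos;
            numTriggers.store (t + 1, std::memory_order_release);
        }

        lastSample = s;
        writePos.store (pos + 1, std::memory_order_release);
    }
};

// UI thread: load numTriggers (acquire), take the most recent entry in
// `triggers`, and draw the window that starts at that absolute index.
```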

It’s also thread safe, as the trigger is only armed (atomically) after the content is drawn. The audio thread only writes when in the “recording” state, and the UI thread only reads when in the “waiting” state.

However, a problem may indeed be that new stuff will only be recorded after the old stuff is drawn, so what’s drawn may not be the most “current”. But it’s at most one draw cycle too old, so with a reasonable framerate that shouldn’t be a huge problem. Typically, most scopes have a “hold time” knob anyway to wait for a while before going back to armed state, because you normally want the display to update slowly enough so you have time to get some meaningful information out of it.

Thank you very much!

I was able to get a nice stable waveform with all your explanations.

If I understand correctly, the trigger defines when we should start drawing, and most of the time we use the zero-crossing rule for the trigger.

I wonder, is that zero-crossing rule really precise? I mean that if I am playing one note on a synth that produces simple waveforms, the oscilloscope gets it, but if I am playing more than 2 notes at the same time, the display starts squeezing a lot. I guess if the waveform is too complicated, the zero-crossing rule cannot really catch it.

I know there are tools like “hold time”, or maybe changing the trigger level to see what’s going on, but are there more precise trigger rules to capture more complex waveforms?

I recommend looking at a real oscilloscope for some inspiration. Normally using the zero crossings doesn’t work very well due to noise and ringing. Most scopes have different modes for triggering (e.g. rising or falling edge), as well as a trigger threshold. So they’d only trigger when a certain level is crossed upwards or downwards. And typically when measuring something you’ll need to fiddle about with the level a bit to get a good picture. Getting it always right for every possible signal without that kind of user interaction is very hard.
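As a tiny illustrative helper, those trigger modes boil down to something like this:

```cpp
// Rising or falling edge through a user-set threshold; purely illustrative.
enum class Edge { rising, falling };

inline bool triggerFired (float previous, float current, float threshold, Edge edge)
{
    return edge == Edge::rising ? (previous <  threshold && current >= threshold)
                                : (previous >= threshold && current <  threshold);
}
```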

If you know something about the signal, you can select a better approach.
The naive approach I outlined first works well enough for monophonic signals, where you usually (but not necessarily) get only one zero crossing per cycle of the fundamental.

With a polyphonic signal there are multiple zero crossings to choose from, so the frames you are drawing are no longer coherent.

That is why I wrote in my last paragraph that it works better if you additionally check the correlation of the signal with the last drawn block.

So the implementation is:

have a circular buffer for the ongoing signal
have a buffer for the last drawn block

when the video timer wants to draw:

  • go back twice the number of samples you need to draw, and start searching from there
  • when you find a zero crossing with the correct slope, calculate the correlation with the previously drawn block
  • if the correlation is below a certain threshold, skip it and go on to the next zero crossing

A better approach is to calculate the correlation at all possible zero crossings and pick the best.
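A minimal sketch of that “pick the best” step, assuming you have already collected the candidate zero crossings and kept the previously drawn block around (all names are illustrative):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Raw cross correlation between a candidate window and the block drawn last
// frame (both numSamples long). Higher means more similar.
float correlate (const float* candidate, const float* lastDrawn, std::size_t numSamples)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < numSamples; ++i)
        sum += candidate[i] * lastDrawn[i];
    return sum;
}

// Score every candidate trigger point and return the one that matches best.
// `signal` is the linearised recent history; `zeroCrossings` holds indices
// into it and must not be empty.
std::size_t pickBestTrigger (const std::vector<std::size_t>& zeroCrossings,
                             const std::vector<float>& signal,
                             const std::vector<float>& lastDrawn,
                             std::size_t numSamplesToDraw)
{
    auto  best      = zeroCrossings.front();
    float bestScore = std::numeric_limits<float>::lowest();

    for (auto zc : zeroCrossings)
    {
        if (zc + numSamplesToDraw > signal.size())
            continue; // window would run past the recorded history

        const float score = correlate (signal.data() + zc, lastDrawn.data(), numSamplesToDraw);
        if (score > bestScore) { bestScore = score; best = zc; }
    }
    return best;
}
```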

The trigger hold is useful if you want to observe a short signal and keep it frozen on the screen. I don’t think that’s what a musician would need; it’s rather for an electronics engineer. But you might implement it as a separate mode, which could be a good idea.

I think JUCE should incorporate a circular buffer internally that is automatically fed with the audio and is easily accessible from the editor.

That would save us a lot of headaches. If you need an oscilloscope, a spectrogram, a level meter, or a signal detector, all you would have to do is call getAudioBuffer with the desired size, and it would return a copy of the sample block preceding the write cursor of the circular buffer.

I wouldn’t want the circular buffer there by default, since you don’t know what you want to visualise. The signal at the input? The signal at the output? Some trigger signal?

You can use a third party library like my foleys_gui_magic, where this is all included, BSD 3-clause licensed.

Or I would probably start writing my own classes (if I didn’t already) and grow with it, as it is also a great learning experience.

The idea is to have the last few (or many) seconds of input and/or output audio accessible anywhere in the editor without worrying about threads, data, or pointers. This data could be used for analysis, display, or saving, without restrictions. It is basic and useful enough in a multitude of circumstances for it to be included.

Many thanks for all the answers!

I’ll try with the slope technique to see what I can get.

My goal is rather to display a smooth waveform no matter its complexity. I was thinking of smoothing all incoming values relative to the preceding ones, just for fun!

That’s just a low-pass filter, so you will lose all the details. The best (but hardest) solution is to use auto-correlation: you look through the signal and find the closest match to the currently displayed signal. The closest match is displayed and used for comparison in the next display frame.

I did this by taking into account the fundamental frequency of the sound to adjust the start of the blocks taken. If the sound is 100 Hz, the cycle repeats every sampleRate / 100 samples, so the update rate does not matter; it is only necessary that the drawn block starts at the first sample of a cycle.
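A small sketch of that idea, assuming you already know the fundamental (e.g. from the note being played); the function name is made up:

```cpp
#include <cmath>
#include <cstddef>

// With f0 = 100 Hz at 48 kHz, one cycle lasts 480 samples. Snapping the start
// of the drawn block to a multiple of the period keeps the phase identical
// from frame to frame, so the waveform appears to stand still.
std::size_t cycleAlignedStart (std::size_t writeHead, double sampleRate,
                               double fundamentalHz, std::size_t numSamplesToDraw)
{
    const auto period = (std::size_t) std::llround (sampleRate / fundamentalHz);
    const auto start  = writeHead - numSamplesToDraw; // assumes writeHead is large enough
    return start - (start % period); // back up to the start of a cycle
}
```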

This only works with individual notes; I have no idea how to do it for combinations of notes, although why would we want to visualize chords? The resulting wave would be too chaotic for the visualization to be useful. So it seems enough for me to see the individual sound of each note.