Best way to draw audio buffers

I was wondering if anybody here had any advice, in terms of general strategies, data structures, etc., on the best approach to drawing audio buffers: performance (fast draw/redraw), accuracy (i.e. no aliasing), and zooming (zooming in and out quickly). This is implemented in 99% of audio applications out there, so I was wondering how people do it so fast. My approaches have generally been somewhat satisfactory, but they’ve been much slower than commercial apps.

Any advice would be greatly appreciated.

Someone else just asked this on a thread this week - I suggested using drawVerticalLine as the best solution.

I’ll give that a go – I was thinking about experimenting with low-pass filtering the audio, segmented image buffers, multithreading, etc. But it looks like my first approach was pretty much OK, since it’s similar to your JUCE demo audio code. I just average n samples per pixel (based on zoom level) and plot that. But I wasn’t using drawVerticalLine, so that might be the key to the speed… Thanks!
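One common refinement of the samples-per-pixel idea is to reduce each pixel’s bucket to a (min, max) pair rather than a single average, then draw a vertical line spanning that range per pixel. Here’s a minimal, self-contained sketch of that reduction (plain C++, not the JUCE API; `computePeaks` is a hypothetical helper name):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// For each horizontal pixel, reduce its bucket of samples to a (min, max)
// pair. Drawing a vertical line from min to max per pixel preserves peaks
// that a single averaged value would smear away.
std::vector<std::pair<float, float>> computePeaks (const std::vector<float>& samples,
                                                   std::size_t numPixels)
{
    std::vector<std::pair<float, float>> peaks;
    if (samples.empty() || numPixels == 0)
        return peaks;

    // Bucket size depends on the zoom level (more samples per pixel = zoomed out).
    const std::size_t samplesPerPixel = std::max<std::size_t> (1, samples.size() / numPixels);

    for (std::size_t px = 0; px < numPixels; ++px)
    {
        const std::size_t start = px * samplesPerPixel;
        if (start >= samples.size())
            break;
        const std::size_t end = std::min (samples.size(), start + samplesPerPixel);

        float lo = samples[start], hi = samples[start];
        for (std::size_t i = start + 1; i < end; ++i)
        {
            lo = std::min (lo, samples[i]);
            hi = std::max (hi, samples[i]);
        }
        peaks.emplace_back (lo, hi);
    }
    return peaks;
}
```

Each resulting pair maps directly onto one drawVerticalLine call per pixel column.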

I guess for zoom levels, I’ll just switch to a different, interpolating drawing mode when zooming down to the individual-sample level.

Did you see the AudioThumbnail class…? It kind of does all this for you…

Um. No I did not see that class. I just checked it out and got it working in a jiffy – literally 5 min. Damn it.

I am not entirely sad I spent many long hours trying to figure this out as it was good experience, but your code is much more sophisticated than mine and does exactly what I was trying to do (and was close to figuring out, mind you). But I wish I had read this reply sooner… I knew the key was caching and multiple threads.


Now I need to figure this out for spectral representations – you don’t happen to have a class that does that too, do you? If not, I can study your AudioThumbnail class and reproduce the approach with FFT data.
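For a spectral thumbnail, each column is the magnitude spectrum of one windowed frame. A minimal, self-contained sketch of computing one such column (using a naive DFT for clarity – a real implementation would use an FFT and cache columns the same way a waveform thumbnail caches peaks; `dftMagnitudes` is a hypothetical name):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One column of a spectrogram: magnitudes of a naive DFT over a
// Hann-windowed frame. O(n^2), so only suitable as an illustration;
// swap in a proper FFT for real use.
std::vector<float> dftMagnitudes (const std::vector<float>& frame)
{
    const std::size_t n = frame.size();
    std::vector<float> mags (n / 2 + 1);
    const double twoPi = 6.283185307179586;

    for (std::size_t k = 0; k < mags.size(); ++k)
    {
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t)
        {
            // Hann window reduces spectral leakage between bins.
            const double w = 0.5 - 0.5 * std::cos (twoPi * t / n);
            re += w * frame[t] * std::cos (twoPi * k * t / n);
            im -= w * frame[t] * std::sin (twoPi * k * t / n);
        }
        mags[k] = (float) std::sqrt (re * re + im * im);
    }
    return mags;
}
```

Zooming then works on the time axis just like the waveform case: average or max-reduce several columns into one pixel column when zoomed out.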


No, sorry, haven’t got a spectral thumbnail class!

Could you alter the AudioThumbnail class to accept an AudioFormatReader, so that it can take an AudioSubsectionReader too? I use a lot of AudioSubsectionReaders in my projects, and there’s currently no way to draw those subsections.
Also, it would be useful if AudioThumbnail could take an AudioSource as input. I apply automation to the source, so I need the waveform to display the audio after the automation has been applied.

Well, not really - it’s important that the thumbnail can create and delete readers itself - that’s why it uses an InputSource rather than taking a reader directly.

I suppose it’d be possible to give it some kind of virtual method that creates the reader from an input stream, so you could override that to do your custom stuff…
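The virtual-factory idea could be sketched like this: the thumbnail still owns the lifetime of its readers, but a subclass decides how each one is built, so it could wrap the result in a subsection adapter. All class names here are hypothetical stand-ins for illustration, not the real JUCE API:

```cpp
#include <memory>

// Minimal reader interface; stands in for an audio format reader.
struct Reader
{
    virtual ~Reader() = default;
    virtual long length() const { return 1000; }
};

// Adapter exposing only a subsection of another reader.
struct SubsectionReader : Reader
{
    SubsectionReader (std::unique_ptr<Reader> src, long start, long len)
        : source (std::move (src)), startSample (start), numSamples (len) {}

    long length() const override { return numSamples; }

    std::unique_ptr<Reader> source;
    long startSample, numSamples;
};

struct Thumbnail
{
    virtual ~Thumbnail() = default;

    // Override this to customise how readers are built; the thumbnail
    // still creates and destroys them whenever it needs to.
    virtual std::unique_ptr<Reader> createReader()
    {
        return std::make_unique<Reader>();
    }

    long scan() { return createReader()->length(); }
};

struct SubsectionThumbnail : Thumbnail
{
    std::unique_ptr<Reader> createReader() override
    {
        // Hypothetical subsection: 250 samples starting at sample 100.
        return std::make_unique<SubsectionReader> (std::make_unique<Reader>(), 100, 250);
    }
};
```

The base class keeps full control of when readers exist; the subclass only customises construction.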

Hi Jules,

I’m using a WAV file and an Ogg file containing the same audio. I’m using AudioFormatReader to get the samples for display. The paint method is exactly the same for both types of file, but there’s a big difference in the waveform.

Do I need to make some adjustment to the sample values coming out of the Ogg format reader?

The waveform displayed with the WAV format reader looks fine.


Well, obviously a compressed file won’t be identical to an uncompressed one, but what do you mean when you say it’s “different”?

Yeah, I know there will be differences, but they shouldn’t change the overall shape of the waveform, right?

I’m not able to see the difference between silence and the vocal parts. They look like they’re at the same dB level.

So the whole thing is just flat?

Here is a screenshot. The upper portion of the image shows the WAV file’s waveform and the lower portion shows the Ogg file’s.

You can see a lot of differences between the two waveforms.

Have you tried playing the ogg file back in a juce-based player? (e.g. the juce demo).

No, I didn’t try opening it in the JuceDemo.

But I’m taking 2000 samples from the file –
I mean, averaging totalSamples/2000 source samples down to one sample.
And for displaying the waveform, I average again from those 2000 samples.

If that were a bad idea, the WAV file’s waveform would have come out flattened too, right?
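One thing worth checking with this double-averaging approach (a speculation, not a confirmed diagnosis of the Ogg issue): the mean of signed samples in a bucket cancels toward zero for any roughly zero-mean signal, whereas the mean of absolute values (or a min/max pair) preserves the envelope. A self-contained sketch of the difference, with hypothetical helper names:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Mean of the signed samples: positive and negative halves cancel,
// so loud zero-mean audio can average out to nearly nothing.
float signedMean (const std::vector<float>& s)
{
    double sum = 0.0;
    for (float v : s)
        sum += v;
    return (float) (sum / (double) s.size());
}

// Mean of absolute values: tracks the signal's envelope instead.
float absMean (const std::vector<float>& s)
{
    double sum = 0.0;
    for (float v : s)
        sum += std::fabs (v);
    return (float) (sum / (double) s.size());
}
```

For a full-scale sine wave, `signedMean` comes out near zero while `absMean` stays around 0.64, which is the kind of flattening being described.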

I asked about playing the file because you need to make sure that it’s not full of junk, and that the juce ogg reader is reading it correctly. There’s no point worrying about the waveform display if you don’t even know whether you’re shoving garbage into it.

Yeah, I played it using the JuceDemo as well as the application I created.
I can hear the voice properly – more or less the same as the WAV file.

Ok, that’s strange then…

Can you tell me what logic the JuceDemo app uses for displaying the waveform? I haven’t actually seen a waveform view in the JuceDemo.

It’s just showing the live input level, it’s not using AudioThumbnail.