I was wondering if anybody here had any advice (general strategies, data structures, etc.) about the best approach to drawing audio buffers: performance (fast draw and redraw), accuracy (i.e. no aliasing), and zooming (being able to zoom in/out quickly). This is implemented in 99% of audio applications out there, and I was wondering how people do it so fast. My approaches have generally been somewhat satisfactory, but they've been much slower than commercial apps.
I'll give that a go – I was thinking about experimenting with low-pass filtering the audio, trying segmented image buffers, multithreading, etc. But it looks like my first approach was pretty much OK, since it's similar to your JUCE demo audio code: I just average n samples per pixel (based on the zoom level) and plot that. But I wasn't using drawVerticalLine, so that might be the key to the speed… Thanks!
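For what it's worth, here's a minimal, JUCE-free sketch of that per-pixel reduction idea. Note that `reducePeaks` is a made-up helper name, not anything from JUCE, and it keeps a (min, max) pair per pixel rather than an average, which is the usual way to avoid making the waveform look quieter than it is:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical helper (not JUCE API): reduce a sample buffer to one
// (min, max) pair per pixel column. samplesPerPixel comes from the zoom level.
std::vector<std::pair<float, float>> reducePeaks(const std::vector<float>& samples,
                                                 std::size_t samplesPerPixel)
{
    std::vector<std::pair<float, float>> peaks;
    for (std::size_t start = 0; start < samples.size(); start += samplesPerPixel)
    {
        const std::size_t end = std::min(start + samplesPerPixel, samples.size());
        float lo = samples[start], hi = samples[start];
        for (std::size_t i = start + 1; i < end; ++i)
        {
            lo = std::min(lo, samples[i]);  // track the lowest peak in this column
            hi = std::max(hi, samples[i]);  // track the highest peak in this column
        }
        peaks.emplace_back(lo, hi);  // one vertical line per pixel: lo..hi
    }
    return peaks;
}
```

Each (lo, hi) pair then becomes a single vertical line per pixel (e.g. one drawVerticalLine call), which is far cheaper than plotting every sample.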
I guess for zoom levels, I'll just switch to a different, interpolating drawing mode when zoomed down to the individual-sample level.
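That interpolating mode can be as simple as linear interpolation between neighbouring samples. A tiny sketch, with `sampleAt` being a hypothetical helper of my own, not a JUCE function:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical helper: when zoomed past one sample per pixel, fetch the
// value at a fractional sample position by linear interpolation, so the
// waveform stays smooth between the real samples.
float sampleAt(const std::vector<float>& samples, double position)
{
    const std::size_t i0 = static_cast<std::size_t>(position);
    const std::size_t i1 = std::min(i0 + 1, samples.size() - 1);
    const double frac = position - static_cast<double>(i0);
    return static_cast<float>(samples[i0] * (1.0 - frac) + samples[i1] * frac);
}
```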
Um. No I did not see that class. I just checked it out and got it working in a jiffy – literally 5 min. Damn it.
I am not entirely sad I spent many long hours trying to figure this out as it was good experience, but your code is much more sophisticated than mine and does exactly what I was trying to do (and was close to figuring out, mind you). But I wish I had read this reply sooner… I knew the key was caching and multiple threads.
Now I need to figure this out for spectral representations – you don't happen to have a class that does that too, do you? If not, I have your AudioThumbnail class to study and can try to reproduce it with FFT data.
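The same reduction-and-cache idea should carry over to a spectrogram: chop the audio into blocks, take the magnitude spectrum of each block, and cache those columns the way the thumbnail caches waveform peaks. A JUCE-free sketch using a naive DFT for clarity (a real implementation would use a windowed FFT, and `blockMagnitudes` is my own name, not a JUCE one):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: magnitude spectrum of one block via a naive O(n^2) DFT.
// Real code would use an FFT (and a window function) instead.
std::vector<float> blockMagnitudes(const std::vector<float>& block)
{
    const double pi = 3.14159265358979323846;
    const std::size_t n = block.size();
    std::vector<float> mags(n / 2 + 1);
    for (std::size_t k = 0; k < mags.size(); ++k)
    {
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t)
        {
            const double phase = -2.0 * pi * static_cast<double>(k * t) / static_cast<double>(n);
            re += block[t] * std::cos(phase);
            im += block[t] * std::sin(phase);
        }
        mags[k] = static_cast<float>(std::sqrt(re * re + im * im));  // bin magnitude
    }
    return mags;
}
```

Each block's magnitudes become one column of pixels, and zooming works just like the waveform case: more blocks per column when zoomed out, interpolation between columns when zoomed in.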
Can you alter the AudioThumbnail class to accept an AudioFormatReader, so that it can also accept an AudioSubsectionReader? I use a lot of AudioSubsectionReaders in my projects, and there is currently no way I can draw those subsections.
Also, it would be useful if you could use an AudioSource as input for the AudioThumbnail. I apply automation to the source, so I need the waveform to display with the automation applied.
I'm using a WAV file and an Ogg file that both contain the same audio. I'm using an AudioFormatReader to get the samples and display them. The paint method is exactly the same for both file types, but there is a big difference between the waveforms. Do I need to make some adjustment to the sample values coming out of the Ogg format reader? The waveform displayed with the WAV format reader looks fine.
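One thing worth ruling out (this is a guess at the cause, not a confirmed JUCE behaviour): some format readers deliver fixed-point integer samples and others floating point, so display code that assumes one scale will draw the other format wrongly. A small sketch of normalising 32-bit fixed-point samples into the -1..1 float range before drawing:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Sketch: convert 32-bit fixed-point samples (as some readers deliver them)
// into normalised floats in [-1, 1] so both formats draw at the same scale.
std::vector<float> toFloatSamples(const std::vector<std::int32_t>& fixed)
{
    const float scale = 1.0f / static_cast<float>(std::numeric_limits<std::int32_t>::max());
    std::vector<float> out;
    out.reserve(fixed.size());
    for (const std::int32_t s : fixed)
        out.push_back(static_cast<float>(s) * scale);  // full-scale int -> 1.0f
    return out;
}
```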
I asked about playing the file because you need to make sure that it’s not full of junk, and that the juce ogg reader is reading it correctly. There’s no point worrying about the waveform display if you don’t even know whether you’re shoving garbage into it.