I was wondering if anybody here had any advice, in terms of general strategies, data structures, etc., about the best approach to drawing audio buffers: in terms of performance (fast draw, redraw), accuracy (i.e. no aliasing), and zooming (can zoom in/out quickly). This is implemented in 99% of audio applications out there, and I was wondering how people do it so fast. My approaches have generally been somewhat satisfactory, but they’ve been much slower than commercial apps.
I’ll give that a go – I was thinking about experimenting with low-pass filtering the audio, trying segmented image buffers, multithreading, etc. But it looks like my first approach was pretty much OK, since it’s similar to your JUCE audio demo code. I just average n samples per pixel (based on zoom level) and plot that. But I wasn’t using drawVerticalLine, so that might be the key to the speed… Thanks!
I guess for zoom levels, I’ll just switch to a different, interpolating drawing mode when zoomed down to the individual-sample level.
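For what it’s worth, a common alternative to plain averaging is to keep the min and max sample per pixel column, which preserves transients that a mean would smear away. Here’s a minimal, JUCE-free sketch of that reduction step (the `PeakColumn`/`buildPeaks` names are my own, not from any library); each resulting column could then be drawn with something like drawVerticalLine:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One drawable pixel column: the lowest and highest sample it covers.
struct PeakColumn { float minValue; float maxValue; };

// Reduce `samples` to `numPixels` columns by taking min/max per bucket.
// Keeping the extremes (rather than the mean) preserves peaks at low zoom.
std::vector<PeakColumn> buildPeaks(const std::vector<float>& samples, std::size_t numPixels)
{
    std::vector<PeakColumn> peaks(numPixels, {0.0f, 0.0f});
    if (samples.empty() || numPixels == 0)
        return peaks;

    const double samplesPerPixel = static_cast<double>(samples.size()) / numPixels;

    for (std::size_t px = 0; px < numPixels; ++px)
    {
        const std::size_t start = static_cast<std::size_t>(px * samplesPerPixel);
        std::size_t end = static_cast<std::size_t>((px + 1) * samplesPerPixel);
        end = std::min(std::max(end, start + 1), samples.size());

        const auto range = std::minmax_element(samples.begin() + start,
                                               samples.begin() + end);
        peaks[px] = { *range.first, *range.second };
    }
    return peaks;
}
```

For deep zoom-out you can also pre-compute these peaks at a few fixed resolutions and cache them, so a redraw only re-reduces cached peaks instead of re-reading the file.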
Um. No, I did not see that class. I just checked it out and got it working in a jiffy – literally 5 minutes. Damn it.
I am not entirely sad I spent many long hours trying to figure this out, as it was good experience, but your code is much more sophisticated than mine and does exactly what I was trying to do (and was close to figuring out, mind you). But I wish I had read this reply sooner… I knew the key was caching and multiple threads.
THANK YOU!!!
Now I need to figure this out for spectral representations – you don’t happen to have a class that does that too, do you? If not, I’ll use your AudioThumbnail class as a reference and reproduce it with FFT data.
Can you alter the AudioThumbnail class to accept an AudioFormatReader, so that I can pass it an AudioSubsectionReader too? I use a lot of AudioSubsectionReaders in my projects, and there is no way I can draw those subsections.
Also, it would be useful if you could use an AudioSource as input to the AudioThumbnail. I apply automation to the source, so I need the waveform to display after the automation has been applied to it.
Thanks
Well, not really – it’s important that the thumbnail can create and delete readers itself; that’s why it uses an InputSource rather than just taking a reader directly.
I suppose it’d be possible to give it some kind of virtual method that creates the reader from an input stream, so you could override that to do your custom stuff…
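That virtual-factory idea might look something like the sketch below. Note this is only an illustration of the shape of the pattern, not the real JUCE API: `Reader`, `Thumbnail`, `createReader`, and `SubsectionThumbnail` are all hypothetical stand-ins (an actual subclass would wrap the reader in an AudioSubsectionReader):

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for AudioFormatReader; the real class is much richer.
struct Reader { long long startSample = 0; long long length = 0; };

struct Thumbnail
{
    virtual ~Thumbnail() = default;

    // The thumbnail calls this whenever it needs a fresh reader, so it can
    // create and delete readers itself (e.g. on its background thread).
    virtual std::unique_ptr<Reader> createReader() const
    {
        return std::make_unique<Reader>(Reader{0, 441000}); // whole file
    }
};

// User subclass: restrict the reader to a subsection of the file, mimicking
// what wrapping it in an AudioSubsectionReader would achieve.
struct SubsectionThumbnail : Thumbnail
{
    long long start, len;
    SubsectionThumbnail(long long s, long long l) : start(s), len(l) {}

    std::unique_ptr<Reader> createReader() const override
    {
        auto r = Thumbnail::createReader();
        r->startSample = start;
        r->length = len;
        return r;
    }
};
```

The point of the factory method is that ownership stays with the thumbnail: it can throw the reader away when idle and recreate it later, which a directly-passed reader pointer would not allow.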
I’m using a WAV file and an Ogg file with the same content. I’m using an AudioFormatReader to get the samples for display. The paint method is exactly the same for both file types, but there is a lot of difference between the waveforms.
Do I need to make some adjustment to the sample values coming out of the Ogg format reader?
The waveform displayed with the WAV AudioFormatReader looks fine.
But I’m taking 2000 samples from the file.
I mean, I average every totalSamples/2000 samples down to one value.
And for displaying the waveform, I’m still averaging out samples from those 2000 values.
If averaging is a bad idea, then the WAV file should have shown up as flattened too, right?
I asked about playing the file because you need to make sure that it’s not full of junk, and that the juce ogg reader is reading it correctly. There’s no point worrying about the waveform display if you don’t even know whether you’re shoving garbage into it.