I'm trying to build a custom component to draw waveforms, but I'm having trouble when zooming in: I don't understand how to draw a waveform properly at high zoom levels. I'm using the min/max method, drawing a vertical line for each min/max pair. Please help me, I'm not sure if this is the proper way of drawing waveforms; I found the approach on Stack Overflow.
This link is a nice example for understanding how to draw a waveform.
I suggest you have a look at the AudioThumbnail class.
When you zoom all the way in you might want to change the drawing algorithm.
I've yet to make one I'm 100% happy with (usually the anti-aliasing buggers up) ...
(Complete tangent follows....)
...but the important thing (assuming it's audio), when you are zoomed in, is to draw it as a continuous line instead of 'steps'.
Otherwise the DSP nerds will look down on you and make tutting noises ;-) And besides, it looks more beautiful ;)
I suppose really, for a proper representation you need to do something which I can't even comprehend without looking at a book, but actually reconstruct the output waveform properly ...
http://www.dspguide.com/ch3/2.htm etc. etc.
Surely showing the steps is more accurate though? What this says is that the sample value during this time period is whatever the step is.
If it was a curve (or series of lines) connecting the start of each step this would indicate the sample values are continuously changing.
I'm not saying this is the most aesthetically pleasing though.
I guess it depends if you are representing the content of the sample buffer or the sampled data. Maybe the sample points without the joining staircase is a good view of the data - but would look weird I think.
It's more interesting to represent the output after the DAC though? Alternating high/low sample points would reconstruct to a nice sine wave (at half the sample rate, say 22kHz).
Maybe the #1 design would have the waveform super-imposed on the sample points. CPUs need something to do after all..
There's a project for a rainy Sunday.
As bazrush said, it helps if you look at your zoom level and then decide if you've got more pixels than samples (oversampling) or more samples than pixels (undersampling) and choose your rendering method appropriately.
The undersampling's a pain because you end up with all the potential aliasing problems we've all come to know and love in the DSP world. Fortunately you can usually take a shortcut and use the min/max of the range of samples represented by each column of pixels. I wouldn't use the mean average because that's often going to be zero if the waveform's being sampled over a number of its cycles. With this method you'll often get a visual 'pop' on changes of zoom level but you can get away with this because the whole view's changing at the same time. Where you can't get away with a 'pop' is scrolling - but if you pay attention to how your pixel -> sample scaling is quantized this is usually fixable after a bit of shouting at the screen.
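The min/max-per-column idea above can be sketched like this in plain C++ (the function name `minMaxPerColumn` is mine, not from any library; a real component would then draw a vertical line from each pair):

```cpp
#include <vector>
#include <algorithm>
#include <utility>

// For each pixel column, find the min and max of the samples that column
// covers. samplesPerPixel > 1 means we're undersampling (zoomed out).
std::vector<std::pair<float, float>> minMaxPerColumn(
    const std::vector<float>& samples,
    double startSample,       // sample shown at pixel 0 (may be fractional)
    double samplesPerPixel,
    int numColumns)
{
    std::vector<std::pair<float, float>> columns;
    columns.reserve((size_t) numColumns);

    for (int x = 0; x < numColumns; ++x)
    {
        auto begin = (size_t) std::max(0.0, startSample + x * samplesPerPixel);
        auto end   = (size_t) std::max(0.0, startSample + (x + 1) * samplesPerPixel);
        end = std::min(end, samples.size());

        float lo = 0.0f, hi = 0.0f;
        if (begin < end)
        {
            lo = hi = samples[begin];
            for (auto i = begin + 1; i < end; ++i)
            {
                lo = std::min(lo, samples[i]);
                hi = std::max(hi, samples[i]);
            }
        }
        columns.emplace_back(lo, hi);  // draw a vertical line from lo to hi
    }
    return columns;
}
```

Note the mean is deliberately not used, for the reason above: averaging over whole cycles tends towards zero and the waveform visually disappears.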
The above scheme will work functionally but if your sample's huge you're going to have performance issues where each pixel needs to sample thousands of points in the waveform. Best way to solve this is by pre-processing the waveform data to generate a lower resolution copy (eg. where each point in the lower resolution version represents the min/max of 16 samples from the original waveform). More elegantly, carry on progressively downsampling each low resolution version until you end up with a tree whose root will hold a single sample representing the min/max of every single sample in the original waveform (just like mipmap textures in the graphics world). If your waveform is editable, you'll also have to rebuild the affected parts of the tree.
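A minimal sketch of that mipmap-style min/max pyramid (the name `buildPyramid` is mine; for simplicity this halves at every level and drops a trailing odd raw sample, whereas a real cache would probably use a bigger reduction factor like 16 per level):

```cpp
#include <vector>
#include <algorithm>
#include <utility>

using MinMax = std::pair<float, float>;

// Build a mipmap-style pyramid: level 0 pairs up raw samples; each further
// level halves the previous one by combining the min/max of adjacent pairs.
// The root ends up holding the min/max of the entire waveform.
std::vector<std::vector<MinMax>> buildPyramid(const std::vector<float>& samples)
{
    std::vector<std::vector<MinMax>> levels;

    // Level 0: each entry covers 2 raw samples.
    std::vector<MinMax> level;
    for (size_t i = 0; i + 1 < samples.size(); i += 2)
        level.emplace_back(std::min(samples[i], samples[i + 1]),
                           std::max(samples[i], samples[i + 1]));
    levels.push_back(level);

    // Keep halving until a single entry spans the whole file.
    while (levels.back().size() > 1)
    {
        const auto& prev = levels.back();
        std::vector<MinMax> next;
        for (size_t i = 0; i + 1 < prev.size(); i += 2)
            next.emplace_back(std::min(prev[i].first,  prev[i + 1].first),
                              std::max(prev[i].second, prev[i + 1].second));
        if (prev.size() % 2 != 0)       // carry an odd leftover up a level
            next.push_back(prev.back());
        levels.push_back(next);
    }
    return levels;
}
```

When rendering, pick the level whose resolution is just finer than one entry per pixel, then min/max-reduce those entries per column; on edits, only the tree branches covering the changed sample range need rebuilding.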
Juce has the AudioThumbnail class to calculate and store lower resolution waveform data but I seem to remember looking at it before and thinking that it only stored a single level, so is less suited to arbitrary zooming - someone correct me if I'm wrong.
BTW with the oversampled view, my favourite style is dots to represent the sample points with linear connecting lines in between. The dots are the bit that DSP theory cares about, and the connecting lines help me mentally conceptualise the actual shape of the waveform. Staircase is also fine but you really still need the dots to tell the user if the left, centre or right of each step represents the actual sample point.
> Where you can't get away with a 'pop' is scrolling - but if you pay attention to how your pixel -> sample scaling is quantized this is usually fixable after a bit of shouting at the screen
What exactly did you shout at your screen? I've got this problem and not spent enough time on it yet ;-) I've already had to do some performance optimisation by downsampling the audio into a temporary smaller buffer.
> More elegantly, carry on progressively downsampling each low resolution version until you end up with a tree whose root will hold a single sample representing the min/max of every single sample in the original waveform
Makes it idiot proof?
One thing I've done in the past is use several juce::AudioThumbnails at various resolutions (2048, 1024, 512, 256) and used the appropriate one for the zoom. To be honest, there's not much difference. The overhead of creating the RectangleList and then drawing it is the main bottleneck of the thumbnail (or if you draw the waveform in another way).
If you zoom the juce::AudioThumbnail in further than the cached resolution it will re-read from the source. Although this disk reading is slow, you're usually only reading a small number of samples at this point so it doesn't have a drastic effect.
The exception to this is if you're reading a compressed format that is slow to decode ('cough' mp3 'cough'). The lag at this point can be noticeable, so you might want to do some clever background-thread reading of these files to avoid stuttering the message thread. Of course this then becomes a whole can of worms: you'll have to repaint the waveform as the file is buffered, and you could get a flickery display while that happens. The trade-off is accuracy vs. performance vs. effort.
> What exactly did you shout at your screen?
It was a long time ago but I remember achieving it through my usual approach of thinking about it carefully to begin with and then patching up all the little crappy bits as they came along, my brain's never been good at these 'off by one' cases :)
I think the key was thinking about it like this:
- At some point in time, you have a view of the waveform. For the sake of argument, let's choose a number and say it starts at 'pixel 100.0' in the waveform (obviously, this translates to some actual sample index in the waveform, depending on the current zoom level)
- The user scrolls the view right by approximately one pixel so it now starts at 'pixel 99.x' (where x represents some fractional value)
- Assuming you're not doing some posh sub-pixel sampling, you can quantize the starting pixel position to any number between 99.0 and 99.9999 without any visible difference to where the view begins horizontally on the screen
- Then ask: how can I quantize that starting pixel position in such a way that the next pixel (pixel 100.0 before the view was scrolled) queries the exact same range of samples that it did before?
- Shout at the screen and mash keys until it works
Upon reading that, I'm not sure it doesn't just confuse things more! In short, at any given zoom level you should be able to quantize the position where the view begins in such a way that whole pixels will always repeatedly ask for the same range of samples.
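One way to sketch that quantization idea (my own minimal version under the assumptions above, not the poster's actual code): snap the view's starting sample to a whole multiple of `samplesPerPixel`, so every pixel column always asks for the same bucket of samples no matter where the scroll landed.

```cpp
#include <cmath>

// Snap the view's starting sample to a whole multiple of samplesPerPixel.
// After this, pixel boundaries always fall on the same sample indices, so
// each column min/maxes an identical sample range at every scroll position
// - no shimmering as the view moves by fractional amounts.
double quantizeViewStart(double startSample, double samplesPerPixel)
{
    return std::floor(startSample / samplesPerPixel) * samplesPerPixel;
}
```

The quantized start is at most one pixel left of the requested one, which matches the point above about there being no visible difference in where the view begins.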
The downsampled tree stuff didn't help with this, but didn't make it any more complicated either. I seem to remember that a low res approximation worked OK as long as you chose an appropriate level in the tree and stuck with it for a particular zoom level. I don't think there was a need to use a level in the tree and also have to 'patch it up' with higher res data to make the scrolling work as described above.
> The overhead of creating the RectangleList and then drawing it is the main bottleneck of the thumbnail (or if you draw the waveform in another way).
Yes I can imagine that. I was using OpenGL and redrawing on every single frame which I thought might be a performance problem, but it barely registered on the iOS devices I profiled it on. These hardware renderers eat verts for breakfast and still have plenty of room for a fill rate lunch!
If I did it again I'd probably do a render-to-texture whenever the view needs a repaint and then just draw that texture to the screen as a 2D sprite thereafter, but I think Juce does that anyway?
Coding's funny isn't it? Drawing a waveform to a screen would sound like such a simple task from the outside!
Ah - cheers - there's a good idea or two there I'll try :) Much appreciated!