I would like to get some feedback on my process of using this audio level meter. I have created a method that returns the channel sample information, which is then accessed by the level-meter component.
After obtaining the values in the level-meter component, I apply the equation dB = 20 * log10(amplitude), where 0 dB is the top of the meter and -100 dB is the bottom.
I'm somewhat skeptical about whether what I'm passing is the actual signal. I created a method that returns channelData from processBlock, which contains channelData[i]. I was thinking of using a LinearSmoothedValue, but I'm really not sure I like that equation.
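For what it's worth, one subtle point with that equation: log10(0) is -inf, so silence will blow up the meter unless you clamp to the floor. A minimal sketch in plain C++ (function name is mine, not from any library):

```cpp
#include <algorithm>
#include <cmath>

// Convert a linear amplitude (0..1) to decibels, clamped to the meter's
// -100 dB floor so that amplitude == 0 never produces -inf or NaN.
float amplitudeToDb (float amplitude)
{
    constexpr float floorDb = -100.0f;   // bottom of the meter
    if (amplitude <= 0.0f)
        return floorDb;

    return std::max (floorDb, 20.0f * std::log10 (amplitude));
}
```

With this, amplitude 1.0 maps to 0 dB at the top, 0.1 maps to -20 dB, and anything at or below 10^-5 sits on the -100 dB floor.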
Well, single samples arrive at a rate of typically 48,000 samples per second, while GUIs update at around 30 frames per second, so you will miss a lot of samples. In any case, displaying single sample values won't show you meter levels comparable to what we perceive as loudness. To get such values you should calculate the RMS (root mean square) of your audio data. There's a method for this already implemented on the AudioBuffer class; however, I wouldn't recommend using it, as it depends on the buffer size and forgets the previous buffer instantly.
I would recommend calculating the absolute value of each sample and using a first-order low-pass filter with an adequate time constant to smooth the values, something between 10 and 150 ms depending on how well you want to catch transients. You'll find a lot about RMS calculation (DSP) on the web.
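To make the suggestion concrete, here is a sketch of that idea in plain C++ (struct and method names are mine): rectify each sample and run it through a one-pole low-pass whose coefficient comes from the chosen time constant.

```cpp
#include <cmath>

// One-pole low-pass smoother for rectified samples.
// The coefficient is derived from a time constant in seconds:
//   coeff = exp (-1 / (tau * fs))
struct LevelSmoother
{
    float coeff = 0.0f;
    float state = 0.0f;

    void prepare (double sampleRate, double timeConstantSeconds) // e.g. 0.01 .. 0.15 s
    {
        coeff = (float) std::exp (-1.0 / (timeConstantSeconds * sampleRate));
    }

    // Call once per sample on the audio thread; the return value is the
    // smoothed level you can later convert to dB for display.
    float process (float sample)
    {
        const float rectified = std::fabs (sample);
        state = coeff * state + (1.0f - coeff) * rectified;
        return state;
    }
};
```

A shorter time constant (10 ms) tracks transients closely; a longer one (150 ms) gives a calmer, more VU-like reading.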
A common approach is to pass the signal through an envelope follower, choosing attack/release characteristics depending on the style of meter you want (or simply personal taste). Try an instant attack and a few hundred milliseconds of release for a start.
You don't necessarily need to calculate RMS; that would give you an indicator of signal energy, more like a VU meter.
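The envelope follower described above (instant attack, exponential release) can be sketched like this in plain C++; the names and the 300 ms default are mine, chosen to match the "few hundred milliseconds" suggestion:

```cpp
#include <cmath>

// Peak envelope follower: instant attack, exponential release.
struct EnvelopeFollower
{
    float releaseCoeff = 0.0f;
    float envelope = 0.0f;

    void prepare (double sampleRate, double releaseSeconds) // e.g. 0.3 s
    {
        releaseCoeff = (float) std::exp (-1.0 / (releaseSeconds * sampleRate));
    }

    float process (float sample)
    {
        const float rectified = std::fabs (sample);

        if (rectified > envelope)
            envelope = rectified;        // instant attack: jump straight up
        else
            envelope *= releaseCoeff;    // slow exponential decay

        return envelope;
    }
};
```

Because the attack is instant, a single full-scale sample pins the meter at the peak immediately, and it then falls back at the release rate, which is what a typical peak meter ballistics look like.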
Yes, I have been after a peak level meter. I thought that passing the value as I did would create the most accurate representation. It sounds as if I thought incorrectly.
The data is already being sent through an ADSR, delay, and reverb before the values are returned. So I should be creating additional filters/envelopes just for the volume meter?
I thought about this, but if I store the maximum value and only ever replace it with a higher value, it's never going to decline. It's just going to keep getting pushed up and up.
I guess it would be possible to create a timer that resets the object to zero every so often.
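One common pattern that avoids a separate reset timer: have the audio thread push every sample into a running max, and have the GUI's repaint read the value and reset it atomically in one step, so the peak naturally falls between frames. A sketch in plain C++ (names hypothetical, atomics included since they're needed across the two threads):

```cpp
#include <atomic>
#include <cmath>

// Running maximum: the audio thread updates it sample-by-sample, and the
// GUI timer reads-and-resets it, so the displayed peak can decline again
// instead of being pushed up forever.
struct PeakHold
{
    std::atomic<float> peak { 0.0f };

    void push (float sample)                 // audio thread, once per sample
    {
        const float rectified = std::fabs (sample);
        float current = peak.load();

        // Lock-free max update: retry if another thread won the race.
        while (rectified > current
                && ! peak.compare_exchange_weak (current, rectified))
            ;
    }

    float readAndReset()                     // GUI thread, once per timer tick
    {
        return peak.exchange (0.0f);
    }
};
```

Each GUI frame then shows the highest peak since the previous frame; if you want the bar to fall smoothly rather than drop to zero, feed this value into a release envelope like the one discussed earlier in the thread.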
To get some inspiration: I created an open-source meter that displays RMS and max.
It takes care of smoothing the data (averaging over multiple buffers, configurable) and of atomic access between measuring on the audio thread and painting on the GUI thread.
This is how I have created mine as of now: one gradient fill for the background and rectangles with 0% opacity fill for the outlines. I will then repeat it for the right channel once I am confident.
I should be using atomics; I haven't done that yet either.
I didn't want to immediately change my technique, so I tried a few other options by creating timer callbacks for the audio thread and the level-meter component. I was able to get things into a comfortable range where the meter responds appropriately. However, looking at the console I am getting NaN and -inf values, which is definitely the reason for any flickering at this point.
Did you take the absolute value of your samples before calculating the logarithm? By the way, with a timer you will miss transients, so your max won't be the true max. Do it sample-wise.
I know this is an old topic, but I’ve checked out AudioDeviceManager::LevelMeter and was hoping you could elaborate on the decay factor N = 0.99992. How or why was this particular number chosen?
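I can't speak for why that exact constant was chosen, but you can back out the decay time it implies. Assuming the level is multiplied by N = 0.99992 once per sample, the time to fall by 60 dB follows from N^n = 10^(-60/20); a small sketch (function name mine, 44.1 kHz assumed for the example):

```cpp
#include <cmath>

// Seconds for a per-sample decay factor N to fall by 60 dB:
//   N^n = 10^(-60/20)  =>  n = ln(0.001) / ln(N),  seconds = n / fs
double decaySecondsFor60dB (double factorPerSample, double sampleRate)
{
    const double target  = std::pow (10.0, -60.0 / 20.0); // 0.001 linear
    const double samples = std::log (target) / std::log (factorPerSample);
    return samples / sampleRate;
}
```

At 44.1 kHz, N = 0.99992 works out to roughly 2 seconds per 60 dB of fall, which is in the ballpark of classic peak-meter release ballistics; my guess is the number was tuned by ear around a target like that rather than derived from a standard.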