Can anyone help me out with how to actually calculate the LUFS/LKFS value of a signal? I can’t find any sources anywhere online and no textbooks seem to mention it.
If you want to go one step beyond, there is also R128 from the European Broadcasting Union (EBU) for loudness normalisation, and the measurement algorithm is available from @samuel at https://github.com/klangfreund/LUFSMeter.
(and also as a plugin to buy: https://www.klangfreund.com/lufsmeter/) - highly recommended
This is great, thanks both!
Haven’t mentioned it here yet:
I changed the license from GPL2 to MIT a few months ago. Feel free to use it for whatever you want.
The standard defines the filtering through coefficients at a fixed sample rate, which is rather silly IMO. If you need to support multiple sample rates, here are filter parameters that should meet the requirements (that’s how I’ve implemented it at least):
Pre-filter (high shelf)
crossOverFrequency = 1500 Hz
gain = 4.0 dB
RLB (high pass)
cutOffFrequency = 37.5 Hz
Q = 0.5
Both filters are 2nd order Butterworth filters. De-cramping should not be necessary for normal sample rates.
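For illustration, here is one way those parameters could be turned into biquad coefficients at an arbitrary sample rate, using the standard RBJ Audio EQ Cookbook formulas (a sketch, not LUFSMeter's actual code; the function names are mine, and the 37.5 Hz / 1500 Hz / 4 dB values are the ones quoted above):

```python
import math

def highpass_coeffs(fs, fc=37.5, q=0.5):
    """RBJ cookbook 2nd-order high-pass (37.5 Hz, Q = 0.5 as quoted above)."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cosw0 = math.cos(w0)
    a0 = 1 + alpha
    b = [(1 + cosw0) / 2, -(1 + cosw0), (1 + cosw0) / 2]
    a = [1.0, -2 * cosw0 / a0, (1 - alpha) / a0]
    return [x / a0 for x in b], a

def highshelf_coeffs(fs, fc=1500.0, gain_db=4.0):
    """RBJ cookbook high shelf with slope S = 1 (1500 Hz, +4 dB as quoted above)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * fc / fs
    cosw0, sinw0 = math.cos(w0), math.sin(w0)
    alpha = sinw0 / 2 * math.sqrt(2)  # S = 1
    a0 = (A + 1) - (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha
    b = [A * ((A + 1) + (A - 1) * cosw0 + 2 * math.sqrt(A) * alpha),
         -2 * A * ((A - 1) + (A + 1) * cosw0),
         A * ((A + 1) + (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha)]
    a = [1.0,
         2 * ((A - 1) - (A + 1) * cosw0) / a0,
         ((A + 1) - (A - 1) * cosw0 - 2 * math.sqrt(A) * alpha) / a0]
    return [x / a0 for x in b], a

def magnitude(b, a, f, fs):
    """|H(e^jw)| of a biquad at frequency f, for a quick sanity check."""
    w = 2 * math.pi * f / fs
    z1 = complex(math.cos(-w), math.sin(-w))
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return abs(num / den)
```

A quick sanity check at, say, 44.1 kHz: the high-pass should be essentially transparent at 1 kHz and strongly attenuate 10 Hz, and the shelf should sit near unity at 100 Hz and near +4 dB (a factor of about 1.58) at 10 kHz.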
Thank you @stian.
Yeah, it is silly to specify a filter by only providing the coefficients for the fixed sample rate of 48 kHz! This has actually led to implementations of ITU-R BS.1770 that resample to 48 kHz before the filter stage.
If someone ever needs a 2nd order IIR filter defined by the coefficients for a given sample rate (currently 48 kHz is hardcoded, but it's easy to change), take a look at:
The documentation of the derivation is provided as well:
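For reference, the 48 kHz coefficients published in ITU-R BS.1770 can be dropped into a plain direct-form biquad like this (a minimal sketch; the class and variable names are mine):

```python
class Biquad:
    """Direct-form I second-order section with a as [a1, a2] (a0 = 1)."""
    def __init__(self, b, a):
        self.b, self.a = b, a
        self.x1 = self.x2 = self.y1 = self.y2 = 0.0

    def process(self, x):
        y = (self.b[0] * x + self.b[1] * self.x1 + self.b[2] * self.x2
             - self.a[0] * self.y1 - self.a[1] * self.y2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

# Stage 1 of BS.1770: the shelving pre-filter, coefficients as published for 48 kHz.
shelf = Biquad(b=[1.53512485958697, -2.69169618940638, 1.19839281085285],
               a=[-1.69065929318241, 0.73248077421585])
# Stage 2 of BS.1770: the RLB weighting curve (a high-pass), same source.
rlb = Biquad(b=[1.0, -2.0, 1.0],
             a=[-1.99004745483398, 0.99007225036621])

# The RLB stage has a double zero at DC, so feeding a constant signal
# through the K-weighting chain should settle to zero.
out = [rlb.process(shelf.process(1.0)) for _ in range(48000)]
```

These coefficients only hold at exactly 48 kHz, which is the whole point of the discussion above.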
Thank you for this algorithm.
I've compiled your code and compared the results of your plugin with my compiled binary. I got different results with the same audio files.
My test environment is as follows:
- I use the AAX version with PT 2018.12.0
- I compared the values in AudioSuite mode
- I use OS X as the operating system
My question is: do your plugin and the GitHub repo use the same algorithm?
My plugin and the published code should give you the same measurement results. Internally, overlapping blocks of audio are analysed; a new block starts every 100 ms. In my implementation, the first block starts when an instance gets created or when you reset a measurement. It is quite likely that these blocks are not 100% aligned between two measurements of the same file, which can lead to slightly different measurement results. That's in line with the specification, and it is also the reason why a tolerance is always provided for the measurement results of test files.
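As a sketch of the gating scheme those blocks feed into (per ITU-R BS.1770 / EBU R128: an absolute gate at -70 LKFS, then a relative gate 10 LU below the average of the surviving blocks; the function names are mine):

```python
import math

def block_loudness(z):
    """Loudness of one gating block from its mean-square power z."""
    return -0.691 + 10 * math.log10(z)

def integrated_loudness(block_powers):
    """Gated integrated loudness from a list of per-block mean-square powers."""
    # Absolute gate: drop blocks below -70 LKFS.
    abs_gated = [z for z in block_powers if block_loudness(z) > -70.0]
    if not abs_gated:
        return float("-inf")
    # Relative gate: 10 LU below the loudness of the absolutely gated set.
    ref = -0.691 + 10 * math.log10(sum(abs_gated) / len(abs_gated)) - 10.0
    rel_gated = [z for z in abs_gated if block_loudness(z) > ref]
    if not rel_gated:
        return float("-inf")
    return -0.691 + 10 * math.log10(sum(rel_gated) / len(rel_gated))
```

For a constant-power signal every block has the same power, so the gated result simply equals the per-block loudness; the block-alignment effect described above only shows up with real, time-varying material.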
The available source code and the code used in my plugins are not 100% identical, but they should yield the same results. I’ve tested both with the test-files by EBU and ITU. Links to these test files can be found in the LUFS Meter manual.
Thank you for the reply. I'll run more tests and try changing the block size (currently I feed 512 samples at a time).
I was referring to the ‘gating blocks’ as specified in ITU-R BS.1770, not audio buffer sizes. The results should not change when you alter the buffer size.
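The point that host buffer size cannot matter can be illustrated with a toy accumulator (a sketch with made-up names and shrunken block sizes): the gating blocks depend only on sample positions in the stream, not on how the host slices it into buffers.

```python
import math

def block_powers(samples, chunk_size, block_len=192, hop=48):
    """Mean-square power of overlapping blocks (toy sizes standing in for
    the 400 ms blocks / 100 ms hop), fed chunk_size samples at a time."""
    buf = []
    powers = []
    for start in range(0, len(samples), chunk_size):
        buf.extend(samples[start:start + chunk_size])
        # Emit every block that is now complete, regardless of chunk size.
        while len(powers) * hop + block_len <= len(buf):
            begin = len(powers) * hop
            block = buf[begin:begin + block_len]
            powers.append(sum(x * x for x in block) / block_len)
    return powers

sig = [math.sin(2 * math.pi * 0.01 * n) for n in range(2000)]
```

Feeding `sig` in chunks of 512, 4096, or even 7 samples yields bit-identical block powers, which is why only the block alignment at measurement start, not the buffer size, can shift the result.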