Identical code but different output in WDL and JUCE

I wrote a few things in generic C++ that aren’t bound to any specific framework like JUCE or WDL. A few helpful classes, such as an “intelligent” SampleBuffer similar to AudioBuffer but with optimized manipulation methods (super-fast sample shifting for FIR delay lines, etc.), a few filters, things like that.

I’ve been tinkering with my independent framework in WDL for a while, simply because it compiles a lot quicker than JUCE here. But WDL’s development is so volatile, inconsistent and opaque that I’m not sure I’d want to put up with it for serious work.
So I included my framework into a blank JUCE audio plugin project and got going.

Since JUCE’s AudioBuffer is float based, but my “intelligent” SampleBuffer is double based (because WDL uses double), I wrote a set of for loops to import/export (by static_casting) between AudioBuffer/float and SampleBuffer/double.

Apart from per-sample static_casting float to double to import, and later static_casting double to float to export from my SampleBuffer back to JUCE AudioBuffer, the processBlock of my JUCE plugin is absolutely identical (!) to the processDoubleReplacing in my WDL plugin.

The JUCE project includes the identical files from the identical locations on my hard drive that the WDL project accesses. Both JUCE and WDL projects call the same filtering functions, the same clipping functions, the same stuffing and decimation functions, all with the samples in double format, all processing done in my own SampleBuffer class.

The only difference, again, is that in JUCE I have to cast from float to double before the actual processing, and I have to cast from double to float after the actual processing.
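To make that concrete, here’s a minimal sketch of what the wrapper looks like. SampleBuffer, its setSample/getSample accessors and processShared are placeholder names standing in for my own framework code, and the class name is made up:

```cpp
// Sketch only: cast in, run the shared double-based processing, cast out.
void MyPluginAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                           juce::MidiBuffer& /*midi*/)
{
    const int numChannels = buffer.getNumChannels();
    const int numSamples  = buffer.getNumSamples();

    // import: float -> double into the framework-independent buffer
    for (int ch = 0; ch < numChannels; ++ch)
    {
        const float* in = buffer.getReadPointer (ch);
        for (int i = 0; i < numSamples; ++i)
            sampleBuffer.setSample (ch, i, static_cast<double> (in[i]));
    }

    processShared (sampleBuffer);   // identical code path to processDoubleReplacing

    // export: double -> float back into JUCE's buffer
    for (int ch = 0; ch < numChannels; ++ch)
    {
        float* out = buffer.getWritePointer (ch);
        for (int i = 0; i < numSamples; ++i)
            out[i] = static_cast<float> (sampleBuffer.getSample (ch, i));
    }
}
```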

And yet, the WDL plugin generates a 100% pure output signal, but the JUCE plugin generates some sort of weird static or noise, plus there seem to be differences of several dB in the filtering. (Using identical filter code.)

Below are two SPAN screenshots of running an external 900 Hz sine through a simple oversampled clipper: zero-stuff, filter, +20 dB gain, soft clip, -8 dB gain, filter, decimate. Same oversampling amount, same oversampling filter arrangement, identical conditions inside both processing blocks. I didn’t adapt or change anything to work with JUCE- or WDL-specific classes; it’s all the same independent C++ code that would work in any other environment as well.
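In rough pseudo-C++, the chain behind both screenshots looks something like this (every name here is a placeholder for my own framework-independent functions, not real API):

```cpp
// Sketch of the shared oversampled clipper chain, same in both plugins.
void processShared (SampleBuffer& buf)
{
    zeroStuff        (buf, oversamplingFactor); // insert zeros between samples
    upsampleFilter   (buf);                     // image-rejection lowpass
    applyGainDb      (buf, +20.0);
    softClip         (buf);
    applyGainDb      (buf, -8.0);
    downsampleFilter (buf);                     // anti-alias lowpass
    decimate         (buf, oversamplingFactor); // keep every Nth sample
}
```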

JUCE:
spectrum-juce

WDL:
spectrum-wdl

See how on the JUCE one (upper) there’s some sort of noise at the bottom of the spectrum, but it’s totally clean on the WDL shot (lower)?

See how on the JUCE one, the two right-most spikes are at roughly -122 and -138 dBFS, but on the WDL one they’re at -116 and -130? (If the scale on the right is cut off in the WDL shot, right-click and “view image”.)

(Before you ask: no, they’re not jumping and moving about. They keep their levels absolutely consistently in both plugins.)

So… how can that be? Can there really be such a significant difference in precision between float and double?

I’ve read that on 64-bit systems, real-time processing of doubles will probably be the more performant method anyway, since doubles are “native” to the architecture, while floats are just stored using a double’s memory size but have to be truncated first, i.e. they don’t make the best use of available memory, plus they require additional handling, potentially even slowing processing down?
I don’t know if that’s just a load of old codswallop or if there’s anything to it, but it’s somewhere on StackOverflow.

But if there’s actually something to it, then why is JUCE float based and not double?
Just for backwards compatibility with non-64 bit systems? Or to be able to run on more “primitive” devices like integrated Linux boxes?

Is there any (simple) way of making a JUCE project double based instead of float “natively”, i.e. without having to hack JUCE module code? Just to check if that actually changes anything?

JUCE’s AudioBuffer has been a template that also works with doubles for quite some time now. Support for double processing was also added in some of the other audio processing parts; for example, plugins can implement a double-based audio processing method. (But double support isn’t everywhere, which is a bit annoyingly inconsistent…)

Yes, I know. Before I started playing in WDL, I used JUCE a lot. Back then, I would just create a double AudioBuffer and for-loop from AudioBuffer<float> to AudioBuffer<double> and back later. So I’ve used AudioBuffer<double> before.

But if I just update the processBlock definition in PluginProcessor.h and PluginProcessor.cpp from AudioBuffer<float> to AudioBuffer<double>, I get a failed build with error messages, which usually means I have to dig deeper and change module code.

Is there not a way to “natively” receive doubles and output doubles, without them ever being truncated to floats somewhere and (possibly, if that really is the cause) generating this weird static/noise and those filter discrepancies?

I want to avoid “hacking” around in the JUCE modules, since every update would then endanger my modifications…

What version of JUCE are you on? AudioProcessor has (in JUCE 5.3 at least):

virtual bool supportsDoublePrecisionProcessing() const
virtual void processBlock (AudioBuffer<double>& buffer, MidiBuffer& midiMessages)

I downloaded and installed it yesterday. The code in the screenshot below was generated from scratch by Projucer not 24 hours ago.

(My project in the screenshot is called “Rediscovering”, since that’s what I’m currently doing with JUCE.)

The Projucer does not generate code for the double-precision processing stuff by default. The float processing method is mandatory to have; the double method can optionally be left out.


Ah, okay, I see, the AudioProcessor contains both a float and a double based processBlock method.

You say the float one is mandatory, that’s OK as long as I’m not forced to use it.
Do you know if/how I can get a Projucer created project to use the double based processBlock instead? I quickly scanned the comments in the juce_AudioProcessor.h file, but they don’t seem to mention that. (Or I didn’t find it yet.)

That’s not part of the Projucer settings. You have to add the needed methods manually in the code. You are basically forced to implement the float processing method too, because not all hosts support 64-bit float processing… (But obviously you can implement it by converting samples and so on…)

Okay, I see, I totally missed the “override” after the definition in the PluginProcessor.h file, that would’ve given me a clue.

Hm, but now that I’ve implemented the processBlock override with double, Reaper still uses the processBlock with float instead. And Reaper can definitely do 64-bit doubles, it works with the WDL plugin just fine.

I noticed the ‘supportsDoublePrecisionProcessing’ function in your screenshot a few posts up. Am I right in assuming that I have to somewhere create that logical fork, “if can do double, use the double method, if not then use the float method”? If so … where? :slight_smile:

(Thank you kindly for your patience and helpful replies, by the way! Much appreciated!)

You need to return true from your overridden supportsDoublePrecisionProcessing method. I think things will then work so that if the host supports 64 bit processing, the 64 bit processing method is called in the plugin, otherwise the 32 bit processing method is called.
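Roughly, something like this in PluginProcessor.h (a sketch with the standard JUCE override signatures, not your exact code):

```cpp
// Tell the host this plugin can process at 64-bit precision:
bool supportsDoublePrecisionProcessing() const override   { return true; }

// The float version stays mandatory (some hosts only run 32-bit processing):
void processBlock (juce::AudioBuffer<float>&  buffer, juce::MidiBuffer& midiMessages) override;

// Hosts that support 64-bit processing will call this one instead:
void processBlock (juce::AudioBuffer<double>& buffer, juce::MidiBuffer& midiMessages) override;
```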


Yes! That did the trick!

There are still slight differences compared to the WDL spectrum, but I think I like the JUCE (with doubles) spectrum even better.


Thank you so much for your help, this will give so much peace to my OCD-ridden mind.
You’ve made my week! :slight_smile:


Not sure why there should be a difference. Graphs are great, but those “unpure” bits are rather quiet!

Yes, definitely, at -120 dB or so they’re rather quiet. By themselves. At a few dB gain before a clipper. But if you’re dealing with +100 dB gain or more, looking at high-gain amp sims or weird distortion effects here, these can quite easily mess with the audible spectrum, especially when used on several tracks.
Clipping on the drums, clipping on the vocals, clipping on several guitars, clipping on the bass, clipping on the master, distorting the guitars, distorting the bass, etc. (Not saying this is good mixing practice, but that’s what the kids tend to do these days. Thank you, Joey Sturgis.)
Having a bunch of artefacts on each track would contribute to a noticeable change in timbre/tone, methinks. I’d rather not have them.

While the solutions here are good, nobody’s explained why this is happening.

I’m pretty positive this is occurring because of the difference in floating-point signal-to-noise ratio (SNR) between your 32-bit and 64-bit signals. This is further backed up by the fact that using native JUCE 64-bit processing cleared it up and made it more like WDL, which I presume is also 64-bit insofar as possible.

Using the formula for floating-point SNR, 32-bit floats will have an SNR threshold of 138.46 dB and 64-bit doubles will have a threshold of 313.04 dB. As a slight aside, both of these are below the threshold of hearing at sane amplifier levels, which is where the “doubles sound better than floats” pseudoscience some audio engineers swear by falls apart (that said, however, it can make a difference if you are doing some sort of super long IIR filter where the feedback could get out of hand or you’re doing extreme amplification).
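(For reference, those figures line up with the usual rule of thumb SNR ≈ 20·log10(2^m) ≈ 6.02·m dB, taking m as the explicit mantissa bits of the format: 6.02 × 23 = 138.46 dB for single precision and 6.02 × 52 = 313.04 dB for double.)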

The tiny peaks in your spectrograms are just that: extremely low-level noise brought about by differing floating-point accuracies. The peaks are more pronounced at higher dB levels due to the differing SNR limits.

I’m mostly self-taught in DSP, so if I got anything wrong someone more knowledgeable please correct me.

I also think the claim that doubles are the more performant, “native” choice on 64-bit systems is wrong.

A 64-bit CPU can transfer a 64-bit value from one internal functional unit to another in one clock cycle, because there are 64 electrical “wires” between the functional units. A 32-bit CPU would need at least two cycles to transfer a double, since it has to move it in two halves, while a float could be transferred in one cycle. But the other way round, transferring a float on a 64-bit architecture doesn’t take more cycles than a double; it just doesn’t use all the resources available. So a lot of operations are simply equally fast, and “native” double maths is not faster.
Now, this has nothing to do with memory efficiency. A float vector is packed just as tightly in memory as a double vector, but each element is only half the size, so in terms of memory usage float is actually the more efficient of the two.
Then there is the special case of SIMD vector instructions, which DSP code typically uses a lot. For these, a CPU has functional units that can load more than 64 bits in parallel and then apply the same operation to all of the values (for example, multiply them all by 0.5). A 256-bit SIMD register, say, can hold 4 doubles or 8 floats, and the CPU can then do a multiplication on all of them at once. So a single instruction computes either 4 doubles or 8 floats at the same time, which again makes float the more efficient choice compared to double.
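Purely as an illustration (assuming an AVX-capable CPU; this isn’t JUCE or WDL code, just a toy gain loop, with any leftover samples at the end ignored), the same 256-bit register holds 8 floats but only 4 doubles:

```cpp
#include <immintrin.h>

// Multiply a buffer of floats by 0.5: 8 samples per instruction.
void gainFloats (float* x, int n)
{
    const __m256 g = _mm256_set1_ps (0.5f);
    for (int i = 0; i + 8 <= n; i += 8)
        _mm256_storeu_ps (x + i, _mm256_mul_ps (_mm256_loadu_ps (x + i), g));
}

// The same operation on doubles: only 4 samples per instruction.
void gainDoubles (double* x, int n)
{
    const __m256d g = _mm256_set1_pd (0.5);
    for (int i = 0; i + 4 <= n; i += 4)
        _mm256_storeu_pd (x + i, _mm256_mul_pd (_mm256_loadu_pd (x + i), g));
}
```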

Note that this is a bit superficial, but I think you get the point :wink:

Just to show I’m not talking totally out of my backside :slight_smile: here’s one of the sources I was referring to. There were others, since I tend to temporarily obsess about topics until I’m fairly certain why I’m sticking with one method (but then forget later), but this is one of the first I found.


And the one it’s supposedly duplicating:

The correct answer for any performance question is “don’t assume anything, always measure it”.

But as a general rule of thumb for audio-type work, it’s pretty safe to assume that floats are probably faster, and never slower, than doubles. The compiler will usually vectorise them into fewer operations, and when you’re reading float data from memory it’ll be twice as fast. There could be edge cases where doubles come out ahead, but that’s unlikely in audio-type code.
