I wrote a few things in generic C++ that aren’t bound to any specific framework like JUCE or WDL: a few helpful classes, like an “intelligent” SampleBuffer similar to AudioBuffer but with optimized manipulation methods (superfast sample shifting for FIR delay lines, etc.), a few filters, things like that.
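To give an idea of the shape of it, here’s a trimmed-down illustration, not the real class; names and details are simplified:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

class SampleBuffer
{
public:
    SampleBuffer (int numChannels, int numSamples)
        : channels ((std::size_t) numChannels,
                    std::vector<double> ((std::size_t) numSamples, 0.0)) {}

    double*       getWritePointer (int ch)       { return channels[(std::size_t) ch].data(); }
    const double* getReadPointer  (int ch) const { return channels[(std::size_t) ch].data(); }

    int getNumChannels() const { return (int) channels.size(); }
    int getNumSamples()  const { return channels.empty() ? 0 : (int) channels[0].size(); }

    // the "intelligent" part, e.g. shifting a channel right by n samples
    // (FIR delay line style) without reallocating:
    void shiftRight (int ch, int n)
    {
        auto& c = channels[(std::size_t) ch];
        std::rotate (c.rbegin(), c.rbegin() + n, c.rend());
        std::fill (c.begin(), c.begin() + n, 0.0);
    }

private:
    std::vector<std::vector<double>> channels;
};
```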
I’ve been tinkering with my independent framework in WDL for a while, simply because it compiles a lot quicker than JUCE here. But the development of WDL is so volatile, inconsistent and opaque that I’m not sure I’d want to put up with it for serious work.
So I included my framework in a blank JUCE audio plugin project and got going.
Since JUCE’s AudioBuffer is float based, but my “intelligent” SampleBuffer is double based (because WDL uses doubles), I wrote a set of for loops that import/export (by static_casting) between AudioBuffer/float and SampleBuffer/double.
Apart from the per-sample static_cast from float to double on import, and from double back to float on export from my SampleBuffer to the JUCE AudioBuffer, the processBlock of my JUCE plugin is absolutely identical (!) to the processDoubleReplacing of my WDL plugin.
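For reference, here’s roughly what that import/process/export round trip looks like; mySampleBuffer and doMyProcessing are placeholders for my own framework code:

```cpp
void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                juce::MidiBuffer&)
{
    const int numChannels = buffer.getNumChannels();
    const int numSamples  = buffer.getNumSamples();

    // import: per-sample static_cast float -> double
    for (int ch = 0; ch < numChannels; ++ch)
    {
        const float* in   = buffer.getReadPointer (ch);
        double*      work = mySampleBuffer.getWritePointer (ch);

        for (int s = 0; s < numSamples; ++s)
            work[s] = static_cast<double> (in[s]);
    }

    // identical to the body of processDoubleReplacing in the WDL build
    doMyProcessing (mySampleBuffer);

    // export: per-sample static_cast double -> float
    for (int ch = 0; ch < numChannels; ++ch)
    {
        const double* work = mySampleBuffer.getReadPointer (ch);
        float*        out  = buffer.getWritePointer (ch);

        for (int s = 0; s < numSamples; ++s)
            out[s] = static_cast<float> (work[s]);
    }
}
```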
The JUCE project includes the very same files, from the very same locations on my hard drive, that the WDL project uses. Both projects call the same filtering functions, the same clipping functions, the same stuffing and decimation functions, all with the samples in double format, all processing done in my own SampleBuffer class.
The only difference, again, is that in JUCE I have to cast from float to double before the actual processing, and from double back to float after it.
And yet, the WDL plugin generates a 100% pure output signal, but the JUCE plugin generates some sort of weird static or noise, plus there seem to be differences of several dB in the filtering. (Using identical filter code.)
Below are two SPAN screenshots of running an external 900 Hz sine through a simple oversampled clipper: zero-stuff, filter, +20 dB gain, soft clip, -8 dB gain, filter, decimate. Same oversampling amount, same oversampling filter arrangement, identical conditions inside both processing blocks. I didn’t adapt or change anything to work with JUCE- or WDL-specific classes; it’s all the same independent C++ code that would work in any other surrounding as well.
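In code terms, the gain/clip stage in the middle boils down to something like this (standalone sketch; the real oversampling filters and shaper live in my own framework, and tanh here is just a stand-in):

```cpp
#include <cmath>

inline double dBToLin (double dB) { return std::pow (10.0, dB / 20.0); }

inline double softClip (double x) { return std::tanh (x); } // placeholder shaper

void clipStage (double* samples, int numSamples)
{
    const double pre  = dBToLin (+20.0); // drive into the shaper
    const double post = dBToLin (-8.0);  // pull back down afterwards

    for (int i = 0; i < numSamples; ++i)
        samples[i] = post * softClip (pre * samples[i]);
}
```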
See how on the JUCE one (upper) there’s some sort of noise at the bottom of the spectrum, but it’s totally clean on the WDL shot (lower)?
See how on the JUCE one, the two right-most spikes are at roughly -122 and -138 dBFS, but on the WDL one they’re at -116 and -130? (If the scale on the right is cut off in the WDL shot, right-click and “view image”.)
(Before you ask: no, they’re not jumping and moving about. They keep their levels absolutely consistently in both plugins.)
So… how can that be? Can there really be such a significant difference in precision between float and double?
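For scale, here’s a quick standalone check of how big a single double→float round-trip error actually is, independent of either framework:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;
    double worst = 0.0;

    // one second of a 900 Hz sine at 48 kHz: measure the error
    // introduced by a single double -> float -> double round trip
    for (int i = 0; i < 48000; ++i)
    {
        const double x = std::sin (2.0 * pi * 900.0 * i / 48000.0);
        worst = std::max (worst, std::fabs (x - (double) (float) x));
    }

    // with a 24-bit float mantissa this should come out near -150 dBFS
    std::printf ("worst-case round-trip error: %.1f dBFS\n",
                 20.0 * std::log10 (worst));
}
```

If my math is right, a single cast round trip sits somewhere around -150 dBFS, well below the spikes in the screenshots, which is exactly why this has me confused.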
I’ve read that on 64-bit systems, real-time processing of doubles will probably be the more performant method anyway: doubles are supposedly “native” to the architecture, while floats get stored in double-sized slots anyway but have to be truncated first, so they don’t make the best use of the available memory and need additional handling, which can potentially even slow processing down.
I don’t know if that’s just a load of old codswallop or if there’s anything to it, but I read it somewhere on Stack Overflow.
But if there’s actually something to it, then why is JUCE float based and not double?
Just for backwards compatibility with non-64-bit systems? Or to be able to run on more “primitive” devices like embedded Linux boxes?
Is there any (simple) way of making a JUCE project natively double based instead of float, i.e. without having to hack JUCE module code? Just to check if that actually changes anything?
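From skimming the AudioProcessor docs, it looks like there might be an opt-in double-precision path, something like the following (untested on my end, and presumably the host has to support it too):

```cpp
class MyProcessor : public juce::AudioProcessor
{
public:
    // tell the host we can process 64-bit audio
    bool supportsDoublePrecisionProcessing() const override { return true; }

    // float path, used by 32-bit hosts/sessions
    void processBlock (juce::AudioBuffer<float>& buffer,
                       juce::MidiBuffer& midi) override;

    // double path: if the host opts in, the samples arrive as doubles
    // and the casting loops go away entirely
    void processBlock (juce::AudioBuffer<double>& buffer,
                       juce::MidiBuffer& midi) override;

    // ... rest of the AudioProcessor boilerplate
};
```

Can anyone confirm whether that’s the intended route?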