How do I set audio output bit depth, or why can't I?


Years ago someone asked the same question. The answer at the time was that JUCE autoselects the best format for the relevant audio device. There’s still no setBitDepth.

I know my external USB DAC/amp works with 24-bit data at best, but if I recall from some ALSA experimentation, it ‘accepts’ 32-bit floats, which means either the truncation happens somewhere (I’d rather do it myself) or there’s unnecessary down-conversion. What does JUCE do here, and what should I do?


TBH you can just ignore the bit depth. Only a tiny number of audio APIs actually let you change it, and even fewer devices support multiple depths. The JUCE classes will always use the highest bit depth available, and since you send the data as floats anyway, it shouldn’t matter. (And nobody can actually tell the difference, of course…)


I can understand the usefulness of floats for processing, but how are they truncated and how do I control dithering?


This stuff very rarely matters. What’s your actual reason for wanting to do this?


I want to write a synthesis/composition program for myself. Dithering most certainly matters when writing audio files, and bit-perfect music players need to know the sound card’s bit depth too, don’t they?

Again, how are floats truncated?

Thanks for trying to help.


No, why would they?

The conversion happens in the audio driver, and almost all drivers will probably just truncate to 16/24 bits. In the JUCE code where we have to do this, we also just truncate, because no human can tell the difference.

If you want to add extra dithering, you can do that yourself before writing the buffers, but honestly, it’s a waste of effort.
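For what it’s worth, “do it yourself before writing the buffers” could look something like this: a TPDF (triangular) dither of ±1 LSB added before the truncation to 16 bits. This is a minimal sketch under my own assumptions (RNG choice, scaling, clamping), not anything JUCE itself does:

```cpp
#include <cstdint>
#include <random>

// Sketch: apply TPDF (triangular) dither to a float sample in [-1, 1]
// before truncating to a signed 16-bit integer. The RNG, scale factor
// and clamping here are illustrative assumptions.
int16_t ditherAndTruncateTo16 (float sample, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist (-0.5f, 0.5f);

    // The sum of two independent uniforms gives a triangular PDF
    // spanning +/- 1 LSB, which decorrelates the quantisation error.
    float dither = dist (rng) + dist (rng);

    float scaled = sample * 32767.0f + dither;

    if (scaled >  32767.0f) scaled =  32767.0f;   // clamp to avoid overflow
    if (scaled < -32768.0f) scaled = -32768.0f;

    return (int16_t) scaled;  // plain float-to-int cast truncates toward zero
}
```

Whether the effort is worth it at 16 bits is exactly the debate in this thread; at 24 bits the dither sits far below any device’s noise floor.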


The only processing an audio driver should be doing in a pro audio setting is either mixing (conversion is necessary if the inputs have different sample rates or bit depths) or decompressing a lossless audio stream. What does bit perfect mean to you?


How are floats truncated? Do I need to care about endianness? From a performance perspective, knowing which bits are unused is great for minimizing branching in loops (phasers, for example), and from a correctness perspective it’s critical for getting fixed-point or integer-like processing right.


Seriously, none of the things you’re asking about matter at all.

Just focus on writing good pure-floating-point algorithms and don’t waste your time thinking about implementation details below the driver level.


So is your answer that floats are 32 bit IEEE-754 and the lowest 8 or 16 logical bits in the significand are truncated? Is this how JUCE handles it?

EDIT: Sorry — 754, not 752. And significand, not mantissa.


I will add that Csound doesn’t emphasize the dithering strategy either.


Nope. My answer is that I couldn’t give a rat’s arse about how floats get truncated. And I’ve never heard of an expert in the field ever suggesting it was of the slightest importance.

In the places where JUCE does it, we use plain old C++ float-to-int conversion, which is probably best defined as “whatever your compiler and CPU happen to do”.
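To make that concrete, “plain old C++ float-to-int conversion” means something like the following sketch (my own illustrative code, not JUCE’s actual implementation). Note that nothing is chopped off the float’s significand; the sample is scaled up and the cast simply discards the fractional part, truncating toward zero:

```cpp
#include <cstdint>

// Sketch of a plain float-to-int conversion at 24 bits.
// The sample is assumed to be in [-1, 1]; the scale factor
// and clamping are illustrative assumptions.
int32_t floatTo24Bit (float sample)
{
    float scaled = sample * 8388607.0f;           // 2^23 - 1

    if (scaled >  8388607.0f) scaled =  8388607.0f;   // clamp out-of-range input
    if (scaled < -8388608.0f) scaled = -8388608.0f;

    return (int32_t) scaled;  // fractional part discarded (truncated toward zero)
}
```

Since a 32-bit float already carries only 24 significand bits, at full scale this conversion loses essentially nothing — which is part of why nobody in this thread thinks the rounding mode matters audibly.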