Float vs double?

The JUCE library uses floats almost everywhere in its code. What is the reason behind that?

Are floats faster than doubles, even on modern computers? Otherwise I would have thought it better (higher precision) to use doubles.

Why would it be better to use doubles? What needs such high precision?

Well, my thinking is: higher precision is always better than lower precision, as long as there is no performance penalty.

I guess you cannot hear the difference between an audio stream encoded as floats vs doubles. But if you want to perform mathematical calculations on the stream, there are cases where doubles give you better results. You can of course cast to double before doing the calculations.
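
For instance, here's a minimal sketch of what I mean (the function name is my own invention, not a JUCE API): accumulate in double while the buffer itself stays float.

```cpp
#include <cmath>
#include <cstddef>

// Sketch: compute the RMS of a float buffer, but accumulate in double
// so the running sum doesn't lose precision over long buffers.
float bufferRMS (const float* samples, size_t numSamples)
{
    if (numSamples == 0)
        return 0.0f;

    double sumOfSquares = 0.0; // double accumulator: each sample is widened before the multiply-add

    for (size_t i = 0; i < numSamples; ++i)
        sumOfSquares += (double) samples[i] * (double) samples[i];

    return (float) std::sqrt (sumOfSquares / (double) numSamples);
}
```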

In general floats are faster, mainly because, at half the size, they give better cache performance if you have arrays of them. And they're probably faster to pipeline on many CPUs, although this will vary.

Basically: if something doesn't need double precision, then floats are a better choice, because they may be faster, and will never be slower. Or use a template and keep your options open.
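
Something like this (just a sketch, not JUCE code) keeps the sample type open:

```cpp
// Processor templated on sample type, so the same code can be
// instantiated for float or double as needed.
template <typename FloatType>
void applyGain (FloatType* samples, int numSamples, FloatType gain)
{
    for (int i = 0; i < numSamples; ++i)
        samples[i] *= gain;
}

// Instantiate whichever precision you need:
//   applyGain<float>  (floatBuffer,  numSamples, 0.5f);
//   applyGain<double> (doubleBuffer, numSamples, 0.5);
```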

If you're talking about an audio context: additionally, since a lot of the AudioBuffer code is vectorised (and you should be using FloatVectorOperations in your own code too), *most common architectures* can process 4 floats at a time but only 2 doubles. Take a look at _mm_mul_pd and _mm_mul_ps for example.

** A disclaimer here that this is very architecture-dependent. For example, Intel Xeons can process 8 floats or 4 doubles IIRC.
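
To make the width difference concrete, here's a small sketch using SSE2 intrinsics (assuming an x86 target; the function names are mine):

```cpp
#include <immintrin.h>

// One 128-bit SSE register holds 4 floats but only 2 doubles,
// so a single multiply instruction does twice the work in float.
void mulFloats4 (const float* a, const float* b, float* out)
{
    __m128 va = _mm_loadu_ps (a);             // load 4 floats
    __m128 vb = _mm_loadu_ps (b);
    _mm_storeu_ps (out, _mm_mul_ps (va, vb)); // 4 multiplies in one instruction
}

void mulDoubles2 (const double* a, const double* b, double* out)
{
    __m128d va = _mm_loadu_pd (a);            // load 2 doubles
    __m128d vb = _mm_loadu_pd (b);
    _mm_storeu_pd (out, _mm_mul_pd (va, vb)); // only 2 multiplies per instruction
}
```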

Thanks for the information :-)

http://docs.oracle.com/cd/E19422-01/819-3693/ncg_goldberg.html

Gives a good idea of how to estimate required precision as well as the primary error sources.
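
As a quick illustration of the kind of error source that paper covers, naive summation in float drifts visibly where double doesn't (this is just a demo, not production code):

```cpp
#include <cstdio>

// Sum 0.1 ten million times. In float, once the running total is large,
// each 0.1 falls below the representable step size and rounds badly.
int main()
{
    float  floatSum  = 0.0f;
    double doubleSum = 0.0;

    for (int i = 0; i < 10000000; ++i)
    {
        floatSum  += 0.1f;
        doubleSum += 0.1;
    }

    std::printf ("float:  %f\n", floatSum);  // drifts visibly from 1000000
    std::printf ("double: %f\n", doubleSum); // stays very close to 1000000
}
```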

Back in the dark ages, say the late '70s and early '80s, the rule of thumb was: if you absolutely can't use integers, go for a double, because you're incurring most of the software overhead anyway. But with modern platforms the rules have changed. Not too long ago I showed a young fellow transitioning from firmware to desktop programming that his knee-jerk use of shifts for division and multiplication by powers of 2 actually resulted in slower code than if he had just used floats throughout on the target platform.

Jules' explanation of his reasoning for floats above is pretty sound, and basically the reasoning that was used at Apple when so many of their services were defined.