I’ve had situations where a floating-point value moves towards a target, with clamping applied so it never overshoots the target. In that case comparing the float to the target exactly is fine, and you wouldn’t want an approximate check because it would report “arrived” while the value was still short of the target.
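A minimal sketch of that pattern (the function name is hypothetical): because the clamp returns the target value itself on the final step, exact equality is guaranteed to hold once the value arrives.

```cpp
#include <algorithm>

// Move value one step towards target, clamping so it never overshoots.
// Once the clamp fires, the result *is* target bit-for-bit, so == is safe.
float moveTowards (float value, float target, float step)
{
    if (value < target)
        return std::min (value + step, target);

    return std::max (value - step, target);
}
```

A loop like `while (value != target) value = moveTowards (value, target, 0.3f);` then terminates reliably, provided the step is large enough not to vanish in rounding against the current value.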
However, doing arbitrary arithmetic on a float and then depending on it hitting a specific value exactly is asking for trouble. Then again, given that programmers are supposed to know how floats work, I’m not sure why anyone would do this.
Yeah, from reading the first article I think that’s why a ULP-based tolerance argument for approximatelyEqual in JUCE would be useful, though I guess one could argue that the “disadvantage” discussed in the article would show up if developers didn’t understand how to use it properly. Used properly, though, it could allow for some very powerful approaches to signal output analysis, especially if you could leverage higher-precision comparisons in double-based and long double-based signal computations vs float. It might be hard to provide a single ULP default that makes sense for all templated types, however.
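For illustration, here’s one common way to sketch a ULP-based comparison for `float` (this is not JUCE’s implementation; the names are hypothetical). The idea is to map the float’s bit pattern onto a monotonic integer scale, so that the integer distance between two values counts the representable floats between them:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Map a float's bits onto a scale where integer distance == ULP distance.
static int64_t orderedBits (float f)
{
    uint32_t bits;
    std::memcpy (&bits, &f, sizeof bits); // bit-cast without aliasing UB

    // Negative floats sort in reverse bit order, so mirror them below zero.
    return (bits & 0x80000000u) ? -int64_t (bits & 0x7fffffffu)
                                : int64_t (bits);
}

static bool approximatelyEqualUlps (float a, float b, int64_t maxUlps)
{
    if (std::isnan (a) || std::isnan (b))
        return false; // NaN never compares equal to anything

    return std::llabs (orderedBits (a) - orderedBits (b)) <= maxUlps;
}
```

This also hints at the templating problem mentioned above: a `double` version needs 64-bit bit manipulation and a different sensible default, because one double ULP is vastly smaller than one float ULP at the same magnitude.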
The most surprising thing to me about float comparisons is that the x87 FPU registers are 80 bits wide, so they can hold values at higher precision. IIUC this means that a float processed in x87 registers and then stored back to memory effectively loses precision (though only relative to the wider intermediate result, not relative to what the float type itself promises).
But if you then do the same calculation with a new set of numbers, entirely in x87 registers, and compare the result with your previous one, the previously stored number doesn’t magically regain its precision, so the two could be different bit patterns. en.wikipedia.org/wiki/X87
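The x87 effect is hard to demonstrate portably today (x86-64 compilers default to SSE, and `long double` width varies by platform), but the same “a stored value doesn’t regain its bits” behaviour can be sketched one precision level down, with `double` standing in for the 80-bit register and `float` for the value stored to memory:

```cpp
// Analogy for the x87 effect, shifted down one precision level:
// 'double' plays the 80-bit register, 'float' plays the stored value.
bool roundTripLosesBits()
{
    double wide = 0.1 + 0.2;     // higher-precision intermediate ("register")
    float stored = (float) wide; // stored back at lower precision: bits lost
    double reloaded = stored;    // widening again does not recover them

    return reloaded != wide;     // different bit patterns when compared
}
```

Comparing `reloaded` against a freshly computed `wide` gives inequality for exactly the reason described above: the round-tripped value was rounded once more than the in-register one.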