How does the JUCE_UNDENORMALISE macro work?

Hi,

I tried the #define JUCE_UNDENORMALISE(x) { (x) += 0.1f; (x) -= 0.1f; } macro to handle denormals in various contexts (C++ and WebAssembly), using it in recursive computations. In C++ it works when compiled without -ffast-math (I guess the +0.1f followed by -0.1f gets optimized out with -ffast-math). It also works when used in a WebAssembly context.
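For context, here is roughly how I am using it: a minimal sketch with a made-up one-pole filter (the OnePole struct and its member names are just illustrative, not real JUCE code).

#define JUCE_UNDENORMALISE(x) { (x) += 0.1f; (x) -= 0.1f; }

struct OnePole
{
    float z1 = 0.0f;                 // recursive state (previous output)

    float process (float input, float coeff)
    {
        z1 = input + coeff * z1;     // feedback path decays towards zero on silence
        JUCE_UNDENORMALISE (z1);     // keeps the state from lingering in the denormal range
        return z1;
    }
};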

But the question is: how does this work in the first place?

Thanks.

(PS: We use the #define AVOIDDENORMALS _mm_setcsr(_mm_getcsr() | 0x8040) CPU configuration trick on Intel to force FTZ, but this is not the point of the question…)
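For reference, the 0x8040 constant combines two MXCSR bits, FTZ (0x8000, flush-to-zero) and DAZ (0x0040, denormals-are-zero), so a minimal standalone version of that snippet is:

#include <xmmintrin.h>   // _mm_getcsr / _mm_setcsr (SSE, x86/x64 only)

// 0x8000 = FTZ (flush results to zero), 0x0040 = DAZ (treat denormal inputs as zero)
#define AVOIDDENORMALS _mm_setcsr (_mm_getcsr() | 0x8040)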

Check out this:

This is what we’ve used lately, and it looks good in our tests on Intel-based machines (I admit I didn’t test AMDs).
Keep in mind that some plug-in formats (AU and AAX) should already turn the flags on in their own code, but from my tests that didn’t hurt anything.

TL;DR

Be aware that there CAN be issues with setting and forgetting the flags; they should always be restored to the state you found them in if you’re mucking with them in the render thread.
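A hypothetical RAII guard (the name ScopedFlushToZero is mine, not a library class) is one way to guarantee the restore happens when the callback returns; as far as I know JUCE also ships juce::ScopedNoDenormals, which does essentially this for you.

#include <xmmintrin.h>   // _mm_getcsr / _mm_setcsr (SSE, x86/x64 only)

// Enables FTZ/DAZ for the current scope and restores the previous
// MXCSR flags when the object goes out of scope.
struct ScopedFlushToZero
{
    ScopedFlushToZero() : savedCsr (_mm_getcsr())
    {
        _mm_setcsr (savedCsr | 0x8040);   // set FTZ (0x8000) and DAZ (0x0040)
    }

    ~ScopedFlushToZero()
    {
        _mm_setcsr (savedCsr);            // put the host's flags back exactly as found
    }

    unsigned int savedCsr;
};

// Usage in a render callback:
//     void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi) override
//     {
//         ScopedFlushToZero ftz;   // flags restored automatically on every return path
//         ...
//     }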