ScopedNoDenormals issue on Mac but not on PC?


I’ve got a plug-in that generates denormals, and I want to disable them. On Windows, using the ScopedNoDenormals object improves performance greatly. On OSX, however, the denormal problems persist with this plug-in.

I did an experiment, using a test I found in another thread (“ScopedNoDenormals issue”) on this forum from a few months ago. I ran this code:

   float verysmall = std::numeric_limits<float>::min(); // min does not include denormal numbers!
   verysmall *= 0.1f; // force number into the denormal range
   if (verysmall != 0.f)
      printf("Denormals detected!\n");

…before and after creating a ScopedNoDenormals object. On Windows, the program outputs “Denormals detected!” before the ScopedNoDenormals object is created, but not afterwards. In addition, I read the MXCSR register with _mm_getcsr() to confirm that the DAZ and FTZ bits are set, and they are.

On OSX, running the same code, the DAZ and FTZ bits still show as set, but the math test fails both before and after the ScopedNoDenormals object is created. I tested this on both an Intel Core i7 from 2012 running macOS 10.13.3 and an Intel Xeon from 2010, and I got the same results on both machines. The plug-in is compiled to support OSX 10.9 and up.

Does anyone know why this could be happening? Is it possible that there’s some Jucer flag that’s causing the plug-in not to use SSE instructions, or to be compiled in some sub-optimal way? Can anyone on this forum run the same test on their Mac and tell me if they see different results?



I’ve added a second test:

   float verysmall = std::numeric_limits<float>::denorm_min();
   if (verysmall != 0)
      printf("Denormals detected #2!\n");

Along the way I discovered something very interesting that I cannot explain. I was running a Debug build on Windows. When I added this second test, I got the same results: both tests detected denormals before ScopedNoDenormals, and neither did afterwards. However, when I compiled in Release mode, both tests detected denormals both before and after ScopedNoDenormals.

On OSX, I had been compiling a Release build, so I tried a Debug build. Incredibly, in a Debug build on the Mac, neither test detects denormals after creating the ScopedNoDenormals object.

So, this is NOT a Mac vs PC problem. This is a Debug vs Release problem! I’m truly baffled. Any advice? Why would denormals be successfully removed in a Debug build but not in a Release build?



I’d guess that in Release mode, with full optimization, the compiler evaluates the test expression at compile time. That constant folding knows nothing about the runtime FTZ/DAZ flags, so the baked-in result still shows a denormal no matter what ScopedNoDenormals sets. The solution would be a test the compiler cannot fold away.


Why would denormals be successfully removed in a Debug build but not in a Release build?

My guess is that the compiler sees the object instance isn’t used after construction and elides it as unnecessary. Check where the object is instantiated in the Release build and verify that the constructor is actually called.


A better check is to call std::fpclassify() at a spot where you expect denormals to occur (such as the tail of an exponential decay).


For the record, and to close this post: in both Debug and Release mode, I’ve now been able to verify that denormals are in fact flushed to zero in actual mathematical calculations. So I was just thrown by these tests working in Debug but failing in Release. Thanks for the bandwidth.