Xcode 13 appears to break 'disableDenormalisedNumberSupport'

Just a heads up here… when compiling with Xcode 13 I hit performance issues that looked very much like a denormals problem (processing time increasing dramatically for my FX algorithms as the audio level tapers out). I confirmed with some profiling code that, for example, my reverb algorithm started taking over 10x longer than usual to process each block.
(Note that the main process block of my audio code calls the JUCE function disableDenormalisedNumberSupport() to turn off denormals. I verified both that the call is actually reached, and, via areDenormalsDisabled(), that the flag is set.)

Interestingly, this issue only appeared in release builds, and it went away when I downgraded to Xcode 12.5.
I’m not sure if there is something different about the C++ compiler in Xcode 13, but perhaps it is not setting the FPU control registers correctly (testing on an Intel Mac, btw).

I’m not sure how I could confirm the registers are set correctly, and sorry that I don’t know whether the compiler version changed in Xcode 13. I spent a week just debugging my code because I thought I’d introduced an issue, but after a ‘quick’ (the download took ages) downgrade to Xcode 12.5 the problem went away with the exact same code base, so I’m not in a hurry to update again.

I was able to reproduce this in Xcode 13. In release builds the MXCSR register is not being set correctly in FloatVectorOperations::setFpStatusRegister(), so denormals aren’t being disabled. The reason is that when any optimisations are turned on, the fpsr_w variable is being optimised away. Adding the volatile keyword to that variable fixes things, and I’ve pushed that change to develop here:

It’s interesting that this only occurs in Xcode 13, though. Presumably the version of Clang shipped with this Xcode has changed its optimisations in a way that affects this. I’m going to dig into it a bit more and see if I can figure out what has changed.


Awesome… good to know I am not going insane 🙂


I did some more investigation on this and confirmed that the issue is only present in AppleClang (the modified version of clang that ships with Xcode) and not in vanilla LLVM clang. I’ve submitted a bug report to Apple and will update this thread if I hear back.


Thanks heaps for finding and fixing this! Good catch, this could have led to nightmarish debug sessions. It does make me wonder what else is broken in AppleClang/Xcode13 currently.

Thank you @ed95 and @wavesequencer for your time spent on this. I was just about to reimage a spare MacBook Pro with Big Sur, and the Xcode version currently offered in the App Store is 13.0. Would you say go with Xcode 13.0, or skip it for now and manually download 12.5.1?


I’ve been using Xcode 13 for day-to-day development for a while now and haven’t noticed any other regressions, though this doesn’t inspire confidence. If you are a part of the Apple Developer program it’s possible to grab older Xcode releases from the download archive if you are concerned about stability.

Hi @ed95, I was just wondering whether there has been any update back from Apple on this issue, and what you would recommend.
Edit… sorry… I assume that your push to develop with the volatile keyword got included in the latest versions of JUCE? If so, it should be safe to upgrade to Xcode 13.

How would I even choose to use ‘vanilla LLVM clang’ - and would that be OK for M1 based Mac builds?
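In case it helps with the question above: one common route to vanilla LLVM clang on macOS is Homebrew’s llvm package. The paths below are Homebrew’s defaults and an assumption about your setup; vanilla clang can target arm64 (M1) Macs, though Xcode’s toolchain remains the usual choice for signing and packaging.

```shell
# Install vanilla LLVM clang alongside Apple's toolchain (assumes Homebrew)
brew install llvm

# Homebrew deliberately leaves it unlinked so it doesn't shadow AppleClang;
# point your build at it explicitly. The path shown is the Apple Silicon
# default; Intel Macs use /usr/local/opt/llvm instead.
export CC=/opt/homebrew/opt/llvm/bin/clang
export CXX=/opt/homebrew/opt/llvm/bin/clang++

# Check which compiler you got: vanilla clang reports "clang version x.y.z",
# while Apple's reports "Apple clang version ..."
"$CXX" --version
```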
