roundToInt() issue

Hi,

I'm having problems getting some parts of my plugin's GUI to display correctly in a specific build.
Once again it looks like roundToInt() is causing the trouble, depending on the build settings.

I've run some tests with the Juce Demo from the latest tip (modules branch).
In the Juce Demo project I changed the Floating Point Model setting from /fp:precise to /fp:fast (Properties => C/C++ => Code Generation).
I also added a new platform (x64) and compiled the Release configuration.
The Juce Demo looks like this…

[attachment=0]juce_bug.jpg[/attachment]

All the missing stuff (e.g. the close button) is present but not visible.
Compiled with VS2008, juce tip from git (modules), Windows 7, CoreQuad.
The same build (x64, Release) runs without any problems when the floating-point model is /fp:precise.

I've found a sort of solution. In juce_MathFunctions.h, where roundToInt() is declared, you added some pragma directives just before and after the function.
If you modify them like this:

#if JUCE_MSVC
  #pragma optimize ("t", off)
  #pragma float_control (precise, on, push)
#endif

[...]
inline int roundToInt (const FloatType value) noexcept
[...]

#if JUCE_MSVC
  #pragma optimize ("", on)  // resets optimisations to the project defaults
  #pragma float_control (pop)
#endif

then /fp:fast doesn't cause any problems in the x64 Release build.
All it does is switch the model to precise just before roundToInt() and restore the previous settings just after.
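
For anyone who wants to verify this in their own build, here's a minimal sanity check - just a sketch, and it assumes the JUCE headers are included and the juce namespace is visible:

#include <cassert>

// Quick sanity check for roundToInt() under the chosen /fp: model.
// The test values sit away from .5 ties, so the expected results are
// unambiguous whatever tie-breaking rule the implementation uses.
static void roundToIntSanityCheck()
{
    assert (roundToInt (2.3)  == 2);
    assert (roundToInt (2.7)  == 3);
    assert (roundToInt (-2.3) == -2);
    assert (roundToInt (-2.7) == -3);
    assert (roundToInt (1000000.4) == 1000000);
    assert (roundToInt (0.0)  == 0);
}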

BTW, the issues with the roundToInt() implementation seem to come back over and over again, and they have always been connected with CPU optimisation settings, the FPU model, etc.
I know this kind of implementation was meant to make things faster, but nowadays almost every CPU has SSE2, and a plain (int) (…) cast is compiled to SSE2 instructions, which are very efficient.
I've run some dirty tests replacing the current implementation with something very ordinary (I know), just to check…

  #define round(d)          ( (int) ((d) >= 0 ? (d)+0.5 : (d)-0.5) )

and with SSE2 enabled it seems to be about 10% faster on my machine.
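
To show what I mean by the SSE2 conversion, here's a rough sketch using an SSE2 intrinsic (just an idea, not a proposal for the actual JUCE code - note that cvtsd2si follows the MXCSR rounding mode, which defaults to round-to-nearest-even, so ties behave differently from the macro above):

#include <emmintrin.h>   // SSE2 intrinsics

// Rounds a double to the nearest int with the SSE2 cvtsd2si instruction,
// using the current MXCSR rounding mode (round-to-nearest-even by default).
inline int roundToIntSSE2 (const double value) noexcept
{
    return _mm_cvtsd_si32 (_mm_set_sd (value));
}
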
The current implementation relies on the in-memory representation of the floating-point value and a dirty trick on top of it. I'm afraid that's not the safest approach, especially when it causes conflicts like this from time to time.
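
For anyone who hasn't seen it, the kind of trick I mean looks roughly like this - a simplified sketch from memory, not the exact JUCE code:

// The classic "magic number" rounding trick: adding 1.5 * 2^52 forces the
// rounded integer into the low 32 bits of the double's in-memory
// representation (little-endian layout shown here, as on x86/x64).
inline int roundViaBitTrick (const double value) noexcept
{
    union { int asInt[2]; double asDouble; } n;
    n.asDouble = value + 6755399441055744.0;   // 1.5 * 2^52
    return n.asInt[0];                         // low word on little-endian
}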

What do you think?

Cheers,
Przemek

Thanks for the heads-up on that pragma stuff - you seem to have found the setting that I was looking for but couldn’t find! I guess the optimize “t” stuff isn’t necessary if you use the float_control setting.

When I originally wrote that code, years ago, it was certainly the fastest way to do the conversion, but since then CPUs and compilers have moved a lot, so you’re right: it may be time to retire it…

Yes, I don't think it's necessary either. You added those pragmas after this post: http://www.rawmaterialsoftware.com/viewtopic.php?f=3&t=7965
But the problem there was with the /O1 and /O2 switches, which 'optimize ("t", off)' has nothing to do with - it just turns "Favour fast code" off for a while, and I don't remember ever running into any problems with the "Favour fast code" optimisations.

Yes, I know this code was the best solution a few years ago - a plain cast was terribly slow in those days - but now I'd be more worried about the problems it can lead to than about the speed-up it can sometimes give.
I agree that it's time to retire it and replace it with something safer and more flexible.

BTW, to check this I grabbed the latest modules branch and it's looking very good. Great job Jules!