Which is faster: a global float or #define?

Hello,
As I understand it, when a program uses variables or const “variables” at runtime, it always has to jump to some place in memory, and finding and reading that memory block takes some time.
So I wonder: isn’t it faster to use #define?
As I understand it, #define makes an alias for an rvalue. So, for example, if I define something like this:
#define SOME_VALUE 10.0f
Then to the program it doesn’t matter whether I use SOME_VALUE or 10.0f; it’s exactly the same. It doesn’t need to jump to any memory, so my theory is that it should be faster.

Am I right?
If so, why do programmers dissuade people from using #define macros?

The compiler would have to be really stupid not to see that a const variable’s value can, in many cases, be inserted directly into the machine code. Without optimizations enabled that folding might not happen, though.
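
For instance, here is a minimal sketch of that idea (file and names invented for illustration), easy to check with clang -O2 -S:

/* fold.c -- illustrative only */
static const int ANSWER = 42;

int add_answer(int x) {
    /* At -O2 the 42 is encoded directly in the add instruction;
       there is no load from ANSWER's address. */
    return x + ANSWER;
}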

Anyway, if you are really interested in this, why not test and benchmark it yourself?

You are right, I will try that.

When compiled with the clang compiler, the generated machine code is the same whether a global const float or a define is used (with a simple test case, at least). The float ends up in memory either way and has to be loaded from there at runtime. In practice it doesn’t matter, because the CPU has memory caches. Edit: the generated machine code is also the same when the value of the float is written directly in the source code instead of using the global variable or the macro.
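
One likely reason all three variants look identical: common x86 SSE floating-point instructions take no immediate operands, so even an inline 10.0f literal is placed in a read-only constant pool and loaded from memory. A sketch of such a test case (file and function names are my own, not the poster’s); compile with clang -O2 -S and diff the assembly of the three functions:

/* float_test.c -- hypothetical reproduction of the test */
#define VALUE_DEFINE 10.0f
static const float VALUE_CONST = 10.0f;

float via_define(float x)  { return x * VALUE_DEFINE; }
float via_const(float x)   { return x * VALUE_CONST; }
float via_literal(float x) { return x * 10.0f; }

/* All three multiply by a constant loaded from a read-only
   literal pool; none of them reads a named variable at runtime. */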

Because once you use macros for simple constants in your code, you are soon tempted to use them as little “functions” (like the infamous MIN and MAX macros in old code bases) and more complicated things, and you end up with a mess.
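
A classic example of how a function-like macro goes wrong (my own sketch, not code from the thread):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))   /* the infamous macro */

int main(void) {
    int i = 6, j = 5;
    int m = MAX(i++, j);  /* expands to ((i++) > (j) ? (i++) : (j)) */
    printf("m = %d, i = %d\n", m, i);  /* prints m = 7, i = 8: i++ ran twice */
    return 0;
}

A plain function (or std::max in C++) evaluates its arguments exactly once and avoids this entirely.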
