32/64-bit models


#1

Hi Jules,

Just a suggestion: JUCE’s functions use int and unsigned, and those types stay 32 bits wide even when compiled for 64-bit. Wouldn’t it be a good idea to use normalised, portable types?
For example “Int” and “UInt”:

#if JUCE_64BITS
    typedef int64  Int;
    typedef uint64 UInt;
#else
    typedef int      Int;
    typedef unsigned UInt;
#endif



Of course you can always use predefined sizes for your data structs etc.: int32, uint64, int8…
I do this in all my libs and it works perfectly: just recompile, and your 32-bit code is now 64-bit and vice versa.
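To make the suggestion concrete, here is a minimal compilable sketch of that scheme. It uses the standard `<cstdint>` names rather than JUCE’s own, and the `SampleHeader` struct is purely hypothetical, just to show fixed-size types living alongside the word-sized alias:

```cpp
#include <cstdint>

// Sketch of the suggested scheme: the "natural" integer follows the
// machine word, detected here via the pointer width.
#if INTPTR_MAX == INT64_MAX
typedef std::int64_t  Int;
typedef std::uint64_t UInt;
#else
typedef std::int32_t  Int;
typedef std::uint32_t UInt;
#endif

// Wherever binary layout matters, the fixed-width types stay explicit
// (hypothetical struct, for illustration only):
struct SampleHeader
{
    std::int32_t  channels;
    std::uint64_t totalSamples;
};
```

On a 64-bit build `Int` becomes 8 bytes; on a 32-bit build it stays 4, which is the "recompile and it adapts" behaviour described above.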

Cheers


#2

I don’t understand what problem you think this would solve?

Wherever I’ve needed a specific size, then I’ve used int32, int64, etc. Otherwise, it’s perfectly ok to use “int”. That’s why it exists. What would be the point of using a typedef?


#3

[quote=“jules”]I don’t understand what problem you think this would solve?

Wherever I’ve needed a specific size, then I’ve used int32, int64, etc. Otherwise, it’s perfectly ok to use “int”. That’s why it exists. What would be the point of using a typedef?[/quote]

Who said there’s a problem? A 64-bit processor can handle 32-bit types, but if you have a 64-bit processor, why not use it? It computes 64-bit types as fast as 32-bit ones, so what’s the benefit of throwing away 32 bits of range?
When 32-bit processors like the 386 arrived, did you keep using 16-bit types?


#4

Good luck calling malloc (int64 (pow (2, 63)-1));


#5

Years ago your reply would be: “Good luck calling malloc(int32(pow(2, 31)-1))” :smiley:


#6

LOL true.


#7

[quote]LOL true.[/quote]

Déjà vu! :smiley:
A few years ago I’d have said “Bah! 32 bits and 4GB of RAM is enough for any app”. But in the last 18 months I’ve worked on two projects that use 16 and 24GB of RAM. I used my libs without any modification and had zero problems, because I wrote them using portable types; otherwise I’d probably have needed several weeks to port them to 64-bit, plus testing.
So, in my experience, making the types portable is the way to go. 64-bit computing is here to stay…


#8

I already use larger types in places where there’s a chance that it might be important. For example, size_t for indexes, int64 for stream positions, etc.

But in the vast majority of code, the size of an integer doesn’t matter. Are you suggesting that I should complicate code like this:

for (int i = 0; i < 100; ++i) doSomething (i);

…by using non-standard integer types?


#9

Yes: mixing 32-bit and 64-bit types is a really nice source of very subtle bugs.

[quote=“jules”]But in the vast majority of code, the size of an integer doesn’t matter. Are you suggesting that I should complicate code like this:

for (int i = 0; i < 100; ++i) doSomething (i);
[/quote]

Yes, it does matter! If you think the only problem you can run into using 32-bit types in a 64-bit app is with large arrays of data, you are totally wrong.
There are tons of cases where code will work on 32-bit and crash on 64-bit. Just a couple of examples:

int x = -2; 
unsigned y = 1; 
int d[10] = { 1, 2, 3, 4, 5, ... }; 
int *p = d + 3; 
p = p + (x + y);    // Invalid pointer on 64-bit platform 
printf("%d\n", *p); // Access violation on 64-bit 
    
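The failure in that first example comes from C’s usual arithmetic conversions, and it can be demonstrated safely without dereferencing anything. This is my own sketch of the mechanism, not code from the post above:

```cpp
#include <cstdint>

// Adding int and unsigned yields unsigned: -2 + 1u is not -1 but
// 0xFFFFFFFF, because the int operand is converted to unsigned first.
std::uint32_t mixedSum (int x, unsigned y)
{
    return x + y;   // computed entirely in unsigned 32-bit arithmetic
}

// On a 64-bit platform that unsigned value is zero-extended when used
// as a pointer offset: a huge positive index instead of the intended -1.
std::int64_t offsetOn64Bit (int x, unsigned y)
{
    return (std::int64_t) (x + y);  // 4294967295 rather than -1
}
```

On a 32-bit platform the pointer arithmetic happens to wrap modulo 2^32 and lands where the author expected, which is exactly why the bug stays hidden until the 64-bit build.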

// Another nice crash on 64-bit     
const size_t s = (...)
char *d = (...)
char *e = d + s;
for(unsigned i = 0; i != s; i++)
{
  const int x = 1;
  e[-i - x] = 0;
}
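The second example breaks the same way: because `i` is unsigned, the index expression `-i - x` is evaluated in unsigned arithmetic. A small sketch of mine isolating just that index computation:

```cpp
#include <cstdint>

// With unsigned i, "-i - x" wraps modulo 2^32 instead of going
// negative: for i = 0, x = 1 the result is 0xFFFFFFFF, not -1.
// A 32-bit pointer wraps around and accidentally reaches e - 1;
// a 64-bit pointer is zero-extended and ends up ~4GB past the buffer.
std::uint32_t unsignedIndex (unsigned i, int x)
{
    return -i - x;
}
```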

And there are cases where your code works OK in debug mode and crashes in release (or vice versa), because the compiler decided to optimise a 32-bit var and handle it as 64-bit, etc., etc.
The best solution? Make the size of your integers coincide with the machine word: use 32-bit integer types in 32-bit apps and 64-bit integers in 64-bit apps. Safe, portable, and the best performance.

What? Are “int” and “long long” non-standard integer types???

Today, when you buy a new computer, you can be sure it has a 64-bit processor; keeping 32-bit types (“int”) as your base integer on both platforms is a very bad idea (and 128-bit processors are around the corner too…).
But hey, it’s your library! I only gave you a suggestion with the best of intentions.

PS: If you’re waiting for the C/C++ standards committee to sort out this platform tutti-frutti, get a very comfortable chair first! Those guys can easily spend years debating whether “nullptr” should be accepted as a new keyword!


#10

[quote]keeping 32-bit types (“int”) as your base integer on both platforms is a very bad idea (and 128-bit processors are around the corner too…).
But hey, it’s your library! I only gave you a suggestion with the best of intentions.[/quote]

I think you’re hugely over-estimating the problem here! Perhaps you’ve had a bad experience with this, but AFAICT it’s a non-issue!

Like I said, in places where size matters, I specify the size. In the vast majority of code, you just need a type that’s at least 32 bits (but could be bigger without causing any trouble), and there I prefer something like “int” because it’s simple and readable.

Your examples of problems are all about mixing types with different signedness or size, and in those situations you need to be super-careful anyway, and should always use a high warning level to catch mistakes. But using typedefs wouldn’t make that any easier (actually it’d just complicate things IMHO).