I would guess that's just how it happens to go in your test case for whatever reason... This is from a 64-bit build, right? Did you run the test multiple times? Debug or release build?
Incidentally, you probably shouldn't even be "new"-ing juce::Image objects; they are themselves already wrappers around shared, heap-allocated data internally, so why add yet another level of indirection in your code?
No, since that would be Java or C# syntax, and any decent C++ compiler would emit an error.
I wouldn't be surprised if printf() is broken somehow. Casting to a valid integer type that can represent the address (e.g. juce::pointer_sized_int) would confirm whether that's the case.
Casting to a big integer type is how I first found it.
I made a small function to check pointers... a word of warning: this is hackish ;)
In a normal Release build this is just a null check, but in Debug it does a bit more, and that's where I set breakpoints and such.
// This is by no means watertight, or even close, but it might
// catch a few percent of potential pointer bugs.
bool okPointer(void* p)
{
    if (p != nullptr)
    {
#ifdef DEBUG_BUILD
        uint64_t address = reinterpret_cast<uint64_t>(p);
        // e.g. 0x610000c00570
        if (address > 0x400000000)
        {
            DBUG(("WARNING, strangely large pointer value %p", p));
            return false;
        }
        if (address < 0x000001000)
        {
            DBUG(("WARNING, strangely small pointer value %p", p));
            return false;
        }
#endif
        return true;
    }
    return false;
}
1. Never create an Image with new! It's a by-value class; you never need to allocate it on the heap.
2. Never worry about the actual numeric values your pointers have! It doesn't matter; let the compiler and the allocator worry about that. On 64-bit systems allocators use all kinds of tricks to place blocks of memory, but it's not your problem.