No, you're right - and I actually changed that code a couple of hours ago :)
But if your code is so sensitive that it would fail if given exactly 1.0, then you should be doing your own sanity-checking of the return value anyway. Even now that it uses a divisor of 0x100000000, I wouldn't be totally sure that some kind of compiler/floating-point mode situation wouldn't cause the return value to be rounded up to 1.0.
I hope that kind of situation can't occur. A lot of code seems to rely on that trick; that's the case, for instance, in the original Mersenne Twister implementation ( http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/CODES/mt19937ar.c ).
But to be honest, I'm not totally sure either. In my code I use the (1.0 / 4294967296.0) version, as I guess that Makoto Matsumoto and Takuji Nishimura are more qualified than me ;-)
Well, do bear in mind that floating-point equality is actually a very murky subject, and CPU modes and compiler optimisations can make a big difference to the values.
TBH if you've got some downstream code that definitely can't cope with 1.0, then I'd say you need to either make that code more robust, or give yourself a safety margin, e.g. Random::nextDouble() * 0.999999.