A few more tiny comments!
Perhaps 5 years is an exaggeration (though it took me a lot longer than that), but now you have a ton of good, authoritative references - I keep mentioning Scott Meyers, but his stuff is really readable and fun.
But I essentially agree. You could be writing really nice code in six months with Python, for example (if you could already program). Java, well, that takes quite a lot more time to really master… (I assume Ruby is more similar to Python.)
Well, I don’t know if you’re right about that… but I can’t really comment!
Regarding platform-dependence - this is an issue for me since I’m trying to develop for the Mac and PC. I couldn’t find any web references for this…
I went through the SGI STL source (which I have floating around) and saw nothing platform-dependent in the strings or, really, much of anywhere except for threads.
There is no “standard” STL implementation as far as I know. If there were any, it’d be the SGI one, but I just checked my Xcode path and it appears to be a GCC implementation (which was too complex for me to really see whether it was platform-dependent or not).
    A* getUnchecked(unsigned a)
    {
        if (a > (unsigned) size)
            return 0; // Even if -1 is passed to the function, it gets converted to a
                      // value (0xFFFFFFFF) > INT_MAX, which will be > size if size is signed
        ...
    }
First, that code isn’t actually right - it’ll fail if the size of your array is greater than INT_MAX. (You’re claiming that your size is signed, but why? If you’re using unsigned, surely “sizes” are one good place for them?) In the case of 32-bit integers, this is unlikely, but people do sometimes make use of massive byte arrays, for example. However, if you believe this pattern is correct, you’d be tempted to use it for shorts… or chars…
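To make the wraparound concrete, here’s a tiny sketch of the trick being discussed (the function name is mine, not from the original code):

```cpp
// Sketch of the unsigned trick: a negative index wraps around to a huge
// unsigned value, so one unsigned comparison catches both "too big" and
// "negative" -- but only as long as size never exceeds INT_MAX.
bool indexInRange(int index, int size)
{
    return (unsigned) index < (unsigned) size;  // -1 wraps to UINT_MAX
}
```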
Using unsigned like this is a trick. It saves a tiny amount of processor time, at the expense of readability.
But I have to say that I have no idea why you’d want to write this method at all. Returning NULL is just about the worst possible thing you can do, because it causes a problem at some indeterminate time later. And putting all this range checking in each array access is not only overhead, it prevents the compiler from optimizing in some cases.
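To illustrate the deferred-failure problem (toy names of my own, not the original API):

```cpp
struct Node { int value; };

// Returning a null pointer for a bad index hides the programmer error here...
Node* getUncheckedSketch(Node** items, int size, unsigned a)
{
    if (a >= (unsigned) size)
        return nullptr;         // the mistake is silently swallowed
    return items[a];
}
// ...and the crash happens later, wherever the null pointer is finally
// dereferenced -- far away from the call that actually went wrong.
```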
To my mind there are two possible classes of errors - data errors and programming errors. You as the programmer should assume your data is always bad, and recover undetectably - or at least never get into a bad state. But trying to access an array outside its bounds is a programmer error, and at that point the state of your program could be anything at all…
So I’d do pretty much what Jules does in JUCE - which is to have a debug-build-only check that throws an exception on an out-of-bounds array access, and perhaps does nothing at all in optimized mode. Fail fast, fail early. If your code is exception-safe, it can catch that exception at a much higher level and either retry the operation or report the bug to you.
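A minimal sketch of that pattern - this is my own shape for it using a plain assert, not JUCE’s actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Debug-only bounds check: assert() compiles away when NDEBUG is defined,
// so release builds pay nothing, while debug builds fail fast right at
// the bad access instead of some indeterminate time later.
template <typename T>
class CheckedArray
{
public:
    explicit CheckedArray(std::size_t n) : data_(n) {}

    T& operator[](std::size_t i)
    {
        assert(i < data_.size() && "array index out of range");
        return data_[i];
    }

    std::size_t size() const { return data_.size(); }

private:
    std::vector<T> data_;
};
```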
The other thing that’s key is whether pointers are nullable or not. The idea that a NULL pointer could mean either a deliberate “pointer left empty” or a mistaken “oops, I goofed with my array arithmetic somewhere” is generally a bad one, and it encourages writing a lot of spurious error-handling code that is simply never executed.
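One way to keep those two meanings apart is to put nullability in the type itself - references for “never null”, pointers only where “empty” is legitimate. A sketch with hypothetical names:

```cpp
#include <string>

struct Widget { std::string name; };

// Reference parameter: "never null" is part of the signature, so no
// defensive null check is needed inside.
std::string describe(const Widget& w)
{
    return "widget: " + w.name;
}

// Pointer return: nullptr here means exactly one thing -- "not found" --
// never "someone goofed with their array arithmetic".
Widget* findByName(Widget* begin, Widget* end, const std::string& name)
{
    for (Widget* p = begin; p != end; ++p)
        if (p->name == name)
            return p;
    return nullptr;
}
```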
Frankly, I rarely if ever check input parameters to functions and methods - though I check each and every return code, of course. Instead, I have strict preconditions in the documentation for my functions, lots of unit tests, and in the actual code I just charge ahead as if everything is correct. There are programmer errors, of course, and you get an unexpected state at some point in the program - but that’s always true, and I’ve spent the time I would have spent checking variable values (BORING) on writing unit tests or proving (tiny) code segments correct.
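In miniature, that approach looks something like this (a hypothetical example of mine, not code from the discussion):

```cpp
// Precondition (documented, NOT checked at runtime): denom != 0.
// The function charges ahead; the contract lives in the documentation
// and is exercised by unit tests rather than by in-function checks.
int divideExact(int num, int denom)
{
    return num / denom;
}
```

The unit tests then pin down the documented contract, which is where the checking effort goes instead.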