I've had some trouble lately with Array's addSorted(). I inserted signed and unsigned values, and afterwards they didn't seem to be sorted correctly. Is there an issue with signed insertion / comparison using IntegerElementComparator?


Well, if you're always using IntegerElementComparator with the same element type, then I'd expect it to work ok… But if you were sometimes using it with a signed element type and sometimes with an unsigned one, then that'd clearly bugger up the sorting, because each one will treat the negative numbers differently.


I only used IntegerElementComparator.

Please try the following code:

[code]Array array;
IntegerElementComparator comparator;
Random rnd(0x4400ff0033221100);

for (int i=0; i<1000; i++)
array.addSorted(comparator, rnd.nextInt()&0xffffffff);[/code]
which generates 1000 random numbers between 0x00000000 and 0xffffffff and puts them into the array via addSorted(). After checking the array, I found that they were again not properly sorted: on my machine the array starts with 0x137dd191, rises to 0xffa4351e, then drops back to 0x000f99f5 and ends with 0x23368ae1.
I use Visual Studio 7.1.

If you change the &0xffffffff to &0x7fffffff, then it works ok, because then it is always a positive number.


Drat - looks like there’s an integer overflow when you have large ranges involved… Here’s a correction to the comparator:

[code]template <class ElementType>
class IntegerElementComparator
{
public:
    static int compareElements (const ElementType first, const ElementType second) throw()
    {
        return (first > second) - (first < second);
    }
};[/code]

Although I can't decide whether using

[code]return (first > second) - (first < second);[/code]

would actually be faster than

[code]if (first < second) return -1;
if (second < first) return 1;
return 0;[/code]

… Avoiding some jumps is good, but I guess it depends what the compiler has to do to get the bool into an integer form, which might actually involve more code than the explicit comparisons…


You cannot do return (first > second) - (first < second); because you're subtracting two booleans, which is undefined - on different systems they may have different internal representations.


I think any compiler will happily let you cast a bool to an int… (Maybe it’d need an explicit cast on some compilers, I’ve only tried it on MSVC so far). I’m just not sure whether the code it produces would be efficient or not.


I would take the 2nd method, but that's just me :)


Just as an example (I did it once with an internal representation of true = 0xffffffff and once with true = 0x1; both representations are valid since in C++ true is just non-0):



  • with internal bool true = 0xffffffff:
    0xffffffff - 0x0 = -1

  • with internal bool true = 0x1:
    0x1 - 0x0 = 1


Ah yes, I’d forgotten about that option. Ok, I’ll put back the “safe” version!