Aleks Peshkov wrote: Sven Schüle wrote: Ad 1), unfortunately there are also "bad compilers" in this sense ...
I suspect that you did not explicitly turn on all warning levels. Microsoft C++ is rather verbose about signed/unsigned mismatch.
That might be the reason, although I usually compile with a high warning level, too (though not "-pedantic" for g++ or /W4 for MSVC++). And by the way, 0xffffffffffffffffLL (or 0xffffffffffffffffi64 for MSVC++) is a correct literal; I think there is nothing wrong with it, and I do not know of any compiler that emits a warning when this value is assigned to an unsigned 64-bit variable, let alone one that generates bad code for it. MSVC++ and g++ don't, AFAIK. The same holds for the 32-bit literal 0xffffffff (you do not need a 'u' at the end) or even for a 16-bit int.
Code: Select all
#include <stdio.h>

int main()
{
    /* all-ones literals assigned to both unsigned and signed variables */
    unsigned short     us  = 0xffff;
    signed short       ss  = 0xffff;
    unsigned int       ui  = 0xffffffff;
    signed int         si  = 0xffffffff;
    unsigned long long ull = 0xffffffffffffffffLL;
    signed long long   sll = 0xffffffffffffffffLL;

    printf("us=%u ss=%d\n", us, ss);
    printf("ui=%u si=%d\n", ui, si);
    printf("ull=%llu sll=%lld\n", ull, sll);
    return 0;
}
should compile pretty much everywhere (***) without warnings (except for older MSVC, which needs the "i64" suffix, the "__int64" type and the "I64d" format specifier instead, as already stated - I just wanted to omit all those macros in the example above) and always print:
Code: Select all
us=65535 ss=-1
ui=4294967295 si=-1
ull=18446744073709551615 sll=-1
(***) Edit: Of course I know that the sizes of short/int/long long are not guaranteed to be 16/32/64 bits, so this is not a strictly portable program. The point here is a different one, however.
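Just as a side note, here is a minimal sketch of the same test written with the C99 fixed-width types, assuming a compiler that provides <stdint.h> and <inttypes.h> (so not the older MSVC mentioned above); this removes the size caveat:
Code: Select all
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint16_t u16 = 0xffff;                  /* exactly 16 bits by definition */
    int16_t  s16 = -1;                      /* -1 written directly, avoiding the
                                               implementation-defined signed conversion */
    uint32_t u32 = 0xffffffff;
    int32_t  s32 = -1;
    uint64_t u64 = 0xffffffffffffffffULL;
    int64_t  s64 = -1;

    printf("u16=%" PRIu16 " s16=%" PRId16 "\n", u16, s16);
    printf("u32=%" PRIu32 " s32=%" PRId32 "\n", u32, s32);
    printf("u64=%" PRIu64 " s64=%" PRId64 "\n", u64, s64);
    return 0;
}
The PRIu64/PRId64 macros expand to whatever format specifier the platform needs, so no %llu vs. %I64u switching is required.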
Aleks Peshkov wrote: I vote for using unsigned literals and types everywhere instead of signed.
In my chess program I do not have any signed numbers, except in code comparing position evaluation scores. Intermediate evaluation is gathered using a couple/quad of 16-bit unsigned registers.
I vote for using signed where appropriate and unsigned where appropriate. I also vote for writing code as simply as possible. Therefore I use unsigned integers for things that cannot go negative, and signed integers for things that can.
Typical "unsigned" examples in my chess programs are square IDs (e.g. 0..127 with 0x88 board, 0..119 or 0..120 with 10x12 board, 0..63 with bitboards), piece types (0=no piece, 1=pawn, ..., 6=king), colors (e.g. 0=white, 1=black, 2=empty, 3=border), anything that counts the number of something, tells the size of something, denotes the index of something within an array or other container. Also evaluation weights, square distances, time values, ...
Typical "signed" examples are evaluation scores (for positional properties or for moves - in both cases it is most natural for me that a score can be negative) or square offsets that are added to or subtracted from a square ID to get another square ID. Here "unsigned" does not make any sense for me; of course it may make sense in other programs depending on the overall design and implementation, although I think that this might make such a program more complex.
There are at least two difficult areas where these simple and natural rules tend to cause some trouble, IMO. One is code that subtracts unsigned integers for some reason; the other is code that uses (system) library functions dealing with signed integers while your own code wants the corresponding variables to be unsigned. In both cases careful decisions must be made, and I admit that I have never managed to get this 100% satisfying in my own code.
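To make the first of those areas concrete, here is a minimal sketch of the usual unsigned-subtraction trap (the variable names are just placeholders): subtracting two unsigned values the "obvious" way wraps around instead of going negative, so the difference has to be computed in a signed type or guarded by a comparison:
Code: Select all
#include <stdio.h>

int main(void)
{
    unsigned int a = 3, b = 10;

    /* Wrong: the subtraction is done in unsigned arithmetic and wraps around. */
    unsigned int wrapped = a - b;
    printf("a - b as unsigned: %u\n", wrapped);   /* 4294967289 with 32-bit unsigned int */

    /* Better: convert to a signed type before subtracting ... */
    int diff = (int)a - (int)b;
    printf("a - b as signed:   %d\n", diff);      /* -7 */

    /* ... or compare first so the intermediate result never goes negative. */
    unsigned int dist = (a > b) ? a - b : b - a;
    printf("|a - b|:           %u\n", dist);      /* 7 */

    return 0;
}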
Sven