Hi Bob,
reading the 22.1 version of crafty, in HashStore() you have the code:
...
htable = trans_ref + ((int) word2 & hash_mask);
draft = (unsigned int) htable->prefer.word1 >> 17;
age = htable->prefer.word1 >> 61;
age = age && (age != shared->transposition_id);
is it me, or is draft being decoded incorrectly?
It seems you are not masking the bits in draft.
Shouldn't it be:
draft = ((unsigned int) htable->prefer.word1 >> 17) & 32767;
The age looks correct since it is at the leftmost position so after the right shift it doesn't need masking.
regards,
Alvaro
crafty hash bug?
-
- Posts: 363
- Joined: Thu Mar 16, 2006 7:39 pm
- Location: Portugal
- Full name: Alvaro Cardoso
-
- Posts: 838
- Joined: Thu Jul 05, 2007 5:03 pm
- Location: British Columbia, Canada
Re: crafty hash bug?
If "unsigned int" is a 32-bit type, then draft will be a 15-bit unsigned value. (And even if "unsigned int" is wider, the same thing holds whenever prefer.word1 is a 32-bit type.)
<rant mode="on">
Of course "unsigned int" does not have to be any particular size, which is why I never use types like that anymore and prefer to use my own typedef that I define, such as "uint32" or something.
"unsigned int" may have known relationships with certain other types, but I find it tedious and error-prone to try to reason about how it will work in expressions where it's being used with one of those other types, and on some platforms they are the same size but on some platforms they are different sizes. E.g. what happens if you convert an unsigned short to a signed int, add an unsigned int to it and then convert that to an unsigned long? I know that's a contrived example, but it's exactly those kinds of things that I never want to have to think about when writing some code. Instead I want to think "I'm converting a uint16 to an int32..." where that ALWAYS means the same thing on any modern platform.
I believe this is one of those paradoxical cases where having compiler-specific ifdefs in one place (the "base types" header file) leads to more portable code... migrating to a new compiler just means tweaking that one header file to define the correct fixed-size types for this compiler, the correct FORCE_INLINE macro, etc. Said tweaking can even be sort of automated with the preprocessor and/or new C99 headers (e.g. using something like Posh.h, but that is overkill for most projects). The goal is, as much as possible, to keep that compiler- and platform-specific stuff out of the rest of the codebase.
With fixed-size types, I only have to use one "mental model" for reasoning about my code. This helps a lot when trying to convince yourself that what you just wrote is correct, or that you don't need to put in an extra "& 32767", or that a structure size and alignment will be the same on all reasonable platforms. The only time I would ever use non-fixed-size types such as "int" is as a loop counter variable that doesn't get mixed in arithmetic expressions with fixed-size types. And even then I usually don't bother.
When I write expressions, I want their meaning to be obvious and unambiguous. When I declare structures, I want to know their size so I can reason about how much memory they are using up, etc. Using non-fixed-size types is a great way to introduce subtle bugs and portability problems. I think it is much safer and clearer to define your own set of known-size types and use those exclusively.
</rant>
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: crafty hash bug?
Cardoso wrote:Hi Bob,
reading the 22.1 version of crafty, in HashStore() you have the code:
...
htable = trans_ref + ((int) word2 & hash_mask);
draft = (unsigned int) htable->prefer.word1 >> 17;
age = htable->prefer.word1 >> 61;
age = age && (age != shared->transposition_id);
is it me, or is draft being decoded incorrectly?
It seems you are not masking the bits in draft.
Shouldn't it be:
draft = ((unsigned int) htable->prefer.word1 >> 17) & 32767;
The age looks correct since it is at the leftmost position so after the right shift it doesn't need masking.
regards,
Alvaro
Remember that word1 is converted to an unsigned 32-bit value by the cast, so shifting it right 17 bits leaves only 15 bits, making the AND operation unnecessary. It is always possible that one day an "unsigned int" will be more than 32 bits; that is the danger of doing such bit twiddling directly...
-
- Posts: 363
- Joined: Thu Mar 16, 2006 7:39 pm
- Location: Portugal
- Full name: Alvaro Cardoso
Re: crafty hash bug?
Thanks Wylie, I wonder if you could post one of your definition headers, so I could take a peek at it and have a model to follow.
best regards,
Alvaro
-
- Posts: 838
- Joined: Thu Jul 05, 2007 5:03 pm
- Location: British Columbia, Canada
Re: crafty hash bug?
It's nothing fancy. I usually start with something like this:
Then if it needs to be ported to more compilers, you can probably just go to this one header file and add an #ifdef for that compiler and put the types and macros you want to use for that compiler.
There are other tricks you can do. You could make a compile time assertion macro, and somewhere in this header put an inline function which does nothing except for compile-time asserting that sizeof(U8) == 1, sizeof(U16) == 2, sizeof(U32) == 4, sizeof(U64) == 8, etc. Then you could call that inline function from main (but it will get compiled out to nothing... it will just make sure the code doesn't compile if the sizes are wrong). Then you could pick one of the #ifdef cases above and make it the default, and be confident that if someone tried to compile it on a compiler whose version of "int" was not the expected size, they would get an error while compiling, and the error would tell them which type was the wrong size. They could then go to the definition of that type and fix it for that compiler.
One way to define a compile-time assertion macro can be found here, and there are several ways to make one which will give different errors which might be more or less easy to recognize, depending on your compiler.
Code: Select all
#ifndef BASICTYPES_H_INCLUDED
#define BASICTYPES_H_INCLUDED
typedef signed char S8;
typedef unsigned char U8;
typedef signed short S16;
typedef unsigned short U16;
typedef signed int S32;
typedef unsigned int U32;
typedef float F32;
typedef double F64;
#if defined(__GNUC__)
typedef signed long long S64;
typedef unsigned long long U64;
#define FORCE_INLINE __inline__ __attribute__((always_inline))
#elif defined(__INTEL_COMPILER) || defined(_MSC_VER)
typedef signed __int64 S64;
typedef unsigned __int64 U64;
#define FORCE_INLINE __forceinline
#else
#error "Unknown compiler?"
#endif
#endif // BASICTYPES_H_INCLUDED
-
- Posts: 397
- Joined: Sun Oct 29, 2006 4:38 am
- Location: Schenectady, NY
Re: crafty hash bug?
wgarvin wrote:It's nothing fancy. I usually start with something like this: [...]
I've done something similar to that for years, but recently found out about the C99 stdint.h file, which handles fixed-size types in a portable manner.
http://www.opengroup.org/onlinepubs/000 ... int.h.html
these types are declared:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
"If an implementation provides integer types with width 64 that meet these requirements, then the following types are required: int64_t uint64_t"
You will need the macros in inttypes.h in order to print them.
http://www.opengroup.org/onlinepubs/000 ... pes.h.html
the inttypes.h file includes stdint.h
Ron
-
- Posts: 838
- Joined: Thu Jul 05, 2007 5:03 pm
- Location: British Columbia, Canada
Re: crafty hash bug?
Ron Murawski wrote:I've done something similar to that for years, but recently found out about the C99 stdint.h file, which handles fixed-size types in a portable manner.
Sure. I prefer to define them myself, but you could always check __STDC_VERSION__ to detect that it's available, and use that as the default.
http://www.opengroup.org/onlinepubs/000 ... int.h.html