Is querying the hash tables such a huge bottleneck?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

rbarreira
Posts: 900
Joined: Tue Apr 27, 2010 3:48 pm

Re: Is querying the hash tables such a huge bottleneck?

Post by rbarreira »

bob wrote:
rbarreira wrote:
bob wrote:
wgarvin wrote:
hgm wrote:I don't think you can say that the AMD64 architecture has a most natural size for integers. They support 32-bit and 64-bit on equal footing. You can indicate your preference by setting the data-length bit in the segment descriptors, no? In 64-bit mode ("long mode", so not "compatibility mode"!), the default address size is 64 bit, but the default data size is 32 bit.
Yes, the operating system makes that choice for you though. User mode code doesn't usually mess with segment descriptors. It's a pretty fundamental choice they had to make, because they cannot easily change it going forward (unless they add a new "mode" for code segments everywhere in their loader and thunking and DLL imports and dozens of similar things, and then continue to support both the old kind and the new kind of code segment, forever).

FWIW I think Microsoft made the right decision, because the transition from 16 to 32 bit was an obvious case of "16 bits isn't big enough" but I don't think the same thing is true at all of the transition from 32 to 64. Yes there are plenty of cases where having 64 bits is useful, but I venture a guess that *most* data in *most* programs is easy to represent in 32 bits. Making 64 the default for everything would seem to me to be overkill.

It's easy to forget that most workloads are not like bitboards :lol: After all, when you're just writing some code to do some minor task, how often do you actually use 64-bit types in it? I haven't done that very often, because I find that 32 bits is usually enough. I would use 64 bits for file sizes or offsets, because files bigger than 4 GB are not uncommon nowadays. I wouldn't use 64 bits for a "number of files" counter though -- when was the last time you batch-processed over four billion files at once?
I can certainly say that I blow the 4 billion unsigned counter limit all the time...

But in any case, we have always had "ints" and "longs". Does it really make sense to treat them as the same thing, when the hardware actually has support for 64 bit instructions and has 64 bit registers?
It's more a case of maintaining portability with old Windows code than a case of trying to make sense.

Regardless, long long or uint64_t do the job just fine and it's standard under C99, so it's not a big deal.
There are zillions of C compilers in use that don't include C99. That makes it a _really_ big deal.
They may not support the whole C99 standard like variable-length arrays and such, but is there any popular compiler that doesn't support those types I mentioned?

edit - it seems that MS took until VS 2010 to support stdint.h, but they do support it now.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Is querying the hash tables such a huge bottleneck?

Post by bob »

rbarreira wrote:
Regardless, long long or uint64_t do the job just fine and it's standard under C99, so it's not a big deal.
There are zillions of C compilers in use that don't include C99. That makes it a _really_ big deal.
They may not support the whole C99 standard like variable-length arrays and such, but is there any popular compiler that doesn't support those types I mentioned?

edit - it seems that MS took until VS 2010 to support stdint.h, but they do support it now.
While one can find a version of most compilers that will support the stdint.h stuff, remember that there are lots of old compilers still in use... I could make the switch and see who yells, for example. :)