Hash table

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

stegemma
Posts: 859
Joined: Mon Aug 10, 2009 10:05 pm
Location: Italy
Full name: Stefano Gemma

Re: Hash table

Post by stegemma »

If "exact" is bit 1 of "type", then you can avoid some conditional jumps this way:

Code: Select all

   a1 += ~(h1->type - exact) & 1; 
   a2 += ~(h2->type - exact) & 1; 
   a3 += ~(h3->type - exact) & 1; 
   a4 += ~(h4->type - exact) & 1;
This works only if "exact" is the sole odd type value and all the other types are even.
Author of Drago, Raffaela, Freccia, Satana, Sabrina.
http://www.linformatica.com
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

cdani wrote:Of the examples shown in this thread, I tried hgm's approach but it was maybe 2 Elo worse.
But how did you test that? From the graph you can see that equi-distributed draft did 25% better than equidistributed only at an overload factor of 100. How large was the overload factor in your Elo measurement?
stegemma
Posts: 859
Joined: Mon Aug 10, 2009 10:05 pm
Location: Italy
Full name: Stefano Gemma

Re: Hash table

Post by stegemma »

Sorry, if exact is bit 0, it must be:

Code: Select all

a1 += (h1->type & 1);
...
Author of Drago, Raffaela, Freccia, Satana, Sabrina.
http://www.linformatica.com
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

hgm wrote:
cdani wrote:Of the examples shown in this thread, I tried hgm's approach but it was maybe 2 Elo worse.
But how did you test that? From the graph you can see that equi-distributed draft did 25% better than equidistributed only at an overload factor of 100. How large was the overload factor in your Elo measurement?
I copied your code exactly. The other things I did were to initialize the little hashCount array to 0 and to handle the quiescence/static-eval entries. Maybe I did not understand something? Thanks
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

stegemma wrote:Sorry, if exact is bit 0, it must be:

Code: Select all

a1 += (h1->type & 1);
...
Yes thanks. I have yet to optimize the whole function.
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

cdani wrote:Maybe I did not understand something? Thanks
Well, you did not understand my question.

I asked how large your hash table was (number of entries), and how large your typical search tree (time per move and nps).
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

hgm wrote:
cdani wrote:Maybe I did not understand something? Thanks
Well, you did not understand my question.

I asked how large your hash table was (number of entries), and how large your typical search tree (time per move and nps).
Ah!

128 MB / 16 bytes = 8,388,608 entries
The games were at 50 seconds + 0.05 increment.
The typical nps is 850,000 in the opening phase and 1,150,000 in the endgame.
The typical time usage is at most 2 seconds for a few opening moves, descending rapidly to less than one second, and 0.20 seconds in the deep endgame.
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

Well, if I do the math it seems that your typical search tree measures at most about 1.7M nodes, and later in the game only 0.23M nodes. (I don't know if you count hash cutoffs in this). The hash table is 4 to 40 times larger.

So basically you are trying to measure the differences between replacement schemes under conditions where virtually no replacement takes place...
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Hash table

Post by cdani »

hgm wrote:Well, if I do the math it seems that your typical search tree measures at most about 1.7M nodes, and later in the game only 0.23M nodes. (I don't know if you count hash cutoffs in this). The hash table is 4 to 40 times larger.

So basically you are trying to measure the differences between replacement schemes under conditions where virtually no replacement takes place...
Mmmm... The worst attempts were clearly outside the error margins, so the changes did have a big influence. Anyway, I will retry the two or three best ideas with maybe 16 or 32 MB. Thanks!
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash table

Post by hgm »

Well, 1MB would be more suitable. The calculation above was for the total number of nodes. The number of different nodes in the tree (and thus the required number of hash entries) might be far smaller, perhaps by as much as a factor of 10. For one thing, each new iteration usually revisits the nodes of the previous iteration.