Starting with Hash Tables.

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

Sven Schüle wrote: ...
You simply keep all TT entries unless they are overwritten. If your TT has just one entry per index then you have no choice, you always overwrite. With more than one entry per index (typical is four) you define a replacement strategy for the case that all entries for the given index are in use (otherwise you use one of the free entries for that index). Only at that point, i.e. for the replacement strategy, the "age" becomes relevant.
....
OK. Thanks Sven!
I planned to have just one entry per index as a first step, not four.
Could that be much worse?
Sven Schüle wrote: As I already told you in the past, I still suggest that you drop your "random" implementation on the move ordering level. You gain
1) reproducible behaviour of your search (important for debugging) and
2) better performance (since a random component in move ordering will often have a negative impact on tree size).
Avoiding to always repeat the same opening moves is only important if you play many games against the same opponent. Different opponents will usually play different opening moves at some point in time. Playing many games against the same opponent is the typical scenario when you test your own engine by playing many games with short time control. There you should use a more or less large set of starting positions, otherwise you would get misleading results. So even here you don't really need a random component in your search.
The only problem could be a slight loss of speed, since some extra computation is needed to translate from random values back to standard values.
1) I could turn the random off for debugging.
2) The random is not in the move ordering itself, only among moves at the same move ordering level.

Using different predetermined starting positions feels like a bit of a lie at my engine's current level. It doesn't know anything about openings yet.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with Hash Tables.

Post by Sven »

Luis Babboni wrote:
Sven Schüle wrote:...
You simply keep all TT entries unless they are overwritten. If your TT has just one entry per index then you have no choice, you always overwrite. With more than one entry per index (typical is four) you define a replacement strategy for the case that all entries for the given index are in use (otherwise you use one of the free entries for that index). Only at that point, i.e. for the replacement strategy, the "age" becomes relevant.
....
I planned to have just one entry per index as a first step, not four.
Could that be much worse?
Sufficient in the beginning.
Luis Babboni wrote:
Sven Schüle wrote:As I already told you in the past, I still suggest that you drop your "random" implementation on the move ordering level. You gain
1) reproducible behaviour of your search (important for debugging) and
2) better performance (since a random component in move ordering will often have a negative impact on tree size).
Avoiding to always repeat the same opening moves is only important if you play many games against the same opponent. Different opponents will usually play different opening moves at some point in time. Playing many games against the same opponent is the typical scenario when you test your own engine by playing many games with short time control. There you should use a more or less large set of starting positions, otherwise you would get misleading results. So even here you don't really need a random component in your search.
The only problem could be a slight loss of speed, since some extra computation is needed to translate from random values back to standard values.
I don't understand this point, can you explain it further, please?
Luis Babboni wrote:1) I could turn the random off for debugging.
You could. But then you would still leave it on while running tests, producing unstable results. Then you won't get any meaningful statement telling you whether a change that you made has actually worked or not.
Luis Babboni wrote:2) The random is not in the move ordering itself, only among moves at the same move ordering level.
That *is* part of move ordering, or does it not influence the order in which moves are tried?
Luis Babboni wrote:Using different predetermined starting positions feels like a bit of a lie at my engine's current level. It doesn't know anything about openings yet.
The point is not opening knowledge but testing without duplicate games. The positions could just as well be middlegame or endgame positions, but then your tests would be biased, since the game results would not show any weaknesses of the engine in earlier game phases.
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

Sven Schüle wrote:
Luis Babboni wrote: The only problem could be a slight loss of speed, since some extra computation is needed to translate from random values back to standard values.
I don't understand this point, can you explain it further, please?
I have many lines in my code saying things like (pseudocode):

Code:

if InverseRandom(board(A2)) = pawn then...
instead of simply:

Code:

if board(A2) = pawn then...
because board(A2) does not hold a fixed pawn ID number, but a random ID.
Sven Schüle wrote:
Luis Babboni wrote:1) I could turn the random off for debugging.
You could. But then you would still leave it on while running tests, producing unstable results. Then you won't get any meaningful statement telling you whether a change that you made has actually worked or not.
I usually make and test changes with random off. Only when it seems to work do I change a line like random=0 to random=1, run some more tests just in case, and publish the version with random=1.
Sven Schüle wrote:
Luis Babboni wrote:2) The random is not in the move ordering itself, only among moves at the same move ordering level.
That *is* part of move ordering, or does it not influence the order in which moves are tried?
I have 20 priority levels for move ordering.
For example, the first level is for the move: pawn captures queen and promotes to a queen.
But if I have pawns on A7 and C7, it is my turn, and the opponent has a queen on B8, then which pawn captures the queen depends on which of the two pawns has the lower ID number, and that ID number is assigned randomly. Sometimes it will be the A7 pawn and sometimes the C7 pawn.
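The tie-break described above can be sketched in C (a sketch only, not Soberango's actual code; the `Move` fields and `cmp_moves` comparator are illustrative assumptions):

```c
#include <stdlib.h>

/* Hypothetical move record: 'priority' is one of the 20 ordering levels
 * (lower = tried first), 'piece_id' is the moving piece's internal ID. */
typedef struct { int priority; int piece_id; const char *name; } Move;

/* Ties within one priority level are broken by the piece ID, so if IDs
 * are assigned randomly at startup, either of two equal-priority moves
 * (e.g. a7xb8=Q vs c7xb8=Q) may come out first; with fixed IDs the
 * order is reproducible. */
static int cmp_moves(const void *a, const void *b) {
    const Move *x = (const Move *)a, *y = (const Move *)b;
    if (x->priority != y->priority) return x->priority - y->priority;
    return x->piece_id - y->piece_id;   /* tie-break on piece ID */
}
```

Sorting the move list with `qsort(moves, n, sizeof(Move), cmp_moves)` then makes the random IDs decide the order among equal-priority moves, which is exactly where the non-determinism enters.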
Sven Schüle wrote:
Luis Babboni wrote:Using different predetermined starting positions feels like a bit of a lie at my engine's current level. It doesn't know anything about openings yet.
The point is not opening knowledge but testing without duplicate games. It would as well be middlegame or endgame positions but then your tests would be biased since the game results would not show any weaknesses of the engine in earlier game phases.
About duplicate games: as I said, they do not occur, because of the random in Soberango, but I admit this may be an obsessive habit on my part.
I like to enjoy the road of development, feeling how my engine plays by itself without help. I only added some code lines so it could be accepted for the CCRL 40/4 tests, allowing it to make the first moves as the tester dictates, but it feels like a lie, because there is nothing in my engine's code that would make it play those intelligent moves without help at its current development state. I think that what determines whether one opening is better than another for an engine depends on that engine's evaluation function, and at the moment my engine's evaluation function knows nothing about position, which is the main core of openings, since there are rarely captures in the first moves.

Well... everything I said in defence of random was, I think, quite valid... up to the last version... but, but... adding the TT... I think this is the first time random could make the engine weaker!
:?
I think that without random, when the engine searches, the probability of finding a position in the tree that is already in the TT is higher than if the search visits positions in different orders because of the random. So the TT will be more useful without random than with it!!! 8-)

I love the way Soberango plays differently in the same situation because of the random... but now I'm thinking that after adding the TT, maybe it's time to let that go. :cry:
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

If I'm understanding things right, in the way I plan to do my TT (4 bytes for the position key and 256MB for the TT, which is the rule for the CCRL 40/4 list), I have the choice between these two possibilities:

a) Have 25,165,824 entries with 1 "bucket" for each one.
b) Have 8,388,608 entries with 3 "buckets" for each one

The question is:

Is it known which possibility is better?
With which one will I need to delete fewer entries because there is no place to store a new one?

Thanks.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with Hash Tables.

Post by Sven »

Luis Babboni wrote:If I'm understanding things right, in the way I plan to do my TT (4 bytes for the position key and 256MB for the TT, which is the rule for the CCRL 40/4 list), I have the choice between these two possibilities:

a) Have 25,165,824 entries with 1 "bucket" for each one.
b) Have 8,388,608 entries with 3 "buckets" for each one

The question is:

Is it known which possibility is better?
With which one will I need to delete fewer entries because there is no place to store a new one?

Thanks.
Some remarks.

1) I am not sure whether it is common wording for everyone but for me a "bucket", in the context of a hash table in a chess engine, is the unit of information that is found by using the hash key (a part of it), and it can contain one or more hash table entries, depending on the overall hashing strategy. So I would swap the two words "bucket" and "entry" in your description above. "Bucket" is sometimes also called "cluster".

2) The total number of addressable buckets (according to #1) should be a power of two, since you want to use a certain number of bits from the hash key for your hash index. With 8,388,608 buckets that would be 23 index bits, the other number is not a power of two.

3) If I assume that you plan a hash table entry with a size of 10 bytes and you want to implement a strategy that uses more than one entry per index then you could either think about a slightly faster solution (considering cache issues already), or ignore that for a while. Ignoring would lead to defining "1 bucket = 3 entries = 3*10 bytes" while optimizing would mean "1 bucket = 3 entries + 2 padding bytes = 3*10+2 bytes" (so that two buckets fit exactly into one memory cache line).

4) Apart from these basic considerations, the answer to your question would clearly be "b". With "a" you will always store new data in the entry that is given by the index bits. With "b" you have three entries for one index. Either one or more of them are still unused, or otherwise you can decide which of these entries has the lowest benefit for you and can be overwritten (btw. not "deleted", just overwritten).
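Points 3 and 4 can be sketched in C (a sketch under assumptions: the 10-byte field layout and the "replace the shallowest entry" rule are illustrative, not a prescribed format):

```c
#include <stdint.h>

#pragma pack(push, 1)
/* Hypothetical 10-byte TT entry: a 32-bit key part for verification,
 * plus score, draft depth, bound type and best move. */
typedef struct {
    uint32_t key;      /* stored part of the hash key */
    int16_t  score;
    uint8_t  depth;
    uint8_t  flag;     /* exact / lower bound / upper bound */
    uint16_t move;
} TTEntry;             /* 4+2+1+1+2 = 10 bytes */

/* "1 bucket = 3 entries + 2 padding bytes = 32 bytes", so two buckets
 * fit exactly into one 64-byte memory cache line. */
typedef struct {
    TTEntry entry[3];
    uint8_t pad[2];
} TTBucket;
#pragma pack(pop)

/* One simple replacement choice when all three entries are in use:
 * overwrite the entry stored with the smallest depth. */
static int pick_victim(const TTBucket *b) {
    int v = 0;
    for (int i = 1; i < 3; i++)
        if (b->entry[i].depth < b->entry[v].depth) v = i;
    return v;
}
```

Dropping the `pad` array gives the simpler 30-byte bucket; the padded version only matters once cache alignment becomes a concern.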
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

Sven Schüle wrote:
Luis Babboni wrote:If I'm understanding things right, in the way I plan to do my TT (4 bytes for the position key and 256MB for the TT, which is the rule for the CCRL 40/4 list), I have the choice between these two possibilities:

a) Have 25,165,824 entries with 1 "bucket" for each one.
b) Have 8,388,608 entries with 3 "buckets" for each one

The question is:

Is it known which possibility is better?
With which one will I need to delete fewer entries because there is no place to store a new one?

Thanks.
Some remarks.

1) I am not sure whether it is common wording for everyone but for me a "bucket", in the context of a hash table in a chess engine, is the unit of information that is found by using the hash key (a part of it), and it can contain one or more hash table entries, depending on the overall hashing strategy. So I would swap the two words "bucket" and "entry" in your description above. "Bucket" is sometimes also called "cluster".

2) The total number of addressable buckets (according to #1) should be a power of two, since you want to use a certain number of bits from the hash key for your hash index. With 8,388,608 buckets that would be 23 index bits, the other number is not a power of two.

3) If I assume that you plan a hash table entry with a size of 10 bytes and you want to implement a strategy that uses more than one entry per index then you could either think about a slightly faster solution (considering cache issues already), or ignore that for a while. Ignoring would lead to defining "1 bucket = 3 entries = 3*10 bytes" while optimizing would mean "1 bucket = 3 entries + 2 padding bytes = 3*10+2 bytes" (so that two buckets fit exactly into one memory cache line).

4) Apart from these basic considerations, the answer to your question would clearly be "b". With "a" you will always store new data in the entry that is given by the index bits. With "b" you have three entries for one index. Either one or more of them are still unused, or otherwise you can decide which of these entries has the lowest benefit for you and can be overwritten (btw. not "deleted", just overwritten).
Thanks Sven!
In fact I put "buckets" in quotes because I was not sure about the word; I took it from here: http://mediocrechess.blogspot.com.ar/20 ... ables.html (a little below mid-page, where it talks about collisions).

I know nothing about how caches work. :oops:
I simply plan to use 10 bytes per TT entry and 3 entries ("buckets"/"clusters") per index, so I did this calculation: 10 bytes/entry x 3 entries/index x 8,388,608 indexes = 251,658,240 bytes = 240MB, which leaves me an extra 16MB for all other engine memory within the 256MB CCRL rules.
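The sizing arithmetic can be written as a small checkable function (a sketch; the function name and parameters are illustrative):

```c
#include <stdint.h>

/* Total bytes used by the table: entry size x entries per index x
 * number of indexes. */
static uint64_t tt_bytes(uint64_t entry_bytes, uint64_t entries_per_index,
                         uint64_t num_indexes) {
    return entry_bytes * entries_per_index * num_indexes;
}
/* tt_bytes(10, 3, 8388608) = 251,658,240 bytes = 240 * 1024 * 1024,
 * i.e. 240MB, leaving 16MB of the 256MB CCRL budget for the rest. */
```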
I did not know why the number of "buckets"/"clusters" must be a power of 2. For the index I plan to use the last 4 bytes of the number I use to identify a position.
Is the idea of using a total of 10 bytes per entry too bad, since it is not a power of 2? I read something about that "padding" you mentioned, which I heard about for the first time in my life here: https://en.wikipedia.org/wiki/Data_structure_alignment
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with Hash Tables.

Post by Sven »

Luis Babboni wrote:
Sven Schüle wrote:
Luis Babboni wrote:If I'm understanding things right, in the way I plan to do my TT (4 bytes for the position key and 256MB for the TT, which is the rule for the CCRL 40/4 list), I have the choice between these two possibilities:

a) Have 25,165,824 entries with 1 "bucket" for each one.
b) Have 8,388,608 entries with 3 "buckets" for each one

The question is:

Is it known which possibility is better?
With which one will I need to delete fewer entries because there is no place to store a new one?

Thanks.
Some remarks.

1) I am not sure whether it is common wording for everyone but for me a "bucket", in the context of a hash table in a chess engine, is the unit of information that is found by using the hash key (a part of it), and it can contain one or more hash table entries, depending on the overall hashing strategy. So I would swap the two words "bucket" and "entry" in your description above. "Bucket" is sometimes also called "cluster".

2) The total number of addressable buckets (according to #1) should be a power of two, since you want to use a certain number of bits from the hash key for your hash index. With 8,388,608 buckets that would be 23 index bits, the other number is not a power of two.

3) If I assume that you plan a hash table entry with a size of 10 bytes and you want to implement a strategy that uses more than one entry per index then you could either think about a slightly faster solution (considering cache issues already), or ignore that for a while. Ignoring would lead to defining "1 bucket = 3 entries = 3*10 bytes" while optimizing would mean "1 bucket = 3 entries + 2 padding bytes = 3*10+2 bytes" (so that two buckets fit exactly into one memory cache line).

4) Apart from these basic considerations, the answer to your question would clearly be "b". With "a" you will always store new data in the entry that is given by the index bits. With "b" you have three entries for one index. Either one or more of them are still unused, or otherwise you can decide which of these entries has the lowest benefit for you and can be overwritten (btw. not "deleted", just overwritten).
Thanks Sven!
In fact I put "buckets" in quotes because I was not sure about the word; I took it from here: http://mediocrechess.blogspot.com.ar/20 ... ables.html (a little below mid-page, where it talks about collisions).
[...]
I simply plan to use 10 bytes per TT entry and 3 entries ("buckets"/"clusters") per index, so I did this calculation: 10 bytes/entry x 3 entries/index x 8,388,608 indexes = 251,658,240 bytes = 240MB, which leaves me an extra 16MB for all other engine memory within the 256MB CCRL rules.
I did not know why the number of "buckets"/"clusters" must be a power of 2. For the index I plan to use the last 4 bytes of the number I use to identify a position.
The wording may differ. What I call "buckets" is called differently by others, e.g. "cluster", it could also be called "group". Common language is that a transposition table (TT) consists of TT entries (or TT elements) which may or may not be grouped; in your case the plan is to have three entries per group. Let's use "bucket" for a group from now on. You access buckets through their table index which you obtain from the TT hash key. Since the complete TT hash key is too large to serve as the table index itself (typically the hash key has a size of 64 bit), and even four of those eight bytes are too large for that purpose, you need to define a mapping "TT hash key => table index". The typical, and most simple, mapping is to take the lowest N bits of the hash key as the index. And that is already the reason why your table should have a size of 2^N "buckets". It would also be possible to use an "arbitrary" size but then the mapping would be slightly more difficult (essentially it is always "hash key" modulo "table size"). 8,388,608 buckets means N=23, so you calculate the table index from the hash key by simply taking the lower 23 bits of it, and there you are. If you want to reserve the double size for your hash table then you increase N by one, and so on.
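The "lowest N bits" mapping described above is a one-liner in C (a minimal sketch; the macro and function names are assumptions):

```c
#include <stdint.h>

#define INDEX_BITS  23
#define NUM_BUCKETS (1ULL << INDEX_BITS)   /* 8,388,608 buckets */

/* Map a 64-bit hash key to a bucket index by keeping its lowest 23
 * bits. Because NUM_BUCKETS is a power of two, masking with
 * NUM_BUCKETS-1 is identical to "key % NUM_BUCKETS" but cheaper. */
static uint32_t tt_index(uint64_t hash_key) {
    return (uint32_t)(hash_key & (NUM_BUCKETS - 1));
}
```

Doubling the table size is then just `INDEX_BITS 24`, exactly as described: one more bit of the key goes into the index.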

Now you have the index for a bucket consisting of, in your case, three TT entries. You need two operations, where each one starts by extracting the index from the hash key and then doing something with those three entries found there:

- "probe" => input is the hash key, the current depth, alpha and beta, and output is either a "matching" TT entry or the information that there is none; a TT entry is "matching" if the part of the hash key that you have stored in the entry is identical to the corresponding part of the hash key given as input, and if other conditions are matching as well (e.g. stored depth >= current depth, but there is more);

- "store" => input is again the hash key, the current depth, the score, the type of score (exact/lower bound/upper bound), and maybe more; the data will always be stored, either in an entry that was unused so far, or (usually) by overwriting an entry that previously contained data for a position for which its hash key has the same lower N index bits.

"probe" will simply loop over the three entries, compare the stored key part to the corresponding hash key part (to reduce the chance that the stored information actually belongs to a different position - but you can't perfectly exclude that), stop at the first matching entry, do the other comparisons for that entry (depth etc.), and return a result. "store" will loop over the entries as well, stop at the first entry that is unused (usually determined as "stored key part == 0"), and store the data there; or, if all three entries of the bucket are already in use, decide which one shall be replaced.
Luis Babboni wrote:Is the idea of using a total of 10 bytes per entry too bad, since it is not a power of 2? I read something about that "padding" you mentioned, which I heard about for the first time in my life here: https://en.wikipedia.org/wiki/Data_structure_alignment
"Power of 2" is mainly for the number of table indices (2^N, in your case 2^23). If the number of bytes per table index is a power of 2 as well (e.g. 32) then of course the whole table size in bytes is a power of 2, but that is not a "hard requirement". It will only improve performance.

But you can forget about the "padding" for now, and simply have buckets of 3 entries = 3*10 = 30 bytes.
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: Starting with Hash Tables.

Post by Milos »

hgm wrote:Most computers that are around probably do not have more than 1GB RAM. Playing two engines against each other with 256MB each might already be too taxing.

Not everyone buys a new computer every year...
That is a bit of an exaggeration at a time when memory prices are below $5/GB.
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

Milos wrote:
hgm wrote:Most computers that are around probably do not have more than 1GB RAM. Playing two engines against each other with 256MB each might already be too taxing.

Not everyone buys a new computer every year...
That is a bit of an exaggeration at a time when memory prices are below $5/GB.
Near twice that here in Argentina! :P
1 US$=16 AR$:
http://computacion.mercadolibre.com.ar/memorias/
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with Hash Tables.

Post by Luis Babboni »

Thanks Sven, I need to do some tests before asking again about what is still not clear to me. :D