OK, here is the idea. Programs get scores both from board evaluations and from searches stored in the hash table (often the hash table holds just bounds). I noticed that the score bound from the hash can often be used to correct the score estimate from the evaluation function. For example, if the evaluation returns say -10, but the hash table entry for this position is an upper bound of say -20, then we can correct the current evaluation to -20, closer to the "truth".
How could this be useful? Well, in the qsearch we assume (unless in check) that the side to move can stand pat and accept the current score. If this score is too high from the evaluation, then the search misses the fact that if it were forced to move, it would lose something. This should make the qsearch more immune to zugzwang situations. Also, programs often use the current evaluation in pruning decisions. Correcting the eval using the hash values should make these decisions more accurate, since they come from an actual search.
So, what do you think? Worth testing?
Mark
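The correction described above can be sketched as follows. This is a minimal sketch; the function name and the bound constants are illustrative, not taken from any particular engine:

```c
/* Bound types as typically stored with a transposition-table score. */
enum Bound { BOUND_NONE, BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

/* Pull the static evaluation toward the stored search score whenever
 * the stored bound proves the true score lies beyond the evaluation. */
int corrected_eval(int staticEval, int ttScore, enum Bound ttBound)
{
    if (ttBound == BOUND_EXACT)
        return ttScore;                 /* a searched exact score beats the eval */
    if (ttBound == BOUND_UPPER && ttScore < staticEval)
        return ttScore;                 /* true score is at most ttScore */
    if (ttBound == BOUND_LOWER && ttScore > staticEval)
        return ttScore;                 /* true score is at least ttScore */
    return staticEval;                  /* the bound does not contradict the eval */
}
```

With the numbers from the post: an eval of -10 and a stored upper bound of -20 gives corrected_eval(-10, -20, BOUND_UPPER) == -20.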
Correcting Evaluation with the hash table
- Posts: 803
- Joined: Mon Jul 17, 2006 5:53 am
- Full name: Edsel Apostol
Re: Correcting Evaluation with the hash table
I'm currently doing this in the latest public TL. It seems to help, but not by much. I have not tested this idea thoroughly, so it would be great to have another opinion from other programmers. I also think that it might make the pruning more conservative. Please do let me know the results if you manage to test it yourself.

mjlef wrote: So, what do you think? Worth testing?
Edsel Apostol
https://github.com/ed-apostol/InvictusChess
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Correcting Evaluation with the hash table
I'm doing this in Critter. But I'm not quite happy with it. It helps a little bit, but it causes some pretty bad instabilities in search.
A typical example: A move fails high on zero window search, but the widened window re-search does not see why it failed high, because the offending hash entry was in the meantime overwritten.
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Correcting Evaluation with the hash table
Maybe this is not a problem for other engines, but Critter uses fail-soft with aspiration windows. It is easy to overshoot the AW, and the search with widened bounds can take forever to finish, only to discover that the exact score was well inside the original AW.
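For readers unfamiliar with the mechanism being described, a fail-soft aspiration loop looks roughly like this. It is only a sketch: the stand-in search, the names, and the widening policy are all illustrative, not Critter's actual code:

```c
/* Toy stand-in for a fail-soft search: it simply returns the "true"
 * score which, being fail-soft, may lie outside the [alpha, beta] window. */
static int toy_search(int alpha, int beta, int depth)
{
    (void)alpha; (void)beta; (void)depth;
    return 73;
}

/* Aspiration loop: start with a narrow window around the previous score
 * and widen on every fail until the score lands inside the window. */
int aspiration_root(int prevScore, int depth)
{
    int delta = 16;                       /* initial half-width (illustrative) */
    int alpha = prevScore - delta;
    int beta  = prevScore + delta;
    for (;;) {
        int score = toy_search(alpha, beta, depth);
        if (score <= alpha)
            alpha -= delta;               /* fail low: widen downward */
        else if (score >= beta)
            beta += delta;                /* fail high: widen upward */
        else
            return score;                 /* exact score inside the window */
        delta *= 2;                       /* widen faster on repeated fails */
    }
}
```

The instability complained about here is that a fail-soft score overshooting the window triggers one or more widened re-searches, and if the hash entry that caused the original fail high has been overwritten in the meantime, those expensive re-searches only land back inside the first window.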
- Posts: 1154
- Joined: Fri Jun 23, 2006 5:18 am
Re: Correcting Evaluation with the hash table
This kind of thing can happen with lazy eval too. All part of the fun.

rvida wrote: I'm doing this in Critter. But I'm not quite happy with it. It helps a little bit, but it causes some pretty bad instabilities in search. A typical example: A move fails high on zero window search, but the widened window re-search does not see why it failed high, because the offending hash entry was in the meantime overwritten.
-Sam
- Posts: 2684
- Joined: Sat Jun 14, 2008 9:17 pm
Re: Correcting Evaluation with the hash table
We are currently experimenting with this. Sometimes it works, sometimes it doesn't. With futility it seems it does not work, but we are still investigating.

mjlef wrote: So, what do you think? Worth testing?
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Correcting Evaluation with the hash table
If I have information from the hashtable it usually gives a cutoff in qsearch.

mjlef wrote: So, what do you think? Worth testing?
Mind sharing rough pseudo code with us if you're not describing the usual hashtable cutoff?
If we write things down in vague conceptual words, of course they can get interpreted in a wide manner. In Amsterdam we have a lot of cafés where there are people who are very professional at saying things in a vague manner, meanwhile smoking very BIG cigarettes.
With pseudo code that is tougher.
Thanks,
Vincent
- Posts: 344
- Joined: Wed Sep 23, 2009 5:56 pm
- Location: Germany
Re: Correcting Evaluation with the hash table
diep wrote: Mind sharing rough pseudo code with us if you're not describing the usual hashtable cutoff?

If I have understood this correctly:
Code:
int CorrectEvalScore(int eval, int ttValue, int ttBound)
{
    // A lower bound means the true score is at least ttValue.
    if (ttBound == LOWERBOUND)
    {
        return max(eval, ttValue);
    }
    // An upper bound means the true score is at most ttValue.
    if (ttBound == UPPERBOUND)
    {
        return min(eval, ttValue);
    }
    return eval;
}
Code:
// handle TT cut-offs
eval = CorrectEvalScore(Evaluate(), ttValue, ttBound);
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Correcting Evaluation with the hash table
diep wrote: If I have information from the hashtable it usually gives a cutoff in qsearch. Mind sharing rough pseudo code with us if you're not describing the usual hashtable cutoff?

He is talking about something like this:
Code:
if (hash_entry->bound == EXACT)
    stand_pat = hash_entry->score;
else {
    stand_pat = position->eval();
    if (hash_entry->bound == LOWER_BOUND)
        stand_pat = max(stand_pat, hash_entry->score);
    else if (hash_entry->bound == UPPER_BOUND)
        stand_pat = min(stand_pat, hash_entry->score);
}

if (stand_pat >= beta)
    return stand_pat;
- Posts: 27796
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Correcting Evaluation with the hash table
diep wrote: If I have information from the hashtable it usually gives a cutoff in qsearch.

I guess the proposed scheme would only have an effect in the QS at the end of the PV branch. There the hash bound would not produce a cutoff, as the window is still open, but the current eval could contradict the score bound from a deeper search.

Rather than using the bound in that case, I would be inclined to force an extension. If it happens in so few leaf nodes, this can hardly be costly.
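The contradiction test being proposed here could be written as follows. This is a sketch under the assumption of the usual bound semantics; the names are illustrative:

```c
#include <stdbool.h>

/* Bound types as typically stored with a transposition-table score. */
enum Bound { BOUND_NONE, BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

/* True when a deeper search has already proven the static eval to be on
 * the wrong side of the truth: the eval claims more than a stored upper
 * bound allows, or less than a stored lower bound guarantees. */
bool eval_contradicts_tt(int staticEval, int ttScore, enum Bound ttBound)
{
    if (ttBound == BOUND_UPPER)
        return staticEval > ttScore;
    if (ttBound == BOUND_LOWER)
        return staticEval < ttScore;
    return false;
}
```

In QS at a PV leaf one would then, instead of clamping the eval with the bound, refuse the stand-pat and search on (an extension) whenever this returns true.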