## Searching using slow eval with tactical verification

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

matthewlai
Posts: 793
Joined: Sun Aug 03, 2014 2:48 am
Location: London, UK
Contact:

### Searching using slow eval with tactical verification

Idea for engines with slow eval -

At depth = 0, instead of calling QSearch, start a normal search with a very fast evaluation function (e.g. material-only; see *1).

If the search returns a mate score, we can use that directly. Otherwise, we cannot use the score directly, but we can go down the PV returned by that search, and find the first position in the PV where the static eval (using the fast eval function) matches the returned score. Then call the slow eval on that same position, and return that eval.
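A minimal sketch of this verification step, assuming hypothetical `tactical_search`, `fast_eval`, and `slow_eval` functions (positions are treated as opaque objects here):

```python
def verified_eval(root, tactical_search, fast_eval, slow_eval, mate_score=100000):
    """Run the fast tactical search, then take the slow eval of the first
    PV position whose fast static eval matches the returned search score."""
    score, pv = tactical_search(root)   # pv: list of positions after the root
    if abs(score) >= mate_score:        # mate scores can be used directly
        return score
    for pos in [root] + pv:
        if fast_eval(pos) == score:     # first position that "explains" the score
            return slow_eval(pos)
    return slow_eval(pv[-1] if pv else root)  # fallback: end of PV
```

In the common case with no forced material win, the match happens at the root itself, so the slow eval is simply called on the root position.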

The fast search should be very fast, both because the evaluation function is much cheaper and because it isn't trying to resolve positional questions; when the eval only returns a few discrete values, the search becomes much faster.

One difficulty is how to pass alpha-beta bounds from the normal search to the tactical search, since those values are based on the slow eval. The naive solution is to just relax the bounds to -inf and +inf, but that's inefficient. A more efficient solution is to keep track of alpha and beta as pairs (Vfast(P), Vslow(P)). When we update either alpha or beta in the main search, we can also call the fast eval on the position that generated the new score, and always propagate scores as pairs. This shouldn't result in any slowdown: material can be tracked incrementally, so Vfast is essentially free.
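The pair bookkeeping could be sketched like this (names are illustrative; `fast_eval` stands in for the incrementally maintained material count, so the extra call costs essentially nothing):

```python
from collections import namedtuple

# A bound carries both evaluations of the position that produced it,
# so the tactical search can use the fast component directly.
Bound = namedtuple("Bound", ["vfast", "vslow"])

def raise_alpha(alpha, new_vslow, position, fast_eval):
    """Update alpha as a (Vfast, Vslow) pair when a new best score is found."""
    if new_vslow > alpha.vslow:
        return Bound(fast_eval(position), new_vslow)
    return alpha
```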

*1: For this to be absolutely "correct", the fast evaluation function must satisfy Vfast(P1) >= Vfast(P2) whenever Vslow(P1) > Vslow(P2), for all P1 and P2. The extra equal sign makes all the difference here.

In other words, we cannot have positions P1 and P2 where -
Vfast(P1) < Vfast(P2), but Vslow(P1) > Vslow(P2).

Intuitively, what this means is Vfast should return different values for two positions only if it's absolutely certain that one is better than the other. If in doubt, it should return equal values.

A material-only eval is one example of Vfast that should be correct almost all the time. Adding positional knowledge can actually make it less correct.
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.

Steve Maughan
Posts: 1091
Joined: Wed Mar 08, 2006 7:28 pm
Location: Florida, USA
Contact:

### Re: Searching using slow eval with tactical verification

Hi Matthew,

It's an interesting idea. For the initial bounds of the fast search you could evaluate the current position (say phi = eval_of_current_position). Then search alpha = phi, beta = phi + 1. You'd "stand-pat" if the material only search returns phi. Otherwise you'd do a full quiescent search.
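A sketch of that probe, with `fast_search` and `qsearch` as placeholders for the engine's own routines:

```python
def stand_pat_probe(pos, eval_fn, fast_search, qsearch):
    """Null-window material search around the current eval: if the fast
    search fails low at phi, stand pat; otherwise fall back to QSearch."""
    phi = eval_fn(pos)
    score = fast_search(pos, alpha=phi, beta=phi + 1)
    if score <= phi:          # fast search found no material gain
        return phi            # stand pat on the current evaluation
    return qsearch(pos, phi, phi + 1)
```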

Steve
http://www.chessprogramming.net - Maverick Chess Engine

tpetzke
Posts: 686
Joined: Thu Mar 03, 2011 3:57 pm
Location: Germany
Contact:

### Re: Searching using slow eval with tactical verification

Hi Matthew,

qSearch has a natural end when there are no more winning captures to be played. When you use a normal search, how deep do you want it to search?
How do you ensure that the position you stop at in your fast search is quiet?

Wouldn't the fast search always return a PV with the first-found sequence of moves that doesn't lose material for either side (in the normal case where there is no forced material win)? That might not be a smart sequence, and the leaf position reached may be far below, in quality, what could be reached by a better sequence of non-losing moves.

Positional factors can easily outweigh the value of pieces (advanced passed pawns or king safety), so I think this makes the engine blinder to positional threats.

Thomas...

=======
http://macechess.blogspot.com - iCE Chess Engine

matthewlai

### Re: Searching using slow eval with tactical verification

tpetzke wrote:Hi Matthew,

qSearch has a natural end when there are no more winning captures to be played. When you use a normal search, how deep do you want it to search?
How do you ensure that the position you stop at in your fast search is quiet?

Wouldn't the fast search always return a PV with the first-found sequence of moves that doesn't lose material for either side (in the normal case where there is no forced material win)? That might not be a smart sequence, and the leaf position reached may be far below, in quality, what could be reached by a better sequence of non-losing moves.

Positional factors can easily outweigh the value of pieces (advanced passed pawns or king safety), so I think this makes the engine blinder to positional threats.

Thomas...
Sorry, I should have made it clearer. The tactical search would have a regular QSearch at the end.

Yes, the idea is that in most cases it would return a PV that's even. The point is to look for cases where there is a forced material win. That's why it's called tactical verification: to make sure we don't stop in a position with a forced material win for either side.

It's true that the PV may not be a smart sequence. That's why we take the earliest position in the PV where the eval matches the return value of the search. For positions with no forced material win (that is, most positions), we will just use the root of the tactical search and not go into the PV at all. For positions with a forced material win, we will evaluate the position right after the material win.

matthewlai

### Re: Searching using slow eval with tactical verification

Steve Maughan wrote:Hi Matthew,

It's an interesting idea. For the initial bounds of the fast search you could evaluate the current position (say phi = eval_of_current_position). Then search alpha = phi, beta = phi + 1. You'd "stand-pat" if the material only search returns phi. Otherwise you'd do a full quiescent search.

Steve
That is a nice and clean idea! I'll think about it some more, but my gut feeling is that it will work.

matthewlai

### Re: Searching using slow eval with tactical verification

Tried this in Giraffe. I was able to get it to work, but it's -200 Elo at fast games.

Guess I'll just have to make the neural nets faster!

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 7:17 pm

### Re: Searching using slow eval with tactical verification

matthewlai wrote: *1: For this to be absolutely "correct", the fast evaluation function must satisfy Vfast(P1) >= Vfast(P2) whenever Vslow(P1) > Vslow(P2), for all P1 and P2. The extra equal sign makes all the difference here.
If you find such a Vfast() function, please tell me; I will replace the current eval with it altogether and get a stronger engine.

Indeed, the absolute value of the evaluation is not important at all.

P.S.: Regarding the extra equal sign: a constant Vfast(P) = 0 satisfies the above relation.

matthewlai

### Re: Searching using slow eval with tactical verification

mcostalba wrote:
matthewlai wrote: *1: For this to be absolutely "correct", the fast evaluation function must satisfy Vfast(P1) >= Vfast(P2) whenever Vslow(P1) > Vslow(P2), for all P1 and P2. The extra equal sign makes all the difference here.
If you find such a Vfast() function, please tell me; I will replace the current eval with it altogether and get a stronger engine.

Indeed, the absolute value of the evaluation is not important at all.

P.S.: Regarding the extra equal sign: a constant Vfast(P) = 0 satisfies the above relation.
A discretization of Vslow() would give you a Vfast() satisfying that relation:

Vfast(P) = round(Vslow(P), 100)

But the difficulty is in finding a way to evaluate that faster than Vslow, and with enough "bins" to make running it worthwhile. It won't be stronger if you replace the current eval with it, though, because it has lower resolution.

Vfast(P) = 0 does satisfy the relation, and it would be "correct" in the sense that the search will still give the correct output with respect to Vslow(P). But in that case the search degenerates into the original search (because the root of each tactical verification search would have the same Vfast() as the end of the PV), and we would just be wasting time running the additional search.
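For what it's worth, the rounding trick can be checked mechanically: rounding is a non-decreasing function of its input, so Vslow(P1) > Vslow(P2) can never flip the order of the rounded values (a sanity check, not a practical Vfast):

```python
def v_fast(v_slow_score, bin_size=100):
    """Discretize a slow eval score to the nearest bin_size centipawns.
    Rounding is monotone, so Vslow(P1) > Vslow(P2) implies
    v_fast(P1) >= v_fast(P2) for any bin size."""
    return round(v_slow_score / bin_size) * bin_size
```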

mcostalba

### Re: Searching using slow eval with tactical verification

matthewlai wrote: A discretization of Vslow() would give you a Vfast() satisfying that relation:

Vfast(P) = round(Vslow(P), 100)
A discretization of Vslow() is not Vfast() but VslowRounded()

matthewlai

### Re: Searching using slow eval with tactical verification

mcostalba wrote:
matthewlai wrote: A discretization of Vslow() would give you a Vfast() satisfying that relation:

Vfast(P) = round(Vslow(P), 100)
A discretization of Vslow() is not Vfast() but VslowRounded()
One possible way to get a useful Vfast() is to take the normal eval and reorder it so that the larger terms are computed before the smaller terms. This would imply weird things like doing some positional evaluation before pawn material.

Then you can cut it off at any point where all the remaining smaller terms can't add up to be larger than any of the bigger terms.
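That idea could be sketched as a "lazy" evaluation with an early exit (the term functions and the slack margin are made up for illustration):

```python
def lazy_eval(pos, terms, slack):
    """Evaluate terms in descending order of magnitude; stop once the
    remaining terms cannot shift the total by more than `slack`.
    `terms` is a list of (max_abs_value, term_fn), largest first."""
    total = 0
    remaining = sum(m for m, _ in terms)
    for max_abs, fn in terms:
        remaining -= max_abs
        total += fn(pos)
        if remaining <= slack:   # the smaller terms can no longer matter
            break
    return total
```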

But I can't do that because I have no idea how my eval() works. It's just a bunch of matrices.