Searching fail highs shallower..

silentshark
Posts: 327
Joined: Sat Mar 27, 2010 7:15 pm

Searching fail highs shallower..

Post by silentshark »

Spotted this in the post announcing the new version of Ethereal:

11.96 Resolve fail-highs with progressively shallower searches

What's the thinking behind this? Anyone else using this with success?
Dann Corbit
Posts: 12537
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Searching fail highs shallower..

Post by Dann Corbit »

I think the idea comes from Stockfish.
When you see a fail high, the subsequent re-searches are often wider but shallower.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
silentshark
Posts: 327
Joined: Sat Mar 27, 2010 7:15 pm

Re: Searching fail highs shallower..

Post by silentshark »

Yeah, just looking at it, I think it's a way to increase average depth slightly, and that's where the Elo comes from.
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Searching fail highs shallower..

Post by jdart »

Note this technique is usually applied only at the root, although I suppose it could be done at interior nodes.

I have implemented this in Arasan and it was an Elo gainer. I also tried limiting the amount of reduction (right now it reduces one ply for every fail-high iteration), but that gave inconsistent results depending on time control and test conditions.
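The root-level scheme described above might be sketched as follows. This is a toy illustration, not code from Arasan or Ethereal: the stub search, window step, and score are all invented, and a real engine would call its actual alpha-beta search where the stub stands.

```python
INF = 32000

def stub_search(depth, alpha, beta):
    # Stand-in for a real alpha-beta search. Pretend the position's true
    # score is 120 and return fail-hard against the given window.
    true_score = 120
    if true_score >= beta:
        return beta            # fail high
    if true_score <= alpha:
        return alpha           # fail low
    return true_score

def resolve_root_fail_high(target_depth, alpha, beta):
    """Widen beta and reduce the depth one ply per fail-high iteration."""
    fail_highs = 0
    while True:
        depth = max(target_depth - fail_highs, 1)
        score = stub_search(depth, alpha, beta)
        if score < beta:
            return score, fail_highs
        fail_highs += 1                        # fail high: widen, go shallower
        beta = min(beta + 64 * fail_highs, INF)

score, researches = resolve_root_fail_high(target_depth=20, alpha=0, beta=100)
```

Each fail-high re-search is both wider (larger beta) and shallower (one ply less), so the bound gets resolved on progressively cheaper searches before the full-depth search resumes.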
mar
Posts: 2554
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: Searching fail highs shallower..

Post by mar »

Dann Corbit wrote: Mon Mar 02, 2020 11:22 pm I think the idea comes from Stockfish.
When you see a fail high, often the subsequent searches are wider but more shallow.
Actually I think this idea comes from Houdart.
Martin Sedlak
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Searching fail highs shallower..

Post by Ovyron »

mar wrote: Tue Mar 03, 2020 3:21 am
Dann Corbit wrote: Mon Mar 02, 2020 11:22 pm I think the idea comes from Stockfish.
When you see a fail high, often the subsequent searches are wider but more shallow.
Actually I think this idea comes from Houdart.
Actually, I think this idea comes from me (though who knows: Houdart implemented it in Houdini after I mentioned it on the Rybka Forum, but he could have thought of it independently without reading my posts, so it may be just a coincidence).
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Searching fail highs shallower..

Post by bob »

I think it has been around a LOT longer than that. I've tried it myself, though without any improvement. The idea for fail highs and fail lows at the root is the same: on a fail high, move ordering is broken, so starting an iteration (or two, or three) shallower should speed up the search by improving move ordering. But you had BETTER keep the helpful hash information so that you don't get into a cycle.

The opposite side is what you do when you fail LOW on the first root move. You can continue searching, but commonly all moves fail low, so you have wasted time and still have to lower alpha and re-search everything. John Stanback (years ago) suggested that when you fail low on the first move, you back up a few plies and start over. The hash table should make the first move fail low quickly at each iteration, thanks to its draft, so the search zeroes in on another move a little quicker (the point of iterative deepening). Stanback might have been the first person I heard mention either of these, but I am not 100% certain.
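The restart Bob attributes to Stanback might look roughly like this. The trigger depth, restart distance, and "hash table" are all invented to show the control flow, including why keeping the hash information prevents the cycle he warns about.

```python
RESTART_PLIES = 3       # how far to back up after a first-move fail low

def stub_root_search(depth, hash_table):
    # Invented behavior: the first move's score collapses at depth 10.
    # The first visit fails low; once depth 10 is "in the hash", the
    # re-visit resolves instantly instead of failing low again.
    if depth == 10 and depth not in hash_table:
        hash_table.add(depth)
        return "fail_low"
    return "ok"

def iterative_deepening(max_depth):
    hash_table = set()
    depth, restarts = 1, 0
    while depth <= max_depth:
        if stub_root_search(depth, hash_table) == "fail_low":
            depth = max(depth - RESTART_PLIES, 1)   # back up a few plies
            restarts += 1
            continue
        depth += 1
    return restarts

restarts = iterative_deepening(max_depth=12)
```

Without the retained hash information, the shallower restart would hit the same fail low at depth 10 again and loop; with it, the re-visit passes through and the search deepens past the problem.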
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Searching fail highs shallower..

Post by jstanback »

I have never tried reducing the depth in the ID loop after a fail-high at the root. I guess I should try this.

For fail-low at the root I've tried lots of stuff. What I'm doing now is: when the first move fails low, widen the window a few times to see whether the score is within about 1-2 pawns of the score from the previous iteration. If it's worse than that, I quit trying to resolve the score for the first move and search the remaining moves. If all moves fail low at the root, then in the ID loop I re-search that iteration, but this time I try to resolve the score/PV for the first move (if there is enough time).

John
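The first half of John's scheme (widen a few times, then give up) might be sketched like this. The margins, window steps, and stub search are made up for illustration; a real engine would search with its actual aspiration window rather than this zero-width one.

```python
PAWN = 100

def stub_search_first_move(alpha, beta):
    # Stand-in search: the first move's true score has dropped to -40,
    # returned fail-hard against the window.
    true_score = -40
    if true_score <= alpha:
        return alpha               # fail low
    if true_score >= beta:
        return beta                # fail high
    return true_score

def try_resolve_first_move(prev_score):
    """Widen alpha a few times; give up once we are ~1.5 pawns below
    the previous iteration's score."""
    floor = prev_score - 3 * PAWN // 2
    alpha, step = prev_score - 25, 25
    while True:
        score = stub_search_first_move(alpha, alpha + 1)
        if score > alpha:
            return score           # resolved within the widened window
        step *= 2
        alpha -= step
        if alpha < floor:
            return None            # unresolved: search the other moves

result = try_resolve_first_move(prev_score=150)
```

Here the move's true score is nearly two pawns below the previous iteration, so the widening loop bottoms out at the floor and returns None, which is the signal to abandon the first move and spend the time on the rest of the move list.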
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Searching fail highs shallower..

Post by bob »

I was really thinking about the fail-low side. There was a long discussion back in the r.g.c.c days where I thought you had mentioned the reduced-depth re-search. Didn't remember the details as I have always taken the approach to resolve the first move, period. I wanted this info so I had some idea of how much extra time I was willing to invest in looking for better moves. Helps to know how bad the first move actually is.
jstanback
Posts: 130
Joined: Fri Jun 17, 2016 4:14 pm
Location: Colorado, USA
Full name: John Stanback

Re: Searching fail highs shallower..

Post by jstanback »

I probably have tried doing a reduced-depth search on fail-low nodes. But my current version just stops searching the first move once it has failed low at a score of about 1.5 pawns below the target. It then sets a "panic" time of 20-25% (I forget) of the time remaining on the clock and starts searching the other moves. If none of them are better than the unresolved score for the first move, that iteration is re-searched and the score for the first move is resolved, probably with a bunch of additional fail lows. I did this because in some positions the fail-low move took a long time to resolve while another, better move was available.
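The decision flow in this last post might look roughly like the following. The margin, the 20% figure (one of the values quoted above), and the pre-computed scores for the other moves are illustrative stand-ins for real searches.

```python
FAIL_LOW_MARGIN = 150              # ~1.5 pawns below the target score

def handle_first_move_fail_low(prev_score, other_move_scores, remaining_ms):
    # Stop searching the failing first move; keep only its upper bound.
    unresolved_bound = prev_score - FAIL_LOW_MARGIN
    panic_ms = int(remaining_ms * 0.20)    # extra budget for the other moves
    best_other = max(other_move_scores, default=-32000)
    if best_other > unresolved_bound:
        return "switch_to_better_move", panic_ms
    return "research_and_resolve_first_move", panic_ms

action, budget = handle_first_move_fail_low(
    prev_score=100, other_move_scores=[-80, -20, 10], remaining_ms=60_000)
```

The point of the panic budget is that the engine only pays the extra time when the first move has already failed badly, and only has to go back and fully resolve that move when no alternative beats its unresolved bound.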