
Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 1:35 pm
by jorose
Today I added selective depth to my UCI protocol implementation. The first thing I did after implementing it was to run go infinite from the start position, and to my shock and horror the engine reached insane selective depths, e.g. selective depth 36 at full depth 12:

info depth 12 seldepth 36 time 3290 nodes 7744582 score cp 18 pv d2d4 d7d5 c1f4 c8f5 e2e3

I added code to end the search and print the variation as soon as the engine reached selective depth 36, and got the following final position:

[d]r7/pp6/5k2/7p/P7/8/1P5K/R7 w - - 0 19

I then tracked this back to the following position, which initiated the Q-search, though the Q-search might technically have started up to two plies earlier, especially as I don't think this line was part of the PV.

[d]r2qkbnr/ppB1p1p1/2n2p2/3p3p/P2P4/2N2P2/1Pb1P1PP/R2QKBNR w KQkq - 0 7

When I search SF to depth 12, its selective depth is only 17. How do strong engines avoid such long Q-search variations? Perhaps such long lines are rare enough that it doesn't matter anyway? Am I doing something fundamentally wrong?

Regardless, I thought some of you might find this interesting.

Re: Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 2:41 pm
by elpapa
Is qsearch normally included in seldepth? In my engine I check for the maximum depth (or ply, rather) at the horizon.
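The two conventions are easy to illustrate with a toy sketch (hypothetical code, not from any engine in this thread): `seldepth` is simply the maximum ply ever visited, updated at the top of every node. Updating it in quiescence nodes too gives numbers like the one jorose reported; updating it only in the main search (elpapa's convention) stops at the horizon.

```python
# Toy sketch: seldepth as the maximum ply visited, counting quiescence
# plies too. Moves and positions are stubbed out; only ply bookkeeping
# is real. The depths below are chosen to mimic the reported 12/36 case.

seldepth = 0

def qsearch(ply, qs_plies_left):
    global seldepth
    seldepth = max(seldepth, ply)        # counting quiescence plies as well;
    if qs_plies_left == 0:               # the "horizon only" convention would
        return                           # skip this update inside qsearch
    qsearch(ply + 1, qs_plies_left - 1)  # pretend one capture line is searched

def search(ply, depth):
    global seldepth
    seldepth = max(seldepth, ply)
    if depth == 0:
        qsearch(ply, qs_plies_left=24)   # horizon: drop into quiescence
        return
    search(ply + 1, depth - 1)

search(ply=0, depth=12)
print(seldepth)   # 36: full depth 12 plus 24 quiescence plies
```

With the horizon-only convention the same run would report seldepth 12, which explains why engines using different definitions are not directly comparable.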

Re: Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 3:26 pm
by jorose
Hmm, interesting point. I honestly hadn't considered that you could define it as the maximum depth without quiescence search. I couldn't find it on the CPW, and I don't know SF's code well enough to find how they define it, but your definition seems logical to me.

Since my post I have improved my move ordering slightly (I was using captured piece value minus capturing piece value instead of MVV - LVA), which helped a bit, but it is still ridiculous.
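For readers wondering how the two schemes differ: "victim minus attacker" collapses equal trades to zero, while MVV-LVA sorts by the most valuable victim first and only uses the attacker as a tie-break. A tiny illustration with assumed centipawn values (hypothetical code, not from any engine here):

```python
# Hypothetical illustration: victim-minus-attacker vs. MVV-LVA ordering.
# Captures are (attacker_value, victim_value) pairs in centipawns.

captures = {
    "QxQ": (900, 900),   # equal trade of queens
    "PxR": (100, 500),   # pawn takes rook
}

def score_diff(attacker, victim):
    return victim - attacker                 # the "captured minus capturing" scheme

def score_mvv_lva(attacker, victim):
    return victim * 10 - attacker            # victim dominates, attacker tie-breaks

by_diff = sorted(captures, key=lambda m: -score_diff(*captures[m]))
by_mvv = sorted(captures, key=lambda m: -score_mvv_lva(*captures[m]))

print(by_diff)  # ['PxR', 'QxQ'] : QxQ scores 0, PxR scores 400
print(by_mvv)   # ['QxQ', 'PxR'] : the queen is the more valuable victim
```

Trying the big trade first tends to resolve the tactics sooner and produce earlier cutoffs, which is why MVV-LVA usually shortens quiescence lines.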

Re: Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 4:36 pm
by Maarten Claessens
When I let WaDuuttie do a Q-search in the given position, it shows:

Code:

WaDuuttie> sb r2qkbnr/ppB1p1p1/2n2p2/3p3p/P2P4/2N2P2/1Pb1P1PP/R2QKBNR w KQkq - 0 7

WaDuuttie> quiesce
c7d8 36 (21)
d1c2 208 (6)
c3d5 bad
Quiesce: 208 / 6 (28 nodes)

WaDuuttie>
The Q-search searches only 28 nodes, to a maximum depth of 6. The score of Bxd8 is 36; Qxc2 is better, with a score of 208; and Nxd5 is not investigated.

Re: Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 6:39 pm
by cdani
In Andscacs seldepth includes quiescence.

When I generate captures in quiescence, I generate only recaptures from ply -5, unless there is check. This cuts a lot.

Re: Ridiculous QSearch Depth

Posted: Fri Mar 03, 2017 7:03 pm
by jdart
A properly implemented qsearch will do "stand-pat" pruning: it will cut off many nodes without searching them, because the standing score is already good enough that it does not have to look for captures that might improve it. That and futility pruning cut out many nodes and limit the q-search.

--Jon
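Those two ideas fit in a minimal negamax-style skeleton (hypothetical, self-contained code; `Pos` is a toy stand-in for a real position, and the margin is an assumed tuning value):

```python
# Hypothetical quiescence-search skeleton showing stand-pat and futility
# (delta) pruning. Captures are (captured_value, resulting_position) pairs;
# all scores are in centipawns from the side to move.

FUTILITY_MARGIN = 200  # assumed tuning value

class Pos:
    def __init__(self, eval_cp, captures=()):
        self.eval_cp = eval_cp        # static eval from the side to move
        self.caps = list(captures)
    def evaluate(self):
        return self.eval_cp
    def captures(self):
        return self.caps

def qsearch(pos, alpha, beta):
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return stand_pat              # stand-pat cutoff: fail high, no search
    alpha = max(alpha, stand_pat)
    for captured_value, child in pos.captures():
        # Futility: even winning this piece cannot bring us near alpha.
        if stand_pat + captured_value + FUTILITY_MARGIN <= alpha:
            continue
        score = -qsearch(child, -beta, -alpha)
        if score >= beta:
            return score              # beta cutoff
        alpha = max(alpha, score)
    return alpha

print(qsearch(Pos(100), -10, 50))                   # 100: stand-pat fails high
print(qsearch(Pos(-1000, [(100, Pos(0))]), 0, 50))  # 0: the capture is futile
```

In the second call the side to move is a rook down, so winning a pawn cannot approach alpha and the capture is skipped without generating a subtree.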

Re: Ridiculous QSearch Depth

Posted: Sat Mar 04, 2017 2:48 am
by mjlef
jorose wrote:How do strong engines avoid such long Q-search variations? Perhaps such long lines are rare enough that it doesn't matter anyways? Am I doing something fundamentally wrong?
Typical tricks in search to limit it are:

MVV - LVA ordering
Toss "SEE losing" captures
If the static eval plus the estimated value of the piece to be captured is still well under alpha, toss the move (best to do this before the SEE test, since that is expensive). This is a form of futility.
Limit the search to recaptures only if depth < X
Make sure any "hash table move" you might use fits the conditions you are using at a given depth (for example, if you are only including captures at depth < Y, then do not use the hash move if it is a non-capture).

And there are more, but these are the commonly known ones.
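The first three items on that list can be sketched as a single ordering-and-pruning pass over the capture list (hypothetical code; `see()` and the move fields are invented stand-ins for real engine machinery):

```python
# Hypothetical pipeline for quiescence captures: MVV-LVA ordering, then
# futility before the more expensive SEE test, then tossing SEE losers.

def see(move):
    # Stub static-exchange evaluator: assume the capture wins the victim
    # and loses the attacker iff the target square is defended.
    return move["victim"] - (move["attacker"] if move["defended"] else 0)

def order_and_prune(captures, stand_pat, alpha, margin=200):
    kept = []
    # MVV-LVA ordering: biggest victim first, cheapest attacker tie-breaks.
    for m in sorted(captures, key=lambda m: m["attacker"] - 10 * m["victim"]):
        # Futility first, since it is cheaper than SEE:
        if stand_pat + m["victim"] + margin <= alpha:
            continue
        # Toss SEE-losing captures:
        if see(m) < 0:
            continue
        kept.append(m["name"])
    return kept

caps = [
    {"name": "PxQ", "attacker": 100, "victim": 900, "defended": True},
    {"name": "QxP", "attacker": 900, "victim": 100, "defended": True},  # SEE loser
    {"name": "RxB", "attacker": 500, "victim": 300, "defended": False},
]
print(order_and_prune(caps, stand_pat=-50, alpha=0))   # ['PxQ', 'RxB']
```

Note the order of the tests: the cheap futility check runs before SEE, exactly as suggested above, so losing captures that are also futile never pay the SEE cost.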

Re: Ridiculous QSearch Depth

Posted: Sat Mar 04, 2017 8:21 am
by hgm
I am currently wrestling with QS in Tenjiku Shogi. (With some urgency, because I was audacious enough to enter a bot that does not exist yet in a turn-based tourney with a thinking time of 120 days, and they just started my clock...) The problem here is that the board is large (16x16) and densely packed with pieces (60 pieces + 18 pawns per side), and that each side has 6 sliders that can jump over an arbitrary number of other pieces (moving as Q, 2x R and 3x B). So when these jump into the opponent's camp, they get some 15-25 captures each.

It seems very important to use a move ordering that avoids 'plunder raids', and normal MVV/LVA sorting is not effective enough. It seems essential to prioritize immediate recapture of the plundering piece, even when your own plundering piece has more valuable targets. I suppose the rationalization is that you should somehow weigh the value of the threats a piece poses into the victim value, making it more attractive to capture a Pawn that attacks your Queen than to take an unprotected Rook.

Plunder raids are often an essential part of the PV in Tenjiku Shogi, however: each side keeps gobbling up the highest pieces it can get until one of them captures with check, which the lagging side then cannot match, because he has to evade first (by recapturing the plundering piece), and his own plunderer will then be recaptured before it can make the equalizing capture. Often such a raid is the only way to get above beta, and it would be quite wasteful to search any other captures first in all nodes along the plunder branch.

Plunder raids are often initiated by making a counter threat rather than dealing with a threat against you; the best play is then usually to take as much as you can with the counter-threatened piece before cashing in your original threat. But by capturing a protected piece, the situation where both sides have a valuable and powerful piece under attack persists, so the opponent will then try a kamikaze move too, and you have your plunder raid.

I guess a cheap heuristic (for Tenjiku Shogi) would be to prioritize capturing pieces in your own camp, based on the idea that these probably pose many threats.
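The recapture-priority idea might be expressed as one extra ordering term on top of MVV/LVA (entirely hypothetical scoring; the bonus size is an assumption, chosen only to dominate the MVV/LVA term):

```python
# Hypothetical ordering tweak for 'plunder raid' control: give immediate
# recapture of the opponent's last capturer priority over plain MVV/LVA.

RECAPTURE_BONUS = 10_000   # assumed: large enough to outrank any victim value

def capture_score(victim_value, attacker_value, to_sq, last_capturer_sq):
    score = 10 * victim_value - attacker_value      # plain MVV/LVA
    if to_sq == last_capturer_sq:
        score += RECAPTURE_BONUS                    # kill the plunderer first
    return score

# Recapturing a pawn that just took on g4 outranks grabbing a distant rook.
print(capture_score(100, 300, "g4", "g4") > capture_score(500, 900, "a8", "g4"))
```

This is one crude way to "weigh the threats into the victim value": the piece that just captured is treated as the most dangerous target on the board, regardless of its nominal value.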

Re: Ridiculous QSearch Depth

Posted: Sat Mar 04, 2017 9:00 am
by Joerg Oster
jorose wrote:Hmm, interesting point. I honestly hadn't considered that you could define it as the maximum depth without quiescence search.
AFAIK, this is not done in SF for speed reasons.
But when I added it into qsearch https://github.com/joergoster/Stockfish ... fd22e32ab3
I couldn't measure any noticeable slowdown.

OTOH, I put more emphasis on analysis abilities, and a tiny slowdown of maybe 0.05% doesn't matter at all.