[d]r4r1k/2p3bp/6p1/8/1PQP2P1/2P5/1q2P1K1/r1BR3R b - - 1 36
(Yes, there are three rooks: a pawn has just underpromoted to a ROOK instead of a QUEEN... this is another issue I've seen many times and am still investigating.) Here my engine played Qc2 after a few centiseconds at ply 11, allowing the opponent to play Rxh7+!. Every move except Qc2 seems to keep the advantage, so I tried to understand why it played that move. If I let the engine analyze this position, it discovers all the tactics behind it pretty quickly (depth 3):
Code:
3 +4.14 1198 0:00.00 b2c2 h1h7 h8h7 d1h1 g7h6 c4c7 h7g8 c1h6 c2e2 g2g3 a1h1 h6f8 g8f8
I was playing a bit with LMR and realized that once you choose an arbitrarily large reduction R for LMR (say R=10, just to take an extreme value), your engine will be weaker because it becomes blind to tactics (and never reduce a capture, by the way). This has a direct implication for the TT: every entry stored within R plies of the horizon can be a weak entry if it was "properly" reduced. Since LMR is recursive, this matters most when the bad move lies in the rightmost part of the search tree.
So I'm supposing that Qc2 was reduced so much in earlier searches that it dropped straight into qsearch, and that when my engine later hit this position it trusted a score from a "blind" TT entry. Is this possible?
What I really cannot explain is why, when the engine hit this position at the root, it still preferred that move over the alternatives. Qc2 has a flaw (but it lies beyond the horizon, so the engine of course cannot see it), while the other moves have better scores within the horizon. Why did the engine not see that?