Stockfish search

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Tord Romstad
Posts: 1808
Joined: Wed Mar 08, 2006 8:19 pm
Location: Oslo, Norway

Re: Stockfish search

Post by Tord Romstad » Thu Oct 31, 2013 10:19 am

IQ wrote:In this case I tend to agree with bob and I don't quite buy the "easier to read and reason about the code" part. In this case this would be a completely modular piece of code with minimal impact on other parts. With a well placed comment it should not confuse anybody, not even amateurs.
You underestimate the stupidity of people like myself. :)

It's not mainly about confusion, but about noise. Even if the code for managing a PV array is easy to understand, it still takes up space in my editor window or my source code printouts. With a well placed comment, it takes up even more space. Reducing the amount of large scale program logic I can see without scrolling increases the mental effort of understanding the code.
Especially as the benefits for debugging totally outweigh the cost. This in itself is a huge "simplification", just of another kind. And it also enhances the readability of the program's output - another form of readability and enhanced functionality for analysis.
As I said, it's a tradeoff. I agree that there are benefits and a cost, but to me, the cost marginally outweighs the benefits in this case. I've very rarely found the PV useful for debugging except in the very early phases of development (when I did use a PV array), and as a user, the last few moves of the PV are rubbish anyway, regardless of how it is built. The initial portion of the PV, which is the interesting part, will almost always be identical.

lucasart
Posts: 3044
Joined: Mon May 31, 2010 11:29 am
Full name: lucasart

Re: Stockfish search

Post by lucasart » Thu Oct 31, 2013 10:21 am

mcostalba wrote:
IQ wrote: In this case the old proverb might apply: "As simple as possible, but as complex as needed"
It is not with proverbs that SF development will be influenced.

I reply to you, but it is also a reply to many other posts. As an open source project, everybody is free to submit their patches for review.

OTOH this has consequences: arguments like the following

- feature requests
- wishes
- I think that...
- If you don't do this SF will remain a crap
- If you don't do this you are a bunch of morons
- etc.

will have (near) zero probability of being evaluated. So you may think that this biases SF development in favour of developers willing to put their typing fingers where their mouth is... yes, you are right. It is exactly that.

Code talks, words walk.
+1

"Talk is cheap. Show me the code." -- Linus Torvalds.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.

Tord Romstad
Posts: 1808
Joined: Wed Mar 08, 2006 8:19 pm
Location: Oslo, Norway

Re: Stockfish search

Post by Tord Romstad » Thu Oct 31, 2013 10:21 am

hgm wrote:Whether it is better for Stockfish to get PV from hash or from a tri-angular array is an interesting discussion, but it does not resolve my astonishment. Even if Stockfish extracts the PV from the hash table, how can it extract a move from a node that failed low? Does Stockfish also store hash moves in upper-bound entries? If so, how does it decide which move to put there? The one with the highest upper bound?
SF does not store hash moves at fail low nodes, but if there already is a hash move for this position (from a previous search where we got a score above alpha), that move is kept.

bob
Posts: 20641
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Stockfish search

Post by bob » Thu Oct 31, 2013 3:42 pm

Tord Romstad wrote:
Milos wrote:Returning PV from hash is pretty embarassing for top engine pretending to be useful for analysis.
Hardly unique to Stockfish, though. I know some (all?) Rybka versions also build PVs from the transposition table, and I'm sure there are others. Most users put way too much emphasis on the PV, and particularly on the final position of the PV, anyway. The last few moves of a PV (the ones where a real PV and a transposition table PV differ) are very low quality, and the position at the end of the PV will never appear on the board. And it's just a single line out of a giant search tree, and except in exceptional positions with a clear forced win, you can't understand the analysis without playing out the suggested moves and experimenting with alternatives along the way.
Thanks to this and ridiculously scaled scores
I've never understood this argument. There is no God-given evaluation scale, and the only objectively correct evaluation function is a function that returns -infinity, 0 or +infinity, depending on whether the position on the board is lost, drawn or won. In practice, it usually isn't possible to determine the game theoretical value of a position, so the evaluation function just assigns higher numbers to positions where we believe we have a better chance of scoring a good result. The scale is completely arbitrary, as long as it's reasonably internally consistent.

The numbers that are printed in the analysis are different from the internal scores, and they are, as you say, scaled. The UCI protocol asks for the score to be displayed as "centipawns", so we divide the internal score by the base value of a middle game pawn before sending it to the user interface. Dividing it by some arbitrary other constant because some users like to see bigger or smaller numbers makes no sense. Anyone is free to mentally multiply the scores by whatever number they want anyway.
(and a couple of other things, like the search pruning ridiculously and reaching depth 100 in relatively standard endings) SF will never be accepted as a serious analysis tool.
Actually, there are top 10 GMs who consider SF very valuable for analysis.
I agree that the quality of moves in the PV diminishes as you get further from the root. But it is WAY nice to be able to see the score, which has to be right, and the PV that leads to the node where that score was actually produced, so that you can look at the root position, the correct path, and determine what was overlooked/mis-evaluated/over-extended/etc. I've found that this helps me fairly frequently, because I can see exactly where the search went to pull that score out of the ether, and I can step down the PV to see what was missed and why. With these random/incomplete PVs, it is harder. That's one reason why I jumped on the short PV due to hash hits problem, because that was hiding too much information that was useful for debugging.

And the idea of not allowing exact hash hits along the PV was simply something I would NEVER consider, as a solution to fixing the hash hit short PV problem...

IQ
Posts: 161
Joined: Thu Dec 17, 2009 9:46 am

Re: Stockfish search

Post by IQ » Thu Oct 31, 2013 9:03 pm

I was under the impression that although "the last few moves of the PV are rubbish", it leads exactly to the position from which the eval score was backpropagated. That seems pretty valuable. I understand the tradeoffs you are talking about, even though I do not share your weighing of them. One thing is certain though: you need an editor with code folding! :)
Tord Romstad wrote:
IQ wrote:In this case I tend to agree with bob and I don't quite buy the "easier to read and reason about the code" part. In this case this would be a completely modular piece of code with minimal impact on other parts. With a well placed comment it should not confuse anybody, not even amateurs.
You underestimate the stupidity of people like myself. :)

It's not mainly about confusion, but about noise. Even if the code for managing a PV array is easy to understand, it still takes up space in my editor window or my source code printouts. With a well placed comment, it takes up even more space. Reducing the amount of large scale program logic I can see without scrolling increases the mental effort of understanding the code.
Especially as the benefits for debugging totally outweigh the cost. This in itself is a huge "simplification", just of another kind. And it also enhances the readability of the program's output - another form of readability and enhanced functionality for analysis.
As I said, it's a tradeoff. I agree that there are benefits and a cost, but to me, the cost marginally outweighs the benefits in this case. I've very rarely found the PV useful for debugging except in the very early phases of development (when I did use a PV array), and as a user, the last few moves of the PV are rubbish anyway, regardless of how it is built. The initial portion of the PV, which is the interesting part, will almost always be identical.
:)

mar
Posts: 2010
Joined: Fri Nov 26, 2010 1:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: Stockfish search

Post by mar » Thu Oct 31, 2013 9:18 pm

Well, I don't think that using a real PV is more complicated than extracting the PV from the hash table,
as you don't have to check hash moves for legality (I always check hash moves for legality, so YMMV).
The only drawback is that you have to copy sub PVs in PV nodes but I couldn't measure a regression.
Plus of course it costs some additional memory but that's negligible.
In fact, I switched to the triangular array instead of the hash table when I saw some rare nonsensical checkmates at the end of the PV.

jdart
Posts: 3837
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

Re: Stockfish search

Post by jdart » Mon Nov 04, 2013 1:54 pm

I store a move from IID if there is one, even at fail-low nodes. Effectively this is considered the hash move.

--Jon

bob
Posts: 20641
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Stockfish search

Post by bob » Sat Nov 09, 2013 5:12 pm

jdart wrote:I store a move from IID if there is one, even at fail-low nodes. Effectively this is considered the hash move.

--Jon
Why do an IID search at a fail-low node? By definition there is no recognizable best move there...
