Cache over-writing and PV's


bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Cache over-writing and PV's

Post by bob »

mvk wrote:There is no consensus on this. See for example http://www.talkchess.com/forum/viewtopi ... 11&t=38408. A person skilled in the art will recognize when the qualifier "expected" is being omitted.
That definition, which came from the Knuth/Moore paper, is not very good. It assumes a perfectly ordered tree, which doesn't happen in real engines. A more generic approach (for those using PVS) seems to be: any node where beta - alpha is greater than 1. But even that doesn't really work all the time either.
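
In code, that heuristic is trivial. A minimal sketch, assuming a PVS-style search (the function name is made up):

Code:

/* Classify a node as an "expected PV node" by its window width.
 * In PVS, only nodes searched with an open window can produce an
 * exact score; zero-window nodes can only fail high or low. */
static int expected_pv_node(int alpha, int beta)
{
    return beta - alpha > 1;
}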
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Cache over-writing and PV's

Post by bob »

syzygy wrote:
mvk wrote:Onno's definition is precise and useful in the context of PVS enhanced alpha-beta.

If someone wants to say "Type-1 node by Knuth's definition in the context of plain alpha-beta", they can say that :-)
Onno's definition is indeed very precise and useful.

Knuth's definition of type 1/2/3 nodes obviously served Knuth's purpose, which was to prove something about the alpha-beta algorithm, just fine. But, unless one applies Onno's common sense approach, Knuth's definition is completely useless for describing an actual search algorithm. For starters, Knuth is only classifying the nodes of a minimal tree and chess programs can't assume they are only searching the minimal tree.

Marsland and Popowich, who introduced the terms PV, CUT and ALL nodes, are discussing an actual search algorithm, so are in fact applying common sense and talking about "expected" type 1/2/3 nodes. When they talk about "PV splitting" they don't mean to limit splitting to type 1 nodes of the minimal tree which is only known after the search is done.
Actually PV splitting meant EXACTLY that. Assuming you search left to right, PV split ONLY splits along the left-most branch of the tree, which both assumes perfect move ordering and rests on the YBW concept, since the first branch from each node down the left-hand side is searched sequentially and the rest are searched in parallel. Several of us used this in the late 70's and early 80's, but it doesn't scale well at all beyond 4 processors...
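
The control flow looks something like this — just a sketch, not a complete engine; Position, Move, evaluate(), next_move(), make(), unmake() and the parallel helper split_search() are all placeholder names:

Code:

/* PVSplit sketch: split ONLY along the left-most (expected PV)
 * branch, and only AFTER the first move there has been searched
 * sequentially (the YBW idea). */
int pv_split(Position *pos, int alpha, int beta, int depth)
{
    if (depth == 0)
        return evaluate(pos);

    Move first = next_move(pos);        /* best-ordered move          */
    make(pos, first);
    /* Recurse down the left-most branch: still an expected PV node. */
    int score = -pv_split(pos, -beta, -alpha, depth - 1);
    unmake(pos, first);
    if (score >= beta)
        return score;                   /* ordering was imperfect     */
    if (score > alpha)
        alpha = score;

    /* The remaining moves of this node are handed to the helper
     * processors and searched in parallel with a normal sequential
     * alpha-beta; only the left-most path ever splits. */
    return split_search(pos, alpha, beta, depth);
}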
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Cache over-writing and PV's

Post by bob »

syzygy wrote:
bob wrote:
syzygy wrote:
bob wrote:The thread has digressed as most here do.
I replied directly to what the OP wrote. I quoted the relevant parts. If you reflexively reply without reading what you are replying to, that is not my problem.
Sorry, but I replied in a DIFFERENT part of the thread discussing EXACT hash hits at PV nodes and short PVs.
Stop the trolling nonsense. When you reply to my comments, then that's not a different part of the thread. If you don't know what you are replying to then just don't reply! You do not HAVE to reply to each and every comment that is posted.
Here is where _I_ came in:

Yes... That's also the case of the OP, where the PV was badly truncated at the 4th half-move while the score says it's mate in 24 moves, isn't it? Please try again...

It seems that you have time (and answers) to reply to other posts, but you still fail to answer my simple question... Oh wait... You don't have an answer... Because you cannot have one... Well, I will live with that... And so will all the other people of this forum who are reading these words and seeing you in such an embarrassing situation...

Bye bye...


There is a solution that has been in Crafty for quite a while now. It works well and has no measurable cost, which makes it the best of both worlds. I've written a short paper describing it but have not taken the time to actually finish it up. Probably should tackle that pretty soon.

Those that think bogus PVs are OK are entitled to their opinion. But you are correct that it is useful to show a real PV, and NOT just for the users' sake. The "developer" can really take advantage of an accurate PV as well, to help find bugs.
So feel free to "try again". Short PVs were what I addressed, after HGM had mentioned Crafty's path hash idea a couple of times... As always, you seem to show up just to argue, not to pass on actual useful information or anything...
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Cache over-writing and PV's

Post by syzygy »

bob wrote:Here is where _I_ came in:
I never replied to _that_ post, so you are confused.

You should learn to read what you are replying to before reflexively replying.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Cache over-writing and PV's

Post by syzygy »

bob wrote:
syzygy wrote:
mvk wrote:Onno's definition is precise and useful in the context of PVS enhanced alpha-beta.

If someone wants to say "Type-1 node by Knuth's definition in the context of plain alpha-beta", they can say that :-)
Onno's definition is indeed very precise and useful.

Knuth's definition of type 1/2/3 nodes obviously served Knuth's purpose, which was to prove something about the alpha-beta algorithm, just fine. But, unless one applies Onno's common sense approach, Knuth's definition is completely useless for describing an actual search algorithm. For starters, Knuth is only classifying the nodes of a minimal tree and chess programs can't assume they are only searching the minimal tree.

Marsland and Popowich, who introduced the terms PV, CUT and ALL nodes, are discussing an actual search algorithm, so are in fact applying common sense and talking about "expected" type 1/2/3 nodes. When they talk about "PV splitting" they don't mean to limit splitting to type 1 nodes of the minimal tree which is only known after the search is done.
Actually PV splitting meant EXACTLY that. Assuming you search left to right, PV split ONLY splits along the left-most branch of the tree, which both assumes perfect move ordering and rests on the YBW concept, since the first branch from each node down the left-hand side is searched sequentially and the rest are searched in parallel. Several of us used this in the late 70's and early 80's, but it doesn't scale well at all beyond 4 processors...
The point is that, in HGM's terminology, a PV node (say a node with beta > alpha+1) that fails high is not a PV node. It is/was only an "expected PV node". But PVSplit obviously still splits in such nodes, and all papers refer to them as "PV nodes". Real trees are not perfectly ordered. PVSplit is/was used to search real trees. Likewise, it would have been silly if your DTS paper had consistently referred to "expected" PV, CUT, ALL nodes.

Before you start protesting something, I'll just note we are likely in complete agreement on this point.
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Cache over-writing and PV's

Post by Evert »

hgm wrote:In contrast to what you seem to think, the ability to meaningfully exchange information on this forum is essentially dependent on people attaching the same meaning to the same words.

So it was important to set the record straight about the meaning of PV node. A node isn't a PV node because some programs happen to search it with a routine called Search_pv, any more than it is a "D-node" because micro-Max happens to search it with a routine called D().

This is not a point that is open to discussion, just a statement of fact. But it seems hard for you to leave it at that.
To be fair, whether something is an actual PV node or not is not something you know until after the search is done, so as a concept it is pretty useless for making any sort of decision during the search.

What is useful is to base the decision on whether you want an exact score for this node, or just a bound. In the first case, beta > alpha+1, otherwise beta=alpha+1. Obviously PV nodes are a subset of the first.

As for TT cut-offs, assuming the entry has sufficient depth, I always take the cut if the bounds tell me the score is reliably outside [alpha, beta] (i.e., upper bound or exact < alpha, lower bound or exact > beta). Generally I take the score for exact entries that fall within the window, but I've experimented with not taking the cut (to get a full PV). At some point I'll do this properly (using a Crafty-style path hash).
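
In code, that rule looks something like the following sketch (the entry layout and names are made up, not from any particular engine):

Code:

#include <stdbool.h>

enum Bound { BOUND_EXACT, BOUND_LOWER, BOUND_UPPER };

typedef struct {
    int score, depth;
    enum Bound bound;
} TTEntry;

/* Take the cut when the stored bound proves the score lies outside
 * [alpha, beta]; for exact in-window scores the cut is optional
 * (skip it, at some search cost, to keep a full PV). */
bool tt_cutoff(const TTEntry *e, int alpha, int beta, int depth,
               int *score, bool keep_pv)
{
    if (e->depth < depth)
        return false;                    /* draft too shallow  */
    if (e->bound != BOUND_LOWER && e->score < alpha) {
        *score = e->score;               /* proven fail low    */
        return true;
    }
    if (e->bound != BOUND_UPPER && e->score > beta) {
        *score = e->score;               /* proven fail high   */
        return true;
    }
    if (e->bound == BOUND_EXACT && !keep_pv) {
        *score = e->score;               /* exact, in window   */
        return true;
    }
    return false;                        /* keep searching     */
}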
Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Cache over-writing and PV's

Post by Rein Halbersma »

Evert wrote: As for TT cut-offs, assuming the entry has sufficient depth, I always take the cut if the bounds tell me the score is reliably outside [alpha, beta] (i.e., upper bound or exact < alpha, lower bound or exact > beta).
Yes, that is exactly how I am doing it.
Evert wrote: Generally I take the score for exact entries that fall within the window, but I've experimented with not taking the cut (to get a full PV). At some point I'll do this properly (using a Crafty-style path hash).
My current code does not take the cut on hash hits with an exact score that falls within the alpha-beta window, precisely in order to get a full PV.
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Cache over-writing and PV's

Post by hgm »

Evert wrote:To be fair, whether something is an actual PV node or not is not something you know until after the search is done,
Or after you have found its score with a suitable bound type in the TT!

So when you find a lower bound above beta or an upper bound below alpha, you know it is not a PV node to the depth recorded in the TT, provided that depth is sufficient for the requested search. So it would be madness to search at lower depth and take the same cutoff after that, and even more madness to search at lower depth in the hope that the result will be sufficiently wrong to produce the opposite fail, or to make it a PV node after all. I don't think anyone does that, really.

And what they do is known as 'not taking hash cutoffs in PV nodes'. The price is that you lose the ability to look over the horizon by grafting.

That is really all that I have been saying, and why the professional trolls jump on it beats me...
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Cache over-writing and PV's

Post by Evert »

hgm wrote: And what they do is known as 'not taking hash cutoffs in PV nodes'. And the price is that you lose the possibility to look over the horizon by grafting.
I once did the following: after I returned from the search (with my triangular PV), I extended the PV with whatever the hash table was able to graft onto it. This is something like a poor man's version of Bob's path hash.

It worked OK, but it doesn't really solve the problem it's meant to alleviate, so I eventually got rid of it.
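
A sketch of that grafting step, with hypothetical names (tt_probe(), make(), is_legal(), MAX_PLY, and the TTEntry layout); a real version would also need a repetition guard, since a cycle in the table would otherwise extend the PV forever:

Code:

/* After the root search: replay the triangular PV to its end, then
 * keep appending best moves from EXACT hash entries. */
void graft_pv(Position pos /* by value: scratch copy */,
              Move *pv, int *pv_len)
{
    for (int i = 0; i < *pv_len; i++)
        make(&pos, pv[i]);               /* replay the real PV  */

    const TTEntry *e;
    while (*pv_len < MAX_PLY
           && (e = tt_probe(&pos)) != NULL
           && e->bound == BOUND_EXACT
           && is_legal(&pos, e->move)) {
        pv[(*pv_len)++] = e->move;       /* graft one more move */
        make(&pos, e->move);
    }
}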
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Cache over-writing and PV's

Post by bob »

Evert wrote:
hgm wrote: And what they do is known as 'not taking hash cutoffs in PV nodes'. And the price is that you lose the possibility to look over the horizon by grafting.
I once did the following: after I returned from the search (with my triangular PV), I extended the PV with whatever the hash table was able to graft onto it. This is something like a poor man's version of Bob's path hash.

It worked OK, but it doesn't really solve the problem it's meant to alleviate, so I eventually got rid of it.
The main point to me is that my "hash path" idea has no measurable cost, is quite simple to implement, and it can solve the problem, period. I still get short PVs because I have a max path length of 128 moves, and with this grafting one can easily blow past that in endgames. I'm not going to increase it further because I don't want to have a 4GB minimum memory size to run the thing. :)

But the idea really is simple to implement and has no measurable performance cost. Ergo, why do anything else? I wrote this back when HGM, I, and others were talking about the issue and the REALLY ugly solution of "no EXACT matches on PV nodes", which really makes no sense to me at all. It's sort of like burning your hand on a hot stove, then reaching the same position a day later and deciding to try it again.
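
For anyone who wants to try the idea, a minimal sketch with made-up names (this is NOT Crafty's actual code): alongside each EXACT store at a PV node, save the remaining PV in a small side table keyed by the same signature, and splice it back in when an EXACT hit later terminates the search at a PV node.

Code:

#include <stdint.h>
#include <string.h>

#define PATH_MAX_LEN 128            /* the 128-move cap mentioned above */

typedef uint32_t Move;              /* hypothetical move encoding       */

typedef struct {
    uint64_t key;                   /* position signature               */
    int      len;
    Move     moves[PATH_MAX_LEN];
} PathEntry;

/* Called when storing an EXACT score at a PV node. */
void path_store(PathEntry *table, size_t size, uint64_t key,
                const Move *pv, int len)
{
    PathEntry *e = &table[key % size];
    e->key = key;
    e->len = len < PATH_MAX_LEN ? len : PATH_MAX_LEN;
    memcpy(e->moves, pv, (size_t)e->len * sizeof(Move));
}

/* Called on an EXACT hash cutoff at a PV node; returns 1 and fills
 * in the saved tail of the PV when one is available. */
int path_probe(const PathEntry *table, size_t size, uint64_t key,
               Move *pv_out, int *len_out)
{
    const PathEntry *e = &table[key % size];
    if (e->key != key)
        return 0;                   /* no path stored for this node     */
    memcpy(pv_out, e->moves, (size_t)e->len * sizeof(Move));
    *len_out = e->len;
    return 1;
}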