Hi Bob,

bob wrote:
There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves and then say "these will be searched and these are going to be tossed out" -- i.e. the Shannon type-B strategy. I currently do backward pruning (obviously) since that is alpha/beta. I also do forward pruning, where I physically throw out some moves and never search them at all, not even to reduced depth. And then I do the reductions and extensions stuff as well.

Don wrote:
I'm not talking about forward pruning as such; every program has that.

bob wrote:
Some of us are doing what you call "hard pruning" (better known as forward pruning) today as well.

Don wrote:
Agreed.

Dann Corbit wrote:
The goal is just like a human playing chess. After all, what do we do?

Don wrote:
In some sense they are equivalent; the primary issue is how to build a tree that is highly variable in the length of its subtrees, which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

Steve Maughan wrote:
Hi Dann,
I think this is helpful. However, the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

Dann Corbit wrote:
As far as that goes -- some position x has 9 legal moves:
Joe examines the 9 possible moves and decides to extend move 4 by one ply.
Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.
Essentially, they did exactly the same thing via opposite methods.
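Dann's point can be checked with a little arithmetic: Joe's tree and Fred's tree have the same shape relative to their own deepest line. A minimal sketch (the base depth and move numbering here are made up for illustration):

```python
# Hypothetical depths for a position with 9 legal moves.
# Joe extends move 4 by one ply; Fred reduces every move except move 4 by one ply.
base_depth = 6

joe = [base_depth + 1 if m == 4 else base_depth for m in range(1, 10)]
fred = [base_depth if m == 4 else base_depth - 1 for m in range(1, 10)]

# Measured from each tree's deepest line, the two shapes are identical:
def relative(depths):
    return [max(depths) - d for d in depths]

assert relative(joe) == relative(fred)
```

The only real difference is the absolute depth of the whole tree, which is exactly why the reduction framing cannot explode while the extension framing can.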
In contrast, if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).
Cheers,
Steve
An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.
The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.
So when you reduce, you are really just applying a plausible move test to the moves, then throwing them out. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
But some early programs had fixed rules about which moves to throw out and that was an absolute. For example I could construct a program that used see() on all nodes and just threw out every move that superficially looked losing - but my program would never be able to sacrifice a piece legitimately. So typically this kind of thing is done near end nodes or just in quies.
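As a sketch of that idea, here is a toy version of see()-based pruning restricted to near-leaf nodes. The see() below is a stand-in with a made-up move representation, not a real static exchange evaluator, and the depth threshold is arbitrary:

```python
def see(move):
    # Toy stand-in for a static exchange evaluator: positive means the
    # capture superficially wins material, negative means it loses.
    return move["gain"] - move["risk"]

def should_prune(move, depth, horizon=2):
    # Apply the "superficially losing" test only near the leaves (or in quies).
    # Applied at every depth, this test would blind the program to sacrifices.
    near_leaf = depth <= horizon
    return near_leaf and see(move) < 0

qxp = {"gain": 1, "risk": 9}           # queen takes a defended pawn
assert should_prune(qxp, depth=1)      # tossed near the tips
assert not should_prune(qxp, depth=8)  # searched normally farther from the leaves
```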
Today every program has forward pruning, but that is not the same as having fixed rules that are applied blindly at any depth, which is what I am talking about. Such a program is not scalable. Komodo also does this kind of hard pruning (moves thrown out without further consideration), but only near leaf nodes. For example Komodo does not consider non-capture moves in late quies, but those same moves that are pruned now will not be pruned when the search is deeper.
For example you might have a rule to never let a queen capture a pawn that is defended by an unpinned pawn - and the rule would work most of the time. Early programs had such rules and they were blindly applied. It's probably a good rule if your program is never going to do more than 2 full width ply. But such a rule is not good for a program that plays at 3000 strength if it's applied at ANY depth. It would make your program blind to any Q takes defended pawn sacrifice.
The solution of course is progressive pruning. Komodo throws out such moves if they are near leaf nodes, but not near the root.
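A minimal sketch of progressive pruning in that spirit (the function and field names are hypothetical): the same cheap test is applied everywhere, but it only actually prunes when little depth remains, so each deeper iteration automatically reconsiders moves pruned earlier.

```python
def prunable(move, remaining_depth):
    # A move failing some cheap plausibility test is skipped only when
    # little depth remains below it; the cutoff of 2 plies is arbitrary.
    return move["looks_bad"] and remaining_depth <= 2

def searched_moves(moves, remaining_depth):
    return [m for m in moves if not prunable(m, remaining_depth)]

sac = {"name": "QxP", "looks_bad": True}    # superficially losing capture
quiet = {"name": "Nf3", "looks_bad": False}

assert searched_moves([sac, quiet], remaining_depth=1) == [quiet]       # near the leaf
assert searched_moves([sac, quiet], remaining_depth=6) == [sac, quiet]  # near the root
```

The scalability comes from the second assertion: as the iterative search deepens, the same "bad" move migrates out of the prunable region and gets searched after all.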
But I believe you are talking about classic forward pruning, which means you prune moves _before_ they are searched, whereas backward pruning uses results backed up the tree to prune (aka alpha/beta).
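The distinction can be sketched in a few lines of negamax. This is a generic illustration, not any particular engine's code; the tree is passed in as plain functions so the sketch stays self-contained:

```python
def negamax(node, depth, alpha, beta, evaluate, children, forward_prune):
    # Forward pruning discards a move before it is ever searched;
    # backward (alpha/beta) pruning cuts off using backed-up scores.
    if depth == 0 or not children(node):
        return evaluate(node)
    best = -10**9
    for child in children(node):
        if forward_prune(child, depth):   # forward: tossed, never searched
            continue
        score = -negamax(child, depth - 1, -beta, -alpha,
                         evaluate, children, forward_prune)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                 # backward: alpha/beta cutoff
            break
    return best

# Tiny two-leaf example: leaf scores are from the side-to-move's view.
tree = {"a": ["b", "c"], "b": [], "c": []}
values = {"b": -3, "c": -5}
ev, ch = values.get, tree.get

assert negamax("a", 2, -10**9, 10**9, ev, ch, lambda n, d: False) == 5
assert negamax("a", 2, -10**9, 10**9, ev, ch, lambda n, d: n == "c") == 3
```

The second assertion shows the danger the thread keeps circling: forward-pruning "c" changes the root score, whereas a backward cutoff never can.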
For forward pruning, it really doesn't matter whether you throw 'em out near the root or near the tips. Early programs had a tapered search that kept more near the root and pruned more near the tips. In Crafty I believe I am doing the forward pruning stuff in the last 4 plies, but would have to look to be certain, I wrote this code last year, tested it, and moved on to other things. I'm not certain about the details, just that it is there.
I'm sorry but I had accidentally included a bunch of material that I meant to edit out - I meant to keep only the first 2 paragraphs.
Yes, I agree with you completely about forward pruning. And yes most early programs tapered in some way but not all of them did, some very early (and weak) programs had absolute rules about pruning that were not depth based. And of course this was forward pruning - but if it's not tapered in some way it's wrong.