## Botvinnik Markov revisited

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

### Re: Botvinnik Markov revisited

bob wrote:
Don wrote:
bob wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent; the primary issue is how to build a tree that is highly variable in the length of its subtrees, which is what makes a program play incredibly strong. How this is done is not relevant as long as it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
Agreed.

I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.

The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.

So when you reduce, you are really just applying a plausible-move test and throwing out the moves that fail it. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
Some of us are doing what you call "hard pruning" (better known as forward-pruning) today as well.
I'm not talking about forward pruning as such; every program has that.

But some early programs had fixed rules about which moves to throw out, and that was absolute. For example, I could construct a program that used see() on all nodes and just threw out every move that superficially looked losing - but that program would never be able to sacrifice a piece legitimately. So typically this kind of thing is done near end nodes or just in quies.

Modern programs still prune (or give moves much less consideration), but only near leaf nodes. For example, Komodo does not consider non-capture moves in late quies, but those same moves that are pruned now will not be pruned when the search is deeper.

Today every program has forward pruning, but that is not the same as having fixed rules that are applied at any depth - which is what I am talking about. Such a program is not scalable.

For example you might have a rule to never let a queen capture a pawn that is defended by an unpinned pawn - and the rule would work most of the time. Early programs had such rules and they were blindly applied. It's probably a good rule if your program is never going to do more than 2 full width ply. But such a rule is not good for a program that plays at 3000 strength if it's applied at ANY depth. It would make your program blind to any Q takes defended pawn sacrifice.

The solution of course is progressive pruning. Komodo throws out such moves if they are near leaf nodes, but not near the root.
There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves and then say "these will be searched and these are going to be tossed out" - i.e., the Shannon type-B strategy. I currently do backward pruning (obviously), since that is alpha/beta; I also do forward pruning, where I physically throw out some moves and never search them at all, not even to reduced depth. And then I do the reductions and extensions stuff as well.

But I believe you are talking about classic forward pruning, which means you prune moves _before_ they are searched, where backward pruning uses results backed up the tree to prune (aka alpha/beta).

For forward pruning, it really doesn't matter whether you throw 'em out near the root or near the tips. Early programs had a tapered search that kept more near the root and pruned more near the tips. In Crafty I believe I am doing the forward pruning stuff in the last 4 plies, but would have to look to be certain, I wrote this code last year, tested it, and moved on to other things. I'm not certain about the details, just that it is there.
Hi Bob,

I'm sorry but I had accidentally included a bunch of material that I meant to edit out - I meant to keep only the first 2 paragraphs.

Yes, I agree with you completely about forward pruning. And yes, most early programs tapered in some way, but not all of them did; some very early (and weak) programs had absolute rules about pruning that were not depth-based. That too was forward pruning - but if it's not tapered in some way, it's wrong.
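The tapered, depth-aware pruning described above can be sketched with a toy rule. This is a hypothetical illustration, not Komodo's or Crafty's actual code: `see_score` stands in for a static-exchange-evaluation result, and the threshold is made up.

```c
#include <stdbool.h>

/* Hypothetical sketch of tapered forward pruning: a move that
 * superficially loses material (negative SEE score) is discarded
 * only in the last few plies.  Near the root it is kept, so a
 * deeper iteration can still discover a sound sacrifice. */
#define PRUNE_HORIZON 3   /* made-up threshold */

bool should_prune_losing_capture(int see_score, int depth_remaining)
{
    if (depth_remaining > PRUNE_HORIZON)
        return false;             /* too far from the leaves: keep it */
    return see_score < 0;         /* superficially losing: toss it */
}
```

Because the rule depends on remaining depth, a move pruned at iteration N sits deeper inside the tree at iteration N+1 and gets searched normally - which is exactly the scalability property being argued for.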

jwes
Posts: 778
Joined: Sat Jul 01, 2006 5:11 am

### Re: Botvinnik Markov revisited

bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?

Dann Corbit
Posts: 10207
Joined: Wed Mar 08, 2006 7:57 pm
Location: Redmond, WA USA
Contact:

### Re: Botvinnik Markov revisited

jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
An anti-killer

bob
Posts: 20642
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

### Re: Botvinnik Markov revisited

jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
Any case where you decide at point P in the tree that you are not going to search that move at all is "forward pruning" because you are pruning on your way down the tree, not on the way back up. There are lots of ways to do this. You can use static evaluation to toss moves out (very risky, particularly as you get closer to the root, because once a move is thrown out, you will never know anything about it since it is not searched.) You can use a shallow search and keep the best N to search and throw the rest out. You can use other ideas such as history counters or whatever. But whatever you do, if you toss 'em out on the way down, you take a big risk. Tossing them out on the way back is less risky as that is what alpha/beta does.
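One of the schemes Bob lists - score every move cheaply, keep the best N, toss the rest - can be sketched as below. All names are illustrative; in a real engine the estimate would come from a shallow search or static evaluation.

```c
#include <stdlib.h>

/* Toy Shannon type-B selection: sort moves by a cheap estimate and
 * keep only the best keep_n; the rest are forward-pruned. */
typedef struct {
    int move;       /* encoded move, opaque here */
    int estimate;   /* score from a shallow search or static eval */
} ScoredMove;

/* Descending order; plain subtraction is safe for the small
 * centipawn-sized scores assumed here. */
static int by_estimate_desc(const void *a, const void *b)
{
    return ((const ScoredMove *)b)->estimate -
           ((const ScoredMove *)a)->estimate;
}

/* Sorts moves best-first and returns how many survive the cut. */
int forward_prune(ScoredMove *moves, int count, int keep_n)
{
    qsort(moves, (size_t)count, sizeof *moves, by_estimate_desc);
    return count < keep_n ? count : keep_n;
}
```

The risk Bob describes is visible in the code: anything past index `keep_n - 1` is simply never visited, so an error in the cheap estimate can never be corrected by the full search.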

jwes
Posts: 778
Joined: Sat Jul 01, 2006 5:11 am

### Re: Botvinnik Markov revisited

Dann Corbit wrote:
jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
An anti-killer
I assume you mean a list of moves to avoid like killer moves. This is an interesting idea but not my idea. I was thinking of storing a flag in the TT that says don't search this position.
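jwes's flag could live in a transposition-table entry. The layout below is purely hypothetical - real TT entries also pack depth, bound type, and best move - but it shows the "never search this again" mark he describes.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical TT entry carrying a "do not search this again" flag. */
typedef struct {
    uint64_t key;          /* Zobrist hash of the position */
    bool     never_search; /* set once the move is judged hopeless */
} TTEntry;

#define TT_SIZE 4096       /* tiny table, illustration only */
static TTEntry table[TT_SIZE];

void tt_block(uint64_t key)          /* mark a position as "skip" */
{
    TTEntry *e = &table[key % TT_SIZE];
    e->key = key;
    e->never_search = true;
}

bool tt_is_blocked(uint64_t key)     /* query before searching */
{
    const TTEntry *e = &table[key % TT_SIZE];
    return e->key == key && e->never_search;
}
```

Note that a real table must decide what happens when the entry is overwritten by a collision - at which point the "permanent" mark silently disappears, one more practical weakness of the idea.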

Dann Corbit
Posts: 10207
Joined: Wed Mar 08, 2006 7:57 pm
Location: Redmond, WA USA
Contact:

### Re: Botvinnik Markov revisited

jwes wrote:
Dann Corbit wrote:
jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
An anti-killer
I assume you mean a list of moves to avoid like killer moves. This is an interesting idea but not my idea. I was thinking of storing a flag in the TT that says don't search this position.
Instead of simply not searching the move, I would search it with a reduction that is a function of the choices available. For instance, if you only have two legal moves, I would not reduce at all but search both to full depth - unless one leads to a clearly lost position, in which case I will play the other move immediately.

On the other hand, as we have more and more choices available, we can prune more and more aggressively. For instance, if a given move loses a queen immediately, and we have 70 possible move choices, I think we can aggressively prune that one because we have lots of alternatives and probably some of them are good.

I think a blanket "don't search this position" rule is too dangerous, especially if we have very few move choices or we are in zugzwang.
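Dann's proposal - reduce more aggressively as the number of legal alternatives grows - might look like the sketch below. The thresholds are invented for illustration only.

```c
#include <stdbool.h>

/* Hypothetical reduction schedule keyed to mobility: with only two
 * legal moves everything is searched to full depth; with dozens of
 * alternatives a bad-looking move is reduced heavily rather than
 * pruned outright, so it can still be found on a deeper iteration. */
int reduction_for(int legal_moves, bool move_looks_bad)
{
    if (!move_looks_bad || legal_moves <= 2)
        return 0;                 /* few choices: no reduction */
    if (legal_moves >= 40)
        return 3;                 /* many alternatives: reduce hard */
    return 1;                     /* moderate mobility: mild reduction */
}
```

Returning a reduction instead of a prune/keep boolean is what guards against the zugzwang case Dann mentions: even the worst-looking move is still searched, just later and shallower.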

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

### Re: Botvinnik Markov revisited

jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
That's the kind of pruning I am talking about, a move that is permanently marked as "never play". But it's STILL called forward pruning.

I would also call it a bad idea. Do any programs do this?

You can never be sure that a move that looks really bad isn't actually a very good move, and a scalable program should never say never, although it's correct to give such moves much less consideration. On a future iteration it should be possible to discover anything.

If the score is perfect, of course, this does not apply, because we need not search moves whose scores are already exact - such as moves that reference an endgame database, or a simple ending that can be marked as a draw.

Uri Blass
Posts: 8611
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

### Re: Botvinnik Markov revisited

Don wrote:
jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
That's the kind of pruning I am talking about, a move that is permanently marked as "never play". But it's STILL called forward pruning.

I would also call it a bad idea. Do any programs do this?

You can never be sure that a move that looks really bad isn't actually a very good move, and a scalable program should never say never, although it's correct to give such moves much less consideration. On a future iteration it should be possible to discover anything.

If the score is perfect, of course this does not apply because we need not search perfect moves, such as moves that reference an endgame database or simple ending that can be marked as a draw.
Yes

Rybka prunes bishop underpromotions, and the idea is not bad for games (it may be bad for solving some studies).

Some other programs even prune all underpromotions.

I think that it is possible to do productive hard pruning based on evaluation.

There are many positions that I can evaluate with 100% confidence without search (some of them as a win for one side and some as at least a draw for one side), and it may be possible to teach a chess program to make the same evaluations.

These evaluations can be used for pruning because, as long as the computer evaluates Black as better, there is no point in searching lines where you are 100% sure that White does not lose.

I can give you some positions:

1)win for white
[D]4k3/p7/8/8/P3P3/1P1P4/2P5/R5K1 w - - 0 1

2)win or draw for white

[D]5rk1/5p1p/8/8/8/8/5PPP/R5K1 w - - 0 1

3)draw
[D]5bk1/5ppp/8/8/8/8/5PPP/5BK1 w - - 0 1

4)win or draw for white and not based on material(white has a perpetual check if he wants a draw)

[D]6k1/qb4p1/6Q1/8/8/8/5NPP/6K1 w - - 0 1

Uri
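Uri's suggestion can be phrased in alpha/beta terms: if some recognizer proves with certainty that the side to move scores at least a draw, the search can cut off whenever that proven bound already meets beta. `proven_at_least_draw` below is a hypothetical stand-in for such pattern knowledge; no real engine's recognizer is being described.

```c
#include <stdbool.h>

#define DRAW_SCORE 0   /* draws scored as zero, the usual convention */

/* If pattern knowledge proves the score is >= DRAW_SCORE with 100%
 * confidence, and DRAW_SCORE already reaches beta, the whole subtree
 * can be cut without being searched at all. */
bool proof_cutoff(bool proven_at_least_draw, int beta)
{
    return proven_at_least_draw && DRAW_SCORE >= beta;
}
```

The cut is only sound when the proof really is exact - which is precisely the point Don challenges in his reply.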

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

### Re: Botvinnik Markov revisited

Uri Blass wrote:
Don wrote:
jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
That's the kind of pruning I am talking about, a move that is permanently marked as "never play". But it's STILL called forward pruning.

I would also call it a bad idea. Do any programs do this?

You can never be sure that a move that looks really bad isn't actually a very good move, and a scalable program should never say never, although it's correct to give such moves much less consideration. On a future iteration it should be possible to discover anything.

If the score is perfect, of course this does not apply because we need not search perfect moves, such as moves that reference an endgame database or simple ending that can be marked as a draw.
Yes

Rybka prunes bishop underpromotions, and the idea is not bad for games (it may be bad for solving some studies).
I think that any "hard" pruning where you don't know the evaluation for sure is technically wrong; it's just something that cannot be shown to be wrong in any practical sense. Basically, when you reduce a move you are saying that it is only worthy of a tiny fraction of your computational resources, and it's certainly the case that under-promotions deserve some consideration - even if the amount of consideration they deserve is ridiculously low. By not including them at all, we are essentially saying that they are always bad moves, and that simply isn't true. So by having this rule you have a program that just cannot solve certain positions EVER, no matter what kind of massive hardware it's running on.

What we can say is that something like a bishop under-promotion is so unlikely to be the best move that it deserves only a fraction of a percent of consideration - which might mean it only pays off to test for it when the depth is really large, using a really shallow search to see if it's feasible.

Of course all of this is theoretical; as I say, for all practical purposes we can ignore them and never notice the difference in Elo rating - but I argue that they probably should be considered with a highly reduced-depth search when the level is deep enough. You must have that to have a program with perfect scaling properties.

Some other programs even prune all underpromotions.
Komodo never looks at rook or bishop under-promotions, and only looks at knight under-promotions when they give check. But even in the knight case there are positions where my rule won't find the right move.
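The rule Don describes can be written as a small filter in the promotion generator. This is a sketch of the stated policy, not Komodo's source; `gives_check` stands in for the engine's real check-detection.

```c
#include <stdbool.h>

typedef enum { PROMOTE_QUEEN, PROMOTE_ROOK,
               PROMOTE_BISHOP, PROMOTE_KNIGHT } Promotion;

/* Keep queen promotions always; keep a knight under-promotion only
 * when it gives check; never generate rook or bishop under-promotions. */
bool keep_promotion(Promotion p, bool gives_check)
{
    if (p == PROMOTE_QUEEN)  return true;
    if (p == PROMOTE_KNIGHT) return gives_check;
    return false;
}
```

As Don concedes, even this filter is blind to positions where a quiet knight under-promotion (or a rook/bishop one, needed to avoid stalemate) is the only winning move.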

I think that it is possible to make productive hard pruning based on evaluation.

There are many positions that I can evaluate with 100% confidence without search(part of them as win for one side and part of them as at least a draw for one side) and it may be possible to teach some chess program to make the same evaluations.
Some simple endings can be evaluated as draws immediately. And I think it might be possible to construct some proofs about some other positions. But there are very few positions like this that can be correctly evaluated in such an absolute way.

Another class of positions are ones that CAN be evaluated correctly with a small search - namely the ones that end in forced mates or stalemates or endgame database positions.

But I'm trying to leave all these out of the discussion - it goes without saying that any position that has the answer stored in a database does not need to be searched. And this would apply recursively to any position that can be searched to a database position or to a checkmate or stalemate.

You can sometimes detect checkmate without a search, but that is semantics - you are still essentially constructing a proof which is a kind of search in itself.

These evaluations can be used for pruning because, as long as the computer evaluates Black as better, there is no point in searching lines where you are 100% sure that White does not lose.
I guess I am going to argue that unless you can actually prove it's a win, then it should not be hard pruned. I'm not sure you can prove all of those positions are wins, and if you cannot prove it then your confidence may be ridiculously close to 100% but it's not exactly 100%.

As a thought experiment: if you had never seen the first position and your very life depended on getting the right answer, would you take an extra second or two before answering? If yes, then you are not really 100% certain; you just think you are.

And it's much worse for computers. The simple answer is that if you have a way for the computer to recognize 99.9999% certainty about a position, then reduce those positions by many ply. Then you have a search that is 100% correct, in the sense that every position can eventually be solved given enough time. If you do not consider bishop under-promotions, there are some positions your program can never solve regardless of time or hardware.

Of course I know that, in a practical sense, if you can get 99.999999% certainty you can just throw the move out, and nobody will ever notice that you made the program weaker by some fraction of an Elo point.

I can give you some positions:

1)win for white
[D]4k3/p7/8/8/P3P3/1P1P4/2P5/R5K1 w - - 0 1

2)win or draw for white

[D]5rk1/5p1p/8/8/8/8/5PPP/R5K1 w - - 0 1

3)draw
[D]5bk1/5ppp/8/8/8/8/5PPP/5BK1 w - - 0 1

4)win or draw for white and not based on material(white has a perpetual check if he wants a draw)

[D]6k1/qb4p1/6Q1/8/8/8/5NPP/6K1 w - - 0 1

Uri

Don
Posts: 5106
Joined: Tue Apr 29, 2008 2:27 pm

### Re: Botvinnik Markov revisited

Uri,

One other issue here is that you are rarely comparing 2 positions which are so clear cut unless you are searching a dead lost (or dead won) position.

Chess programs basically only compare positions that are very similar. In fact, that is probably why they are as good as they are - it's not easy to accurately evaluate one position, but it's a lot easier to compare two very similar positions and say which is better, and that is what chess programs mostly do.

When chess programs blunder it's because they did not properly compare the relative values of positions.

So being able to properly evaluate a set of extreme positions as wins or losses is almost worthless. If you could do this with positions that look relatively even and ordinary - then you would have something.

Don

Uri Blass wrote:
Don wrote:
jwes wrote:
bob wrote: There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves, and then say "these will be searched and these are going to be tossed out." IE the shannon type-B strategy.
What would you call pruning where you search some move sufficiently to decide it is a stupid move rather than a sacrifice and then mark it so that on future iterations you do not search it again?
That's the kind of pruning I am talking about, a move that is permanently marked as "never play". But it's STILL called forward pruning.

I would also call it a bad idea. Do any programs do this?

You can never be sure that a move that looks really bad isn't actually a very good move, and a scalable program should never say never, although it's correct to give such moves much less consideration. On a future iteration it should be possible to discover anything.

If the score is perfect, of course this does not apply because we need not search perfect moves, such as moves that reference an endgame database or simple ending that can be marked as a draw.
Yes

Rybka prune bishop underpromotions and
the idea is not bad for games(it may be bad for solving some studies).

Some other programs even prune all underpromotions.

I think that it is possible to make productive hard pruning based on evaluation.

There are many positions that I can evaluate with 100% confidence without search(part of them as win for one side and part of them as at least a draw for one side) and it may be possible to teach some chess program to make the same evaluations.

These evaluation can be used for pruning because as long as the computer evaluate black as better there is no point in searching lines when you are 100% sure that white does not lose.

I can give you some positions:

1)win for white
[D]4k3/p7/8/8/P3P3/1P1P4/2P5/R5K1 w - - 0 1

2)win or draw for white

[D]5rk1/5p1p/8/8/8/8/5PPP/R5K1 w - - 0 1

3)draw
[D]5bk1/5ppp/8/8/8/8/5PPP/5BK1 w - - 0 1

4)win or draw for white and not based on material(white has a perpetual check if he wants a draw)

[D]6k1/qb4p1/6Q1/8/8/8/5NPP/6K1 w - - 0 1

Uri