Botvinnik Markov revisited

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Edmund
Posts: 670
Joined: Mon Dec 03, 2007 3:01 pm
Location: Barcelona, Spain

Re: Botvinnik Markov revisited

Post by Edmund »

Dann Corbit wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:What is the state of the art on Botvinnik Markov extensions?

I'm experimenting with them a bit now and I researched the idea on the chessprogramming wiki and other places and saw a lot of enthusiasm for the idea, but I'm not aware that anyone is currently using it.

I don't remember seeing it in Stockfish for example, but some of the old posts indicated that Tord was getting some good results based on a test match and he seemed fairly enthusiastic about it.

So what's the deal?
Smarthink puts a lot of energy into extensions and it works well for him.
Extensions in general, or this specific extension?

I'm running a test which indicates that it is worth a few elo.
I have seen the code and so I think it is not proper for me to answer with any details because it is private code. Perhaps S.Markov will chime in eventually.
I'm not asking for details. Are you saying that he put a lot of energy into this specific extension or just extensions in general?
Let me put it this way:
No program I have ever seen puts nearly so much effort into extensions (e.g. do you know any programs that have seven distinct categories for extensions?).
If we are to interpret the Botvinnik-Markov extension as using null move to detect threats and backing up two plies on the move stack, then he definitely does that.
http://www.stmintz.com/ccc/index.php?id=318839

Code: Select all

if (SameTarget(ThreatMove[Ply], ThreatMove[Ply - 2])) extend_flag = true;
Of course, there is a lot more to it than that.

For specifics about effectiveness or implementation details, I think you will need to get S.Markov to comment.
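The one-line test above can be fleshed out into a minimal, self-contained sketch of the idea. Since the real code is private, everything here is an assumption about how a surrounding engine might encode moves and record null-move threats; the names `Move`, `MOVE_TO`, and `botvinnik_markov_extend` are illustrative, not taken from any actual program:

```c
#include <stdbool.h>

/* Hypothetical move encoding: low 6 bits = from-square, next 6 = to-square. */
typedef unsigned int Move;

#define MOVE_TO(m) (((m) >> 6) & 63)

/* Two null-move refutations aimed at the same square, two plies apart,
   suggest a persistent threat the side to move has failed to parry. */
static bool SameTarget(Move a, Move b)
{
    return a != 0 && b != 0 && MOVE_TO(a) == MOVE_TO(b);
}

/* ThreatMove[ply] holds the refutation found by the null-move search at
   that ply (0 if none).  Returns true if this node should be extended. */
bool botvinnik_markov_extend(const Move *ThreatMove, int ply)
{
    return ply >= 2 && SameTarget(ThreatMove[ply], ThreatMove[ply - 2]);
}
```

The interesting part in a real engine is everything around this test: when the null-move search is allowed, how the refutation is recorded, and how this extension interacts with the others; as noted above, there is a lot more to it than the comparison itself.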
I'm a firm believer that there is more than one way to skin a cat.
As far as that goes -- some position x has 9 legal moves:
Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite methods.

Poor kitty.
This logic only holds for root moves, or if it is applied consistently throughout the search, not just at "some position".
User avatar
Steve Maughan
Posts: 1221
Joined: Wed Mar 08, 2006 8:28 pm
Location: Florida, USA

Re: Botvinnik Markov revisited

Post by Steve Maughan »

Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However, the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it was for a seemingly simple extension to explode the search.

In contrast, if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Botvinnik Markov revisited

Post by bob »

Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
Why do you say it can't explode? If you search one branch to X+1 and the rest to X, whether you extend the X+1 branch or reduce all the others to X, you can get into exactly the same problem, where X+1 takes forever. One can trigger a flurry of "don't reduce this" just as easily as one can trigger a flurry of "extend this" type circumstances.
User avatar
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Botvinnik Markov revisited

Post by Don »

Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent; the primary issue is how to build a tree that is highly variable in the length of its subtrees, which is what makes a program play incredibly strong. How this is done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
User avatar
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Botvinnik Markov revisited

Post by Don »

bob wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
Why do you say it can't explode? If you search one branch to X+1 and the rest to X, whether you extend the X+1 branch or reduce all the others to X, you can get into exactly the same problem, where X+1 takes forever. One can just as easily trigger a flurry of "don't reduce this" just as easily as one can trigger a flurry of "extend this" type circumstances..
Bob is exactly right, but it's not quite so simple either ...

At first it might appear that giving a full ply extension is exactly the same as giving every move but one a full ply reduction, but that is not the case because of iterative deepening - with extensions you iterate by more than 1 ply. With reductions, you iterate by EXACTLY one ply.

To make this clear, a program that does extensions can discover things on the next iteration that require more than 1 additional ply to see, but a program that (only) reduces can never do this. So whether you extend or reduce affects the actual meaning of what it is to do an additional search iteration.

So in that sense they are not equivalent, and reductions are much more easily controlled. Every good program seems to do a mixture of both.

However, I agree with Bob, it's certainly possible to do reductions wrong in the sense that you have "blown up" the search in a theoretical sense. The behavior you would see is a program that iterated very quickly, but required enormous depths to see relatively simple tactics. This kind of search is essentially "blowing up" although we don't normally define it that way.
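The asymmetry shows up in the child-depth arithmetic of a plain negamax search. A hedged sketch (the surrounding search machinery is assumed): with an extension, a child can be searched as deep as or deeper than its parent's nominal depth, so a chain of extensions can push one line arbitrarily far beyond the iteration depth; a pure reduction can only shorten lines, which is why it iterates by exactly one ply and stays easier to control.

```c
/* Depth for a child node searched from a node at `depth`, given the
   extension and reduction (in plies) chosen for the move leading to it. */
int child_depth(int depth, int extension, int reduction)
{
    int d = depth - 1 + extension - reduction;
    return d > 0 ? d : 0;   /* clamp: depth 0 drops into quiescence */
}
```

With `extension = 1` the child is searched to the parent's full depth (`d == depth`), which is how extended lines outrun the nominal iteration; with `extension = 0` the child depth never exceeds `depth - 1`.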
Dann Corbit
Posts: 12541
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Botvinnik Markov revisited

Post by Dann Corbit »

Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent, the primary issue is how to build a tree that is highly variable in the length of it's subtrees which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line, thinking about the possibilities [we extend]. Or we see some horrible event and stop searching a line because of devastating counterplay [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
User avatar
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Botvinnik Markov revisited

Post by Don »

Dann Corbit wrote:
Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent, the primary issue is how to build a tree that is highly variable in the length of it's subtrees which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
Agreed.

I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.

The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.

So when you reduce, you are really just applying a plausible move test to the moves, then throwing them out. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
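A reduction schedule really is a "plausible move test" in disguise. Here is a hedged sketch of a late-move-reduction style rule; the thresholds and constants are purely illustrative, not any particular engine's:

```c
#include <stdbool.h>

/* Reduce late, quiet moves; the first few moves and all tactical moves
   get full depth.  A reduced move that surprises us (beats alpha) would
   be re-searched at full depth by the caller, and every move is
   reconsidered at the next, deeper iteration - unlike the old "hard"
   pruning, nothing is discarded forever. */
int lmr_reduction(int depth, int move_number, bool is_quiet)
{
    if (!is_quiet || depth < 3 || move_number < 4)
        return 0;                          /* search at full depth */
    int r = 1;
    if (move_number >= 8)  r++;            /* later moves: trust ordering more */
    if (move_number >= 16) r++;
    if (depth >= 12)       r++;            /* deep nodes can afford larger cuts */
    return r < depth - 1 ? r : depth - 1;  /* don't reduce past the leaves */
}
```

The scalable part is exactly the point made above: the same move that gets reduced away near the horizon this iteration is re-examined, one ply less reduced in effect, on every subsequent iteration.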
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Botvinnik Markov revisited

Post by bob »

Don wrote:
Dann Corbit wrote:
Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent, the primary issue is how to build a tree that is highly variable in the length of it's subtrees which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
Agreed.

I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.

The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.

So when you reduce, you are really just applying a plausible move test to the moves, then throwing them out. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
Some of us are doing what you call "hard pruning" (better known as forward-pruning) today as well. :)
User avatar
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Botvinnik Markov revisited

Post by Don »

bob wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent, the primary issue is how to build a tree that is highly variable in the length of it's subtrees which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
Agreed.

I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.

The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.

So when you reduce, you are really just applying a plausible move test to the moves, then throwing them out. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
Some of us are doing what you call "hard pruning" (better known as forward-pruning) today as well. :)
I'm not talking about forward pruning as such; every program has that.

But some early programs had fixed rules about which moves to throw out, and that was absolute. For example I could construct a program that used see() on all nodes and just threw out every move that superficially looked losing - but my program would never be able to sacrifice a piece legitimately. So typically this kind of thing is done near end nodes or just in quies.

Modern programs still do this (moves permanently removed from consideration) but only near leaf nodes. For example Komodo does not consider non-capture moves in late quies, but those same moves that are pruned now will not be pruned when the search is deeper.

Today every program has forward pruning, but that is not the same as having fixed rules that are applied at any depth, which is what I am talking about. Such a program is not scalable.

For example you might have a rule to never let a queen capture a pawn that is defended by an unpinned pawn - and the rule would work most of the time. Early programs had such rules and they were blindly applied. It's probably a good rule if your program is never going to do more than 2 full-width plies, but it is not good for a program that plays at 3000 strength if it's applied at ANY depth. It would make your program blind to any Q-takes-defended-pawn sacrifice.

The solution of course is progressive pruning. Komodo throws out such moves if they are near leaf nodes, but not near the root.
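The "progressive" part fits in a few lines. In this hedged sketch the static test is a static-exchange-evaluation score, passed in so the snippet stays self-contained; the horizon constant is illustrative, not Komodo's actual rule:

```c
#include <stdbool.h>

#define PRUNE_HORIZON 3   /* illustrative: only hard-prune this close to the leaves */

/* see_score: result of a static exchange evaluation for the move,
   negative if the exchange superficially loses material. */
bool prune_losing_move(int see_score, int depth_remaining)
{
    if (depth_remaining > PRUNE_HORIZON)
        return false;        /* near the root: never discard outright */
    return see_score < 0;    /* near the leaves: drop apparently losing moves */
}
```

Because the remaining depth at any given position grows as the iterations deepen, a sacrifice rejected at one iteration sits above the horizon a few iterations later and gets searched after all, which is exactly the scalability the old fixed rules lacked.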
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Botvinnik Markov revisited

Post by bob »

Don wrote:
bob wrote:
Don wrote:
Dann Corbit wrote:
Don wrote:
Steve Maughan wrote:Hi Dann,
Dann Corbit wrote:As far as that goes -- some position x has 9 legal moves:

Joe examines the 9 possible moves and decides to extend move 4 by one ply.

Fred examines the 9 possible moves and decides to decrease all of them by one ply except move 4.

Essentially, they did exactly the same thing via opposite method.
I think this is helpful. However the problem with extensions is the risk of explosion. In the early days of Monarch I looked at all sorts of extensions. The thing that amazed me was how easy it is for a seemingly simple extension to explode the search.

In contrast if you focus on reductions there is no chance of a search explosion. This is the #1 reason I think reductions are better (and it's what I'm working on).

Cheers,

Steve
In some sense they are equivalent, the primary issue is how to build a tree that is highly variable in the length of it's subtrees which makes the program play incredibly strong. How this should be done is not relevant if it gets the job done.

An extension of one move can be thought of as a reduction in all the other moves. Of course the trick is making it all somehow work together.
The goal is just like a human playing chess. After all, what do we do?

We see something with real promise and then we follow along that line thinking about the possibilities [we extend]. Or, we see some horrible event and stop searching a line because of devastating counter play [we reduce]. So extensions and reductions are (in my view) an excellent model of real-world human chess.
Agreed.

I view reductions as a kind of pruning. Of course when you reduce you prune a lot of end nodes, but I like to think of it as pruning the actual move that is reduced (and the whole subtree below it), even though that is not technically true.

The really early chess programs did "hard" pruning. If a move was deemed unworthy it was forever lost and it was not resurrected on the next iteration. But programs still had to do extra work to determine which moves to prune and they typically created a plausible move list.

So when you reduce, you are really just applying a plausible move test to the moves, then throwing them out. The primary difference is that this test is improved on subsequent iterations - so it's scalable in a recursive way.
Some of us are doing what you call "hard pruning" (better known as forward-pruning) today as well. :)
I'm not talking about forward pruning as such; every program has that.

But some early programs had fixed rules about which moves to throw out, and that was absolute. For example I could construct a program that used see() on all nodes and just threw out every move that superficially looked losing - but my program would never be able to sacrifice a piece legitimately. So typically this kind of thing is done near end nodes or just in quies.

Modern programs still do this (moves permanently removed from consideration) but only near leaf nodes. For example Komodo does not consider non-capture moves in late quies, but those same moves that are pruned now will not be pruned when the search is deeper.

Today every program has forward pruning, but that is not the same as having fixed rules that are applied at any depth, which is what I am talking about. Such a program is not scalable.

For example you might have a rule to never let a queen capture a pawn that is defended by an unpinned pawn - and the rule would work most of the time. Early programs had such rules and they were blindly applied. It's probably a good rule if your program is never going to do more than 2 full-width plies, but it is not good for a program that plays at 3000 strength if it's applied at ANY depth. It would make your program blind to any Q-takes-defended-pawn sacrifice.

The solution of course is progressive pruning. Komodo throws out such moves if they are near leaf nodes, but not near the root.
There are only two types of pruning in the alpha/beta framework. The first is called "backward pruning" and is the basic alpha/beta pruning mechanism. The second is forward pruning, where you generate a set of moves and then say "these will be searched and these are going to be tossed out", i.e. the Shannon Type-B strategy. I currently do backward pruning (obviously) since that is alpha/beta; I also do forward pruning where I physically throw out some moves and never search them at all, not even to reduced depth. And then I do the reductions and extensions stuff as well.

But I believe you are talking about classic forward pruning, which means you prune moves _before_ they are searched, where backward pruning uses results backed up the tree to prune (aka alpha/beta).

For forward pruning, it really doesn't matter whether you throw 'em out near the root or near the tips. Early programs had a tapered search that kept more near the root and pruned more near the tips. In Crafty I believe I am doing the forward pruning stuff in the last 4 plies, but would have to look to be certain, I wrote this code last year, tested it, and moved on to other things. I'm not certain about the details, just that it is there.
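One common shape for that last-few-plies forward pruning is a futility test against the static evaluation. This is a hedged sketch with made-up margins, not Crafty's actual rule or values:

```c
#include <stdbool.h>

/* Skip a quiet move in the last few plies when even an optimistic
   margin cannot lift the static eval above alpha.  Margins are in
   centipawns, indexed by remaining depth; purely illustrative. */
bool futility_prune(int depth_remaining, int static_eval, int alpha)
{
    static const int margin[5] = {0, 100, 200, 300, 500};
    if (depth_remaining < 1 || depth_remaining > 4)
        return false;                     /* outside the pruning window */
    return static_eval + margin[depth_remaining] <= alpha;
}
```

The widening margins are the modern echo of the old tapered search: the further the node is from the tips, the more evidence the rule demands before a move is thrown out.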