"panic time" and "easy moves"

Discussion of chess software programming and technical issues.

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

"panic time" and "easy moves"

Post by bob »

I have been working on this problem for quite a while. I'm still working on it, in fact, but I thought I would give a bit of the reasoning behind what I am trying to do, to see if there are any other ideas floating around that might be relevant.

(1) easy moves. Pretty obvious what this means: a move so obvious that one can safely spend less than the usual target search time on it, without fear of overlooking something.

(2) panic time. I generally call this a "fail low time extension". The idea came to me at a 1970s ACM event where everyone established a fixed time limit before the search started, and then stopped when that limit was reached (we still did "easy moves" back then, however). In one game, Chess 4.x was playing someone, and they had printed their 7-ply move choice and started the 8-ply search. David Levy was the TD and pointed out that their 7-ply move had a major flaw they could not see with 7 plies, but could with 8. Slate and Atkin were bouncing around in their chairs hoping the program would find a better move before the time limit expired. Back then, everyone just printed the best move at the end of the iteration, rather than what we all do today, printing a move whenever the best move gets replaced.

That set me to thinking, and between that round and the next I added two things to my program (called Blitz at the time, prior to Cray's involvement). The first was to display any change to the best move, so that I would know, at any point in time, what would be played if the search terminated then. The second was that if a search failed low, I extended the time limit 2x to see if I could find a better move. This idea was later refined and published in the ICCA Journal.
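In today's terms, that second change amounts to something like this (a minimal sketch; the names and the fixed 2x factor are illustrative):

Code:
/* Called periodically from the search.  If the current iteration
   has failed low at the root, allow up to twice the nominal
   per-move budget before stopping.  All names are illustrative. */
int out_of_time(int elapsed_ms, int nominal_ms, int root_failed_low) {
    int limit_ms = root_failed_low ? 2 * nominal_ms : nominal_ms;
    return elapsed_ms >= limit_ms;
}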

In 1994, at the last ACM event, several of us were sitting around a table talking about various topics, and Hsu mentioned something that caught my attention, namely that they could enter what he called "panic time" without a fail low. He originally explained it something like this: "when the size of the tree cannot be expressed canonically as a function that matches the typical tree growth pattern, we extend before the fail low happens, because something is making the tree harder to analyze than usual, which means something unusual is going on that might be important."

I asked him about it, but someone else started asking the two of us about parallel search, we got off onto that, and never returned to the subject. But I never forgot about it.

Several months ago I decided I did not particularly like our ultra-conservative "easy move" approach (the move must be a recapture of the last piece moved by the opponent, the material balance must not change, etc.). I didn't like it because in one game I watched, our queen was attacked and had only one square it could retreat to. The move was an "obvious" one to me, but it did not meet our "easy move" criteria, so the usual amount of time was used (and wasted).

So, what am I playing with?

I am currently experimenting with the node counts for the ply=1 move list (the same counts I use to order the moves after each iteration completes). Some tests showed that the "easy moves" had a pretty recognizable feature, namely that the number of positions searched under the first (easy) move was WAY larger than the sum for the rest of the moves, typically 90% of the total or better. So that's the root of the idea.
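As a sketch of the test (the per-move counts come from the previous iteration; the 90% threshold is the tunable part):

Code:
/* Return 1 if the first root move looks "easy": its subtree
   accounts for at least 90% of all nodes searched at the root.
   root_nodes[] holds per-move node counts from the previous
   iteration, with root_nodes[0] belonging to the best move. */
int easy_move(const unsigned long long *root_nodes, int nmoves) {
    unsigned long long total = 0;
    for (int i = 0; i < nmoves; i++)
        total += root_nodes[i];
    return total > 0 && root_nodes[0] * 10 >= total * 9;
}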

But "hard moves" seem like a logical extension. Where the size of the tree for the first move is a small fraction of the total nodes searched. This happens when a program repeatedly changes its mind and finds multiple "best moves" in a single iteration, or where it alternates between 2-3 moves as the iterations are done.

I have found lots of issues I had to work around, as my node counts were not always right. For example, what if you start iteration 25 and ALMOST finish searching the first move before time runs out? I stop the search, make the first move, and, since I completed iteration 24, I restart the search as if I have already completed 22 iterations (removing my move and the opponent's), which means the first iteration tried will be 23. And the blasted thing gets an instant group of hash hits that distorts the size of the first move's tree. I am working on ideas that improve this, but have not yet reached something I consider "done".

But what I want to end up with is what I am calling, at the moment, a "complexity factor". A CF of zero (0) might be a position that is completely forced, or close to it. A CF of 100 might be a position where there are lots of nearly equal moves that change each iteration.
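One way to turn the same root node counts into such a number (a pure sketch; the mapping from node share to 0-100 is exactly the part that needs tuning):

Code:
/* Map the first move's share of the root tree to a 0..100 scale:
   a share near 1.0 gives CF near 0 (forced), a share near 1/nmoves
   gives CF near 100 (nothing stands out).  Illustrative only. */
int complexity_factor(const unsigned long long *root_nodes, int nmoves) {
    unsigned long long total = 0;
    for (int i = 0; i < nmoves; i++)
        total += root_nodes[i];
    if (total == 0 || nmoves < 2)
        return 0;
    double share = (double)root_nodes[0] / (double)total;
    double floor_share = 1.0 / (double)nmoves;
    double cf = (1.0 - share) / (1.0 - floor_share) * 100.0;
    if (cf < 0.0) cf = 0.0;
    if (cf > 100.0) cf = 100.0;
    return (int)cf;
}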

I'd like to eventually use this to trigger "easy moves" and "hard moves", where I use less time on easy moves and more time on hard moves, to try to find a better move before the program reaches the point of a "fail low", where it is often too late to salvage the position.
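The time allocation would then scale with the CF, something like this (again a sketch; the 0.25 and 2.0 endpoints are invented tuning constants):

Code:
/* Scale the nominal per-move budget by the complexity factor:
   CF = 0 uses a quarter of the budget, CF = 100 allows up to 2x.
   The 0.25 and 2.0 endpoints are invented tuning constants. */
int target_time_ms(int nominal_ms, int cf) {
    double scale = 0.25 + (2.0 - 0.25) * (double)cf / 100.0;
    return (int)((double)nominal_ms * scale);
}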

Anyone done any experiments in this area? When I get something that works, it'll be public of course, so I don't expect commercial guys to make suggestions here.

But it seems obvious that this ought to be workable, in some way...

BTW, later conversations with Hsu never revisited this, as I never thought of it when the opportunity came up.
ZirconiumX
Posts: 1334
Joined: Sun Jul 17, 2011 11:14 am

Re: "panic time" and "easy moves"

Post by ZirconiumX »

bob wrote:But what I want to end up with is what I am calling, at the moment, a "complexity factor". A CF of zero (0) might be a position that is completely forced, or close to it. A CF of 100 might be a position where there are lots of nearly equal moves that change each iteration.
I may be being stupid - but what if you compared the node counts of the best and second-best moves in the move list?

If you have a very high difference, chances are the first move is an easy move.

If there is only a small difference (both moves are good), then compare with the third. If the difference between the best and third-best is also small, then this is a hard position - extend the time.

I'm being arbitrary because tuning will determine the exact values.
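In rough C, with completely invented thresholds:

Code:
/* Compare the best move's node count against the runners-up.
   Returns -1 for "easy" (spend less time), +1 for "hard"
   (extend the time), 0 otherwise.  All ratios are placeholders. */
int classify(unsigned long long n1, unsigned long long n2,
             unsigned long long n3) {
    if (n1 > 8 * n2)                    /* best dwarfs the second best */
        return -1;
    if (n1 < 2 * n2 && n1 < 2 * n3)     /* several near-equal candidates */
        return 1;
    return 0;
}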

Just my two cents.

Matthew:out
Some believe in the almighty dollar.

I believe in the almighty printf statement.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

ZirconiumX wrote:
bob wrote:But what I want to end up with is what I am calling, at the moment, a "complexity factor". A CF of zero (0) might be a position that is completely forced, or close to it. A CF of 100 might be a position where there are lots of nearly equal moves that change each iteration.
I may be being stupid - but what if you compared the node counts of the best and second-best moves in the move list?

If you have a very high difference, chances are the first move is an easy move.

If there is only a small difference (both moves are good), then compare with the third. If the difference between the best and third-best is also small, then this is a hard position - extend the time.

I'm being arbitrary because tuning will determine the exact values.

Just my two cents.

Matthew:out
That was the basic idea I gave, just a more general "compare the first move's node count to the sum of the rest". The only problem at the moment is getting good node counts. The transposition table is wrecking the node counts frequently, particularly when pondering, but even when not.
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: "panic time" and "easy moves"

Post by Evert »

bob wrote:That was the basic idea I gave, just a more general "compare the first move's node count to the sum of the rest". The only problem at the moment is getting good node counts. The transposition table is wrecking the node counts frequently, particularly when pondering, but even when not.
I guess the obvious idea of storing the node count of the subtree below each move (or at least its order of magnitude; I suspect the exact number is not that relevant) in the transposition table, and adding it back to the total node count when the entry causes a cutoff, is a deal-breaker because of the extra size it would take up in the transposition table entry?
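Layout-wise I mean something like the following (a sketch; a real entry packs these fields much more tightly, and all names are made up):

Code:
/* A TT entry extended with a coarse subtree node count.  On a
   hash cutoff, the stored count is added back to the running
   node total so the root statistics stay roughly honest. */
typedef struct {
    unsigned long long key;    /* position signature        */
    unsigned short move;       /* best/refutation move      */
    short score;
    unsigned char depth;       /* draft                     */
    unsigned char flags;       /* bound type                */
    unsigned int nodes_k;      /* subtree nodes / 1024      */
} tt_entry;

unsigned long long nodes_searched;  /* global node counter */

void on_tt_cutoff(const tt_entry *e) {
    nodes_searched += (unsigned long long)e->nodes_k << 10;
}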
Edmund
Posts: 670
Joined: Mon Dec 03, 2007 3:01 pm
Location: Barcelona, Spain

Re: "panic time" and "easy moves"

Post by Edmund »

I researched the idea Bob describes for the Glass chess engine.

I too stumbled over trouble when confronted with TT cutoffs.

Evert's idea is flawed because TT cutoffs often come from a different depth than the current position requires, so the node count in the tables would be useless. (Sure, one could do some estimation for different depths, but I didn't resort to that.)

Glass used cut-node statistics (IIRC total_nodes / cut_on_first_move_searched) as a measure of complexity for root move ordering decisions (since, unlike multi-PV search, there are no returned scores for the other moves). Move ordering gave an advantage; for time management decisions my testing setup was not able to produce anything significantly better than the original.
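The statistic itself is cheap to compute, roughly like this (reconstructed from memory; the names are invented):

Code:
/* Per root move: how many nodes it took per first-move cutoff.
   A high ratio means the subtree was hard to cut, i.e. complex. */
double cut_statistic(unsigned long long total_nodes,
                     unsigned long long first_move_cutoffs) {
    if (first_move_cutoffs == 0)
        return (double)total_nodes;  /* no cutoffs at all: very hard */
    return (double)total_nodes / (double)first_move_cutoffs;
}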

See http://chessprogramming.wikispaces.com/ ... +Heuristic for some theory.
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: "panic time" and "easy moves"

Post by AlvaroBegue »

bob wrote:But what I want to end up with is what I am calling, at the moment, a "complexity factor". A CF of zero (0) might be a position that is completely forced, or close to it. A CF of 100 might be a position where there are lots of nearly equal moves that change each iteration.
Perhaps I don't have anything very practical to contribute here, but I wonder if a bit of abstract thinking might help the discussion.

Imagine you are observing a game between very strong players (say, stronger than you are) and you are trying to predict the next move. Your prediction is not just an expected move, but actually a full probability distribution over the moves available. I propose the entropy (http://en.wikipedia.org/wiki/Entropy_%2 ... _theory%29) of the distribution as a candidate for your "complexity factor".

You can think of entropy as the expected amount of information you will get from the move. This number is 0 if you know for sure what the next move is going to be, and it reaches its maximum of log(n_moves) when all the moves have equal probability. The precise formula is H = -Σ_m p(m) log p(m), i.e. minus the sum over all moves of the probability of the move times its logarithm. If the logarithm is taken in base 2, the result is in bits.
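In code, given a probability distribution over the root moves, the computation is tiny (a sketch):

Code:
#include <math.h>

/* Shannon entropy of a move distribution, in bits.  p[] must be
   non-negative and sum to 1; zero-probability moves contribute 0. */
double entropy_bits(const double *p, int nmoves) {
    double h = 0.0;
    for (int i = 0; i < nmoves; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}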

Another interpretation for entropy is how much space the next move is going to take to describe in a database if you use arithmetic encoding to compress it.

[NOTE: I expect most of you will be aware of what entropy is. I included the definition for the benefit of those who aren't, and not to insult anyone.]

Now of course we still need to come up with some method to derive a probability distribution from the state of the search. But at least that part should be straightforward to train using a database, so it's "just" a machine-learning problem.

What do you think?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

Evert wrote:
bob wrote:That was the basic idea I gave, just a more general "compare the first move's node count to the sum of the rest". The only problem at the moment is getting good node counts. The transposition table is wrecking the node counts frequently, particularly when pondering, but even when not.
I guess the obvious idea of storing the node count of the subtree below each move (or at least its order of magnitude; I suspect the exact number is not that relevant) in the transposition table, and adding it back to the total node count when the entry causes a cutoff, is a deal-breaker because of the extra size it would take up in the transposition table entry?
I've thought about it. But it would add to the size, although I suspect one could make do with just 32 bits and store nodes / 1024 or something similar...
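Something like this (just a sketch):

Code:
/* Pack a subtree node count into 32 bits at 1K granularity,
   saturating instead of wrapping if the count is huge. */
unsigned int pack_nodes(unsigned long long nodes) {
    unsigned long long k = nodes >> 10;          /* nodes / 1024 */
    return k > 0xFFFFFFFFULL ? 0xFFFFFFFFU : (unsigned int)k;
}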
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

AlvaroBegue wrote:
bob wrote:But what I want to end up with is what I am calling, at the moment, a "complexity factor". A CF of zero (0) might be a position that is completely forced, or close to it. A CF of 100 might be a position where there are lots of nearly equal moves that change each iteration.
Perhaps I don't have anything very practical to contribute here, but I wonder if a bit of abstract thinking might help the discussion.

Imagine you are observing a game between very strong players (say, stronger than you are) and you are trying to predict the next move. Your prediction is not just an expected move, but actually a full probability distribution over the moves available. I propose the entropy (http://en.wikipedia.org/wiki/Entropy_%2 ... _theory%29) of the distribution as a candidate for your "complexity factor".

You can think of entropy as the expected amount of information you will get from the move. This number is 0 if you know for sure what the next move is going to be, and it reaches its maximum of log(n_moves) when all the moves have equal probability. The precise formula is H = -Σ_m p(m) log p(m), i.e. minus the sum over all moves of the probability of the move times its logarithm. If the logarithm is taken in base 2, the result is in bits.

Another interpretation for entropy is how much space the next move is going to take to describe in a database if you use arithmetic encoding to compress it.

[NOTE: I expect most of you will be aware of what entropy is. I included the definition for the benefit of those who aren't, and not to insult anyone.]

Now of course we still need to come up with some method to derive a probability distribution from the state of the search. But at least that part should be straightforward to train using a database, so it's "just" a machine-learning problem.

What do you think?
That's sort of what the node count idea is all about. Bad moves generally have very small node counts because they are refuted so easily/quickly. Potentially good moves have larger trees.

I'm still studying the issue. The current problem, however, is that the node counts are not accurate after a predicted move is made, unless the search can go at least as deep as the previous search did; only then does it "outrun" the TT entries, so that they stop producing cutoffs because their draft is not deep enough.

If you have an easy move, you can search a couple of plies deeper than normal; then on the ponder search you get incorrect node counts due to the extra TT hits caused by the deeper previous search.

The idea seems reasonable, but the devil is in the details. Each time I have experimented with this, I have run into various technical issues that prevented any implementation from being better than the simple "easy move" code I used. But it might be that the "hard move" idea will test better; I have not worked on testing that yet. But soon.
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: "panic time" and "easy moves"

Post by lucasart »

bob wrote:Back then, everyone just printed the best move at the end of the iteration, rather than what we all do today, printing a move whenever the best move gets replaced.
What do you mean?
In DiscoCheck, I update the best move when an iteration is finished (including the fail low/high of aspiration windows). How can you update the best move in the middle of an iteration?
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
ZirconiumX
Posts: 1334
Joined: Sun Jul 17, 2011 11:14 am

Re: "panic time" and "easy moves"

Post by ZirconiumX »

lucasart wrote:
bob wrote:Back then, everyone just printed the best move at the end of the iteration, rather than what we all do today, printing a move whenever the best move gets replaced.
What do you mean?
In DiscoCheck, I update the best move when an iteration is finished (including the fail low/high of aspiration windows). How can you update the best move in the middle of an iteration?
He means an entire iteration over all the moves at the root node, not a single call to the search.
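Concretely, something like this sketch (search, make_move, print_new_best and the rest stand in for the engine's own routines): whenever a later root move returns a score above alpha, the best move has changed mid-iteration and can be reported immediately.

Code:
/* One iteration over the root moves (sketch, hypothetical names).
   When a later move beats the current best, the best move changes
   in the middle of the iteration and is printed right away. */
best_move = root_moves[0];
for (int i = 0; i < nmoves; i++) {
    make_move(pos, root_moves[i]);
    int score = -search(pos, -beta, -alpha, depth - 1);
    unmake_move(pos, root_moves[i]);
    if (score > alpha) {           /* new best move, mid-iteration */
        alpha = score;
        best_move = root_moves[i];
        print_new_best(best_move, score, depth);
    }
}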

Matthew:out
Some believe in the almighty dollar.

I believe in the almighty printf statement.