time usage

Discussion of chess software programming and technical issues.

Moderator: Ras

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

time usage

Post by bob »

Here is an old idea, never fully explained or studied, that could provide fuel for an interesting discussion.

When to use extra time in the search?

One example, from the mid-70s in my program, is the idea of a fail low forcing you to use more time to avoid a tactical problem. And it works well. But there is another idea, from Hsu and Campbell in Deep Blue, that no one has really discussed much.

Hsu reported that they would use extra time when the size of the tree could not be expressed as a normal function of the depth. In other words, when the effective branching factor goes to hell in a handbasket, you might want to use more time right then, before you commit to a path that later fails low with no solution (searching longer after a fail low sometimes finds a move to save the position, but other times there is no move; you are already in too deep).
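Hsu's criterion can be sketched concretely: track the nodes searched in each completed iteration and flag the search when the latest growth ratio breaks away from the geometric trend of the earlier ones. This is a minimal illustration, not Deep Blue's actual scheme; the `tolerance` constant is a made-up placeholder that would need tuning.

```python
def ebf_alarm(nodes_per_iter, tolerance=1.5):
    """nodes_per_iter: nodes searched in each completed iteration.
    Return True when the latest growth ratio exceeds the average of the
    earlier ratios by more than `tolerance`, i.e. when tree size stops
    being a 'normal function of the depth'."""
    if len(nodes_per_iter) < 3:
        return False  # too little history to estimate a trend
    ratios = [b / a for a, b in zip(nodes_per_iter, nodes_per_iter[1:])]
    baseline = sum(ratios[:-1]) / (len(ratios) - 1)  # average EBF so far
    return ratios[-1] > tolerance * baseline
```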

Usually the EBF goes up when move ordering goes bad. And move ordering goes bad because we are now beginning to uncover something in the tree that is making our old calculations fail (best moves in the hash are no longer best moves, killer moves are no longer killers, etc.). I once asked him about this idea, but we never got very far, as we sidetracked onto something else and I forgot to come back to the point. I'm going to fire off an email to Murray to ask, but I thought this idea deserved some discussion, since it would be nice to recognize pre-fail-low conditions and spend more time to find a better move before we commit to a path that is going to fail low later, when extra time will be of no help as it will be too late to avoid the problem.

I'd bet that in any game that is lost, there is a point where one could search longer and find a better move before setting out on the losing path. I'm going to analyze a few lost games (longer time controls) to see if I can spot any sort of trend that might become an algorithm to recognize this condition earlier than I currently do.

Thoughts???
frankp
Posts: 233
Joined: Sun Mar 12, 2006 3:11 pm

Re: time usage

Post by frankp »

It would seem logical to use time where you have an increased probability of avoiding a losing position. I guess the question is whether, on average throughout a game, this is achieved by the Hsu method or by resolving fail lows. I.e., how much time is wasted on the Hsu method worrying unnecessarily, compared to the benefit where you were right to be concerned. (Which I guess is linked to the theoretical importance of increased EBF.) Looks like a question to be answered, or at least informed, by testing with a cluster :-)
hgm
Posts: 28390
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: time usage

Post by hgm »

If I were you I would start measuring how many Elo your current (advanced) time management buys you over a simple-minded finish-the-iteration scheme. And then it would be interesting to know what is the best distribution of time over the game in either scheme. If you have to do 40 moves in time T, is it better to do the first 20 moves of every session in T/2, or 3/4*T, or what? I.e., if you take a simple formula TargetTime = f*TimeLeft/MovesLeft, do you perform best when f=1, or can you improve on it by taking f=1.5 or f=2?
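The TargetTime formula is easy to play with offline. Here is a toy sketch (the function names and the cap are illustrative, not from any engine) showing how f redistributes a 40-moves-in-T session:

```python
def target_time(time_left, moves_left, f=1.0):
    """TargetTime = f * TimeLeft / MovesLeft."""
    return f * time_left / moves_left

def session_profile(total_time, moves=40, f=1.5):
    """Spend target_time on every move; return the per-move budgets."""
    spent, remaining = [], total_time
    for m in range(moves):
        t = min(target_time(remaining, moves - m, f), remaining)
        spent.append(t)
        remaining -= t
    return spent
```

With f=1 every move gets an equal share of what is left, so the first 20 moves use exactly T/2; with f>1 the early moves get more, which is one way to pose the first-20-moves-in-3/4*T question numerically.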

Only when such elementary dependencies are known will it be possible to calculate how much the allocation of extra time to one move will hurt you for the rest of the game, and thus whether it is worth it or not.
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: time usage

Post by jwes »

bob wrote:Here is an old idea, never fully explained or studied, that could provide fuel for an interesting discussion.

When to use extra time in the search?

One example, from the mid-70s in my program, is the idea of a fail low forcing you to use more time to avoid a tactical problem. And it works well. But there is another idea, from Hsu and Campbell in Deep Blue, that no one has really discussed much.

Hsu reported that they would use extra time when the size of the tree could not be expressed as a normal function of the depth. In other words, when the effective branching factor goes to hell in a handbasket, you might want to use more time right then, before you commit to a path that later fails low with no solution (searching longer after a fail low sometimes finds a move to save the position, but other times there is no move; you are already in too deep).

Usually the EBF goes up when move ordering goes bad. And move ordering goes bad because we are now beginning to uncover something in the tree that is making our old calculations fail (best moves in the hash are no longer best moves, killer moves are no longer killers, etc.). I once asked him about this idea, but we never got very far, as we sidetracked onto something else and I forgot to come back to the point. I'm going to fire off an email to Murray to ask, but I thought this idea deserved some discussion, since it would be nice to recognize pre-fail-low conditions and spend more time to find a better move before we commit to a path that is going to fail low later, when extra time will be of no help as it will be too late to avoid the problem.

I'd bet that in any game that is lost, there is a point where one could search longer and find a better move before setting out on the losing path. I'm going to analyze a few lost games (longer time controls) to see if I can spot any sort of trend that might become an algorithm to recognize this condition earlier than I currently do.

Thoughts???
I was looking into this a while ago, and while I did not come up with anything positive, it did appear that the fail-high-first percentage did not drop significantly on these large fail lows. It might be worthwhile investigating what causes the EBF to "go to hell". Some other possibilities are:
More moves - e.g. if both sides promote in an endgame.
More extensions/less reductions - e.g. many checks.
Fewer null move cutoffs.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: time usage

Post by bob »

frankp wrote:It would seem logical to use time where you have an increased probability of avoiding a losing position. I guess the question is whether, on average throughout a game, this is achieved by the Hsu method or by resolving fail lows. I.e., how much time is wasted on the Hsu method worrying unnecessarily, compared to the benefit where you were right to be concerned. (Which I guess is linked to the theoretical importance of increased EBF.) Looks like a question to be answered, or at least informed, by testing with a cluster :-)
That's the easy part (testing). A more difficult question is "How do we determine that things are turning south in the game and more time is justified?" We could use the usual fail-high-on-first-move percentage, or the time for the last iteration divided by the time for the previous one to catch a sudden jump in complexity. Etc. That's the part that needs thought and, perhaps, ideas.

I can quite easily tell whether an idea is good or bad, since testing is easy. It is coming up with workable ideas that is interesting.
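For concreteness, the two measurements named above could be combined into one trigger. This is only a sketch of the shape of such a detector; the thresholds (`fh_floor`, `time_jump`) are placeholders that testing would have to tune, not values from any engine:

```python
def trouble_detected(fh_first_pct, iter_times,
                     fh_floor=0.85, time_jump=4.0):
    """fh_first_pct: fraction of nodes that failed high on the first move.
    iter_times: wall-clock time of each completed iteration so far.
    Trigger on bad move ordering OR a sudden jump in iteration cost."""
    ordering_bad = fh_first_pct < fh_floor
    sudden_jump = (len(iter_times) >= 2 and
                   iter_times[-1] > time_jump * iter_times[-2])
    return ordering_bad or sudden_jump
```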
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: time usage

Post by bob »

jwes wrote:
bob wrote:Here is an old idea, never fully explained or studied, that could provide fuel for an interesting discussion.

When to use extra time in the search?

One example, from the mid-70s in my program, is the idea of a fail low forcing you to use more time to avoid a tactical problem. And it works well. But there is another idea, from Hsu and Campbell in Deep Blue, that no one has really discussed much.

Hsu reported that they would use extra time when the size of the tree could not be expressed as a normal function of the depth. In other words, when the effective branching factor goes to hell in a handbasket, you might want to use more time right then, before you commit to a path that later fails low with no solution (searching longer after a fail low sometimes finds a move to save the position, but other times there is no move; you are already in too deep).

Usually the EBF goes up when move ordering goes bad. And move ordering goes bad because we are now beginning to uncover something in the tree that is making our old calculations fail (best moves in the hash are no longer best moves, killer moves are no longer killers, etc.). I once asked him about this idea, but we never got very far, as we sidetracked onto something else and I forgot to come back to the point. I'm going to fire off an email to Murray to ask, but I thought this idea deserved some discussion, since it would be nice to recognize pre-fail-low conditions and spend more time to find a better move before we commit to a path that is going to fail low later, when extra time will be of no help as it will be too late to avoid the problem.

I'd bet that in any game that is lost, there is a point where one could search longer and find a better move before setting out on the losing path. I'm going to analyze a few lost games (longer time controls) to see if I can spot any sort of trend that might become an algorithm to recognize this condition earlier than I currently do.

Thoughts???
I was looking into this a while ago, and while I did not come up with anything positive, it did appear that the fail-high-first percentage did not drop significantly on these large fail lows. It might be worthwhile investigating what causes the EBF to "go to hell". Some other possibilities are:
More moves - e.g. if both sides promote in an endgame.
More extensions/less reductions - e.g. many checks.
Fewer null move cutoffs.
Something else I don't measure. PVS does almost exclusively null-window searches. If those start to fail high too frequently, but the opened window fails low, that can drive up the EBF without changing the fh% significantly. Clearly something goes wrong at times.
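That missing measurement could be collected with a small counter, called at each PV node after the null-window (scout) search and any re-search. The class and names below are illustrative, not from Crafty or any other engine:

```python
class PvsStats:
    """Track how often a PVS scout fail-high does not survive the
    re-search with the opened window (a 'wasted' re-search)."""

    def __init__(self):
        self.scout_fail_highs = 0
        self.wasted_researches = 0  # fail-high that then failed low

    def record(self, scout_failed_high, research_score, alpha):
        if scout_failed_high:
            self.scout_fail_highs += 1
            if research_score <= alpha:  # opened window failed low again
                self.wasted_researches += 1

    def wasted_ratio(self):
        return (self.wasted_researches / self.scout_fail_highs
                if self.scout_fail_highs else 0.0)
```

A rising wasted ratio with a steady fail-high percentage would be exactly the pattern described above.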
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: time usage

Post by bob »

hgm wrote:If I were you I would start measuring how many Elo your current (advanced) time management buys you over a simple-minded finish-the-iteration scheme. And then it would be interesting to know what is the best distribution of time over the game in either scheme. If you have to do 40 moves in time T, is it better to do the first 20 moves of every session in T/2, or 3/4*T, or what? I.e., if you take a simple formula TargetTime = f*TimeLeft/MovesLeft, do you perform best when f=1, or can you improve on it by taking f=1.5 or f=2?

Only when such elementary dependencies are known will it be possible to calculate how much the allocation of extra time to one move will hurt you for the rest of the game, and thus whether it is worth it or not.
Easy enough to do. Obvious question: when to stop doing iterations? When time left is < EBF * time of last iteration? (Or, in words, when it appears that the time for the next iteration will exceed the time left for this search.) I used to do this _way_ back in the 70s, but went to the partial-iteration idea. Probably worthwhile to answer the question "should you always start another iteration regardless of time remaining, or should you only start it if you have some hope of finishing it?"
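The stopping rule in words above, as a sketch. The `optimism` knob is hypothetical: at 1.0 it is the strict 70s rule, while values below 1.0 start iterations that may only partly finish, recovering the partial-iteration behaviour:

```python
def start_next_iteration(time_left, last_iter_time, ebf, optimism=1.0):
    """Predict the next iteration to cost about ebf * last_iter_time
    and start it only if the clock can plausibly cover that."""
    return time_left > optimism * ebf * last_iter_time
```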
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: time usage

Post by mcostalba »

bob wrote: That's the easy part (testing). A more difficult question is "How to determine that things are turning south in the game and more time is justified?"
I don't understand this; you already said:

"Usually the EBF goes up when move ordering goes bad."

So it would seem that you just need to measure the EBF, for instance as the ratio between the nodes searched in the last iteration and the nodes searched in the previous one.

If it is above a certain threshold, it means we have a problem, according to this idea.

Am I missing something?
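That check is a one-liner; the threshold would have to come out of testing (5.0 below is only a placeholder), and comparing against the game's historical EBF rather than a fixed constant is an obvious refinement:

```python
def ebf_problem(nodes_last_iter, nodes_prev_iter, threshold=5.0):
    """Flag the position when the node ratio of the last two
    iterations exceeds a tuned threshold."""
    return (nodes_prev_iter > 0 and
            nodes_last_iter / nodes_prev_iter > threshold)
```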
zamar
Posts: 613
Joined: Sun Jan 18, 2009 7:03 am

Re: time usage

Post by zamar »

One example from the mid 70's in my program, is the idea of a fail-low forcing you to use more time to avoid a tactical problem. And it works well.
Stockfish also uses this idea, but instead of re-searching with a bigger window, it goes directly to searching for alternatives to the first move. If the aspiration window is small enough, this behaviour should somewhat prevent the EBF explosion, and it is a good indicator that we should allocate more time.
bob wrote: Thoughts???
The problem is that if we know we have a very bad EBF, an extra iteration can be very costly. It could take ages just to complete the first move. I'm not saying that this couldn't work, but it's very tricky to decide how much extra time you are willing to spend before giving up.
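One way to make that decision concrete is to grow the budget in capped increments per fail low, so that a hopeless position cannot eat the whole clock. The constants here are purely illustrative:

```python
def extended_budget(base_time, fail_lows, step=0.5, max_factor=3.0):
    """Each successive fail low adds step * base_time of extra time,
    capped at max_factor * base_time overall."""
    return base_time * min(max_factor, 1.0 + step * fail_lows)
```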
Joona Kiiski
stegemma
Posts: 859
Joined: Mon Aug 10, 2009 10:05 pm
Location: Italy
Full name: Stefano Gemma

Re: time usage

Post by stegemma »

Maybe in those situations it is better just to choose randomly from the best bad moves that we have found, instead of using more time. In a human-versus-human game, if we find that we lose a pawn, for example, why lose a lot of time trying to save it? Even the clock is a piece (IMHO), so I think that saving time is better than searching a "degenerated" tree, hoping to find something not so bad.

Choosing randomly can maybe "surprise" the other program, so that it doesn't already have our move from pondering... and that will give us another little gain in time.

My question in fact is: if the branching factor is not the one we expect, due to no good moves being found, how can we hope to find a better move with this unknown BF?
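The random pick among near-best bad moves can be sketched in a few lines; the 15-centipawn margin is an arbitrary example, not a recommendation:

```python
import random

def pick_near_best(scored_moves, margin=15, rng=random):
    """scored_moves: list of (move, score) pairs, score in centipawns.
    Choose uniformly among moves within `margin` of the best score."""
    best = max(score for _, score in scored_moves)
    candidates = [m for m, score in scored_moves if score >= best - margin]
    return rng.choice(candidates)
```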