"panic time" and "easy moves"

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Uri Blass
Posts: 10314
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: "panic time" and "easy moves"

Post by Uri Blass »

I suggest that in order to detect easy or hard moves you should have a function that estimates the probability of the program changing its mind if you continue the search for another iteration.

I guess that the probability is usually 10-20%, but if you can detect cases when it is less than 5% then the move is a good candidate to be played faster, and if you can detect cases when it is more than 30% then the position is a good candidate for another iteration.

I suggest that you start with 100,000 searches from positions P(i) that you searched to depth d(i) in games, restricted to cases when you do not have a clearly winning score for one side at depth d(i)-1.

I think that a rule you use in only 1% of the cases has almost no effect on rating even in the best case, so we need rules that apply in at least 5% of the cases (or at least 5,000 of the searches).

A possible candidate rule for easy moves, based on the search to depth d(i)-1:
1) You used more than 80% of the time on the first move.
2) You did not change your mind in the previous 4 iterations.
3) There was no significant fail low in the last 4 iterations.

Questions:
1) How many searches are candidates to be an easy move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is less than 5% of (1), then we have a good candidate rule.


A possible candidate rule for hard moves:
1) You used less than 50% of the time on the first move.
2) You changed your mind in at least one of the last 2 iterations, or you had a significant fail low in one of the last 2 iterations.

Questions:
1) How many searches are candidates to be a hard move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is more than 30% of (1), then we have a good candidate rule for hard moves.
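
To make the rules concrete, here is a minimal C sketch of the two tests (all names are invented for illustration; the thresholds are simply the numbers from the rules above):

Code:

    /* Per-search statistics collected during the depth d(i)-1 iteration. */
    typedef struct {
        double first_move_time_fraction;   /* fraction of search time spent on the first root move */
        int    iterations_since_pv_change; /* iterations since the best move last changed */
        int    iterations_since_fail_low;  /* iterations since the last significant fail low */
    } SearchStats;

    /* Easy candidate: >80% of time on the first move, no best-move change
       and no significant fail low in the previous 4 iterations. */
    int is_easy_candidate(const SearchStats *s) {
        return s->first_move_time_fraction > 0.80
            && s->iterations_since_pv_change >= 4
            && s->iterations_since_fail_low  >= 4;
    }

    /* Hard candidate: <50% of time on the first move, plus a best-move change
       or a significant fail low within the last 2 iterations. */
    int is_hard_candidate(const SearchStats *s) {
        return s->first_move_time_fraction < 0.50
            && (s->iterations_since_pv_change < 2
             || s->iterations_since_fail_low  < 2);
    }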
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

Uri Blass wrote:I suggest that in order to detect easy or hard moves you should have a function that estimates the probability of the program changing its mind if you continue the search for another iteration.

I guess that the probability is usually 10-20%, but if you can detect cases when it is less than 5% then the move is a good candidate to be played faster, and if you can detect cases when it is more than 30% then the position is a good candidate for another iteration.

I suggest that you start with 100,000 searches from positions P(i) that you searched to depth d(i) in games, restricted to cases when you do not have a clearly winning score for one side at depth d(i)-1.

I think that a rule you use in only 1% of the cases has almost no effect on rating even in the best case, so we need rules that apply in at least 5% of the cases (or at least 5,000 of the searches).

A possible candidate rule for easy moves, based on the search to depth d(i)-1:
1) You used more than 80% of the time on the first move.
2) You did not change your mind in the previous 4 iterations.
3) There was no significant fail low in the last 4 iterations.

Questions:
1) How many searches are candidates to be an easy move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is less than 5% of (1), then we have a good candidate rule.


A possible candidate rule for hard moves:
1) You used less than 50% of the time on the first move.
2) You changed your mind in at least one of the last 2 iterations, or you had a significant fail low in one of the last 2 iterations.

Questions:
1) How many searches are candidates to be a hard move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is more than 30% of (1), then we have a good candidate rule for hard moves.
There's nothing you have suggested that I have not already tried/tested. Node counts (effort expended) are just one thing. How many times does the program change its mind during a single iteration? How many successive iterations does the program stick with the same move? Is the score trending up, down, or neutral? But to do something useful, a rule has to influence a significant number of moves. So far I'm not having a lot of luck beating the baseline of simply not handling "easy moves" at all. But I am, simultaneously, working on recognizing "hard moves", which may well pay off once I get off the "easy" branch of the project... Spending less time might not be nearly as important as spending more time when needed, and not just after a fail low.
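
For illustration only, the kind of logic I mean might look like this (a sketch with invented names and untuned placeholder multipliers, not actual engine code):

Code:

    /* Sketch: scale the target search time using last-iteration signals. */
    double time_multiplier(int pv_changes_this_iter, /* best-move changes within the current iteration */
                           int stable_iterations,    /* successive iterations with the same best move */
                           int score_trend)          /* -1 falling, 0 flat, +1 rising */
    {
        double m = 1.0;
        if (pv_changes_this_iter > 1) m *= 1.5; /* unsettled search: spend more */
        if (stable_iterations >= 4)   m *= 0.7; /* long-stable best move: spend less */
        if (score_trend < 0)          m *= 1.5; /* falling score: spend more, even before a fail low */
        return m;
    }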
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

Edmund wrote:I researched the idea described by Bob for the Glass chess engine.

I too ran into trouble with TT cutoffs.

Evert's idea is flawed because TT cutoffs often come from different depths than the current position, so the node counts stored in the table would be useless. (Sure, one could do some estimation for different depths, but I didn't resort to that.)

Glass used cut-node statistics (IIRC total_nodes / cut_on_first_move_searched) for root move ordering decisions, as a measure of complexity where there are no returned scores (unlike in Multi-PV search). For move ordering this gave an advantage; for time management decisions my testing setup was not able to produce anything significantly better than the original.

See http://chessprogramming.wikispaces.com/ ... +Heuristic for some theory.
Let me make sure I understand. For each root move, you add 1 to a counter if it was refuted by the first ply-2 move you searched, and then divide this number into the total number of root moves (whether you divide into or by doesn't really matter; it just determines whether you want large or small values)?

A value of "1" would say "very good ordering, likely to be easy"? A value larger than one suggests it is not so easy? I'll give it some thought, as it is similar to one of many ideas I have been playing with already. I've fiddled with node counts, the number of times the root move changed, etc. I'm somewhat suspicious that node counts will be proportional to your number, since smaller trees (easier to search) imply better move ordering with lots of quick cutoffs, and vice-versa...

I've about decided that raw node counts are problematic, since the values are far more inaccurate than I had originally thought... They might be something to "sway" the final evaluation of easy/normal/hard, but it doesn't look like they can be the primary determinant.
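
As a sketch of that "sway" role (invented names; thresholds are placeholders): the primary classification would come from other signals, with the node-count ratio only nudging borderline cases.

Code:

    /* Fraction of all nodes spent in the best root move's sub-tree. */
    double effort_ratio(long best_move_nodes, long total_nodes) {
        return total_nodes ? (double)best_move_nodes / total_nodes : 0.0;
    }

    /* Node counts only sway borderline ("normal") cases; they are too noisy
       to be the primary determinant. */
    int classify(int primary /* -1 hard, 0 normal, +1 easy */, double effort) {
        if (primary == 0 && effort > 0.90) return +1; /* nearly all effort on one move */
        if (primary == 0 && effort < 0.50) return -1; /* effort spread across many moves */
        return primary;
    }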
Edmund
Posts: 670
Joined: Mon Dec 03, 2007 3:01 pm
Location: Barcelona, Spain

Re: "panic time" and "easy moves"

Post by Edmund »

bob wrote:
Edmund wrote:I researched the idea described by Bob for the Glass chess engine.

I too ran into trouble with TT cutoffs.

Evert's idea is flawed because TT cutoffs often come from different depths than the current position, so the node counts stored in the table would be useless. (Sure, one could do some estimation for different depths, but I didn't resort to that.)

Glass used cut-node statistics (IIRC total_nodes / cut_on_first_move_searched) for root move ordering decisions, as a measure of complexity where there are no returned scores (unlike in Multi-PV search). For move ordering this gave an advantage; for time management decisions my testing setup was not able to produce anything significantly better than the original.

See http://chessprogramming.wikispaces.com/ ... +Heuristic for some theory.
Let me make sure I understand. For each root move, you add 1 to a counter if it was refuted by the first ply-2 move you searched, and then divide this number into the total number of root moves (whether you divide into or by doesn't really matter; it just determines whether you want large or small values)?

A value of "1" would say "very good ordering, likely to be easy"? A value larger than one suggests it is not so easy? I'll give it some thought, as it is similar to one of many ideas I have been playing with already. I've fiddled with node counts, the number of times the root move changed, etc. I'm somewhat suspicious that node counts will be proportional to your number, since smaller trees (easier to search) imply better move ordering with lots of quick cutoffs, and vice-versa...
No, not quite. Let me rephrase my idea.

For the sub-tree of each root move, calculate in what percentage of nodes a cutoff occurred on the first move.
Assumption: a lower first-move-cut ratio indicates a higher chance for that move to become the best move in the future.

Then adjust the remaining time by a factor depending on the minimum first-move-cut ratio over all the non-best moves.
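
As a sketch in C (names invented; the mapping from ratio to time factor is only a placeholder):

Code:

    /* Per-root-move statistics accumulated over that move's sub-tree. */
    typedef struct {
        long cut_nodes;         /* nodes that failed high in this sub-tree */
        long cut_on_first_move; /* of those, fail highs on the first move searched */
    } RootMoveStats;

    double first_move_cut_ratio(const RootMoveStats *m) {
        return m->cut_nodes ? (double)m->cut_on_first_move / m->cut_nodes : 1.0;
    }

    /* A low ratio among the non-best root moves means one of them may still
       overtake the current best move, so allow more of the remaining time. */
    double time_factor(const RootMoveStats *moves, int n, int best_index) {
        double min_ratio = 1.0;
        for (int i = 0; i < n; i++) {
            if (i == best_index) continue;
            double r = first_move_cut_ratio(&moves[i]);
            if (r < min_ratio) min_ratio = r;
        }
        return 1.0 + (1.0 - min_ratio); /* placeholder mapping: lower ratio -> more time */
    }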
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

Edmund wrote:
bob wrote:
Edmund wrote:I researched the idea described by Bob for the Glass chess engine.

I too ran into trouble with TT cutoffs.

Evert's idea is flawed because TT cutoffs often come from different depths than the current position, so the node counts stored in the table would be useless. (Sure, one could do some estimation for different depths, but I didn't resort to that.)

Glass used cut-node statistics (IIRC total_nodes / cut_on_first_move_searched) for root move ordering decisions, as a measure of complexity where there are no returned scores (unlike in Multi-PV search). For move ordering this gave an advantage; for time management decisions my testing setup was not able to produce anything significantly better than the original.

See http://chessprogramming.wikispaces.com/ ... +Heuristic for some theory.
Let me make sure I understand. For each root move, you add 1 to a counter if it was refuted by the first ply-2 move you searched, and then divide this number into the total number of root moves (whether you divide into or by doesn't really matter; it just determines whether you want large or small values)?

A value of "1" would say "very good ordering, likely to be easy"? A value larger than one suggests it is not so easy? I'll give it some thought, as it is similar to one of many ideas I have been playing with already. I've fiddled with node counts, the number of times the root move changed, etc. I'm somewhat suspicious that node counts will be proportional to your number, since smaller trees (easier to search) imply better move ordering with lots of quick cutoffs, and vice-versa...
No, not quite. Let me rephrase my idea.

For the sub-tree of each root move, calculate in what percentage of nodes a cutoff occurred on the first move.
Assumption: a lower first-move-cut ratio indicates a higher chance for that move to become the best move in the future.

Then adjust the remaining time by a factor depending on the minimum first-move-cut ratio over all the non-best moves.
OK, so basically my current "fail high %" number (presently the total number of first-move fail highs divided by the total number of fail highs), but now computed for each root move individually?

I can certainly measure that and display it during games to see what it looks like. Won't be very hard to do.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: "panic time" and "easy moves"

Post by bob »

bob wrote:
Edmund wrote:
bob wrote:
Edmund wrote:I researched the idea described by Bob for the Glass chess engine.

I too ran into trouble with TT cutoffs.

Evert's idea is flawed because TT cutoffs often come from different depths than the current position, so the node counts stored in the table would be useless. (Sure, one could do some estimation for different depths, but I didn't resort to that.)

Glass used cut-node statistics (IIRC total_nodes / cut_on_first_move_searched) for root move ordering decisions, as a measure of complexity where there are no returned scores (unlike in Multi-PV search). For move ordering this gave an advantage; for time management decisions my testing setup was not able to produce anything significantly better than the original.

See http://chessprogramming.wikispaces.com/ ... +Heuristic for some theory.
Let me make sure I understand. For each root move, you add 1 to a counter if it was refuted by the first ply-2 move you searched, and then divide this number into the total number of root moves (whether you divide into or by doesn't really matter; it just determines whether you want large or small values)?

A value of "1" would say "very good ordering, likely to be easy"? A value larger than one suggests it is not so easy? I'll give it some thought, as it is similar to one of many ideas I have been playing with already. I've fiddled with node counts, the number of times the root move changed, etc. I'm somewhat suspicious that node counts will be proportional to your number, since smaller trees (easier to search) imply better move ordering with lots of quick cutoffs, and vice-versa...
No, not quite. Let me rephrase my idea.

For the sub-tree of each root move, calculate in what percentage of nodes a cutoff occurred on the first move.
Assumption: a lower first-move-cut ratio indicates a higher chance for that move to become the best move in the future.

Then adjust the remaining time by a factor depending on the minimum first-move-cut ratio over all the non-best moves.
OK, so basically my current "fail high %" number (presently the total number of first-move fail highs divided by the total number of fail highs), but now computed for each root move individually?

I can certainly measure that and display it during games to see what it looks like. Won't be very hard to do.
Still looking at these issues. One new idea: I am experimenting with counting the number of fail highs per root branch, but for fail highs on the non-first moves I add depth*depth*position rather than just one. That makes bad ordering closer to the root count for more. As the non-first-counter / fail-highs ratio climbs, move ordering is going wrong, and the ratio seems to reflect closeness to the root a little better. Still lots of analysis to do...

BTW, the "position" above is simply the position in the move list of the move that failed high. Not sure that is useful yet; I'm working on printing out a lot of info as the search runs, and will then play on ICC for a while and look at the numbers. I want SOMETHING that is correlated with how easy or hard a move is to search.
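
In code form, the counter I'm describing might look something like this (a sketch of the idea as stated, not the actual implementation):

Code:

    /* Per-root-branch fail-high statistics. */
    typedef struct {
        long fail_highs;        /* all fail highs in this root move's sub-tree */
        long weighted_nonfirst; /* depth*depth*position summed over non-first fail highs */
    } RootBranchStats;

    /* Called when a node in this root move's sub-tree fails high.
       'position' is the index in the move list of the move that failed high. */
    void record_fail_high(RootBranchStats *b, int depth, int position) {
        b->fail_highs++;
        if (position > 0) /* the fail high was not on the first move searched */
            b->weighted_nonfirst += (long)depth * depth * position;
    }

    /* As weighted_nonfirst / fail_highs climbs, ordering is going wrong, and
       fail highs nearer the root (larger remaining depth) count for more. */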
Uri Blass
Posts: 10314
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: "panic time" and "easy moves"

Post by Uri Blass »

bob wrote:
Uri Blass wrote:I suggest that in order to detect easy or hard moves you should have a function that estimates the probability of the program changing its mind if you continue the search for another iteration.

I guess that the probability is usually 10-20%, but if you can detect cases when it is less than 5% then the move is a good candidate to be played faster, and if you can detect cases when it is more than 30% then the position is a good candidate for another iteration.

I suggest that you start with 100,000 searches from positions P(i) that you searched to depth d(i) in games, restricted to cases when you do not have a clearly winning score for one side at depth d(i)-1.

I think that a rule you use in only 1% of the cases has almost no effect on rating even in the best case, so we need rules that apply in at least 5% of the cases (or at least 5,000 of the searches).

A possible candidate rule for easy moves, based on the search to depth d(i)-1:
1) You used more than 80% of the time on the first move.
2) You did not change your mind in the previous 4 iterations.
3) There was no significant fail low in the last 4 iterations.

Questions:
1) How many searches are candidates to be an easy move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is less than 5% of (1), then we have a good candidate rule.


A possible candidate rule for hard moves:
1) You used less than 50% of the time on the first move.
2) You changed your mind in at least one of the last 2 iterations, or you had a significant fail low in one of the last 2 iterations.

Questions:
1) How many searches are candidates to be a hard move based on this rule?
2) In how many of those candidates did you change your mind at depth d(i)?

If (1) is more than 5,000 and (2) is more than 30% of (1), then we have a good candidate rule for hard moves.
There's nothing you have suggested that I have not already tried/tested. Node counts (effort expended) are just one thing. How many times does the program change its mind during a single iteration? How many successive iterations does the program stick with the same move? Is the score trending up, down, or neutral? But to do something useful, a rule has to influence a significant number of moves. So far I'm not having a lot of luck beating the baseline of simply not handling "easy moves" at all. But I am, simultaneously, working on recognizing "hard moves", which may well pay off once I get off the "easy" branch of the project... Spending less time might not be nearly as important as spending more time when needed, and not just after a fail low.
I did not mean to suggest something new about the definition of easy and hard moves, only a way to test a definition before you test it in games.

My suggestion is to evaluate the idea first without relying on game results.

My point is that you need the following conditions to hold before testing in games (without them, testing is probably a waste of time, because in the best case you will get less than a 5 Elo improvement).

For easy moves:
1) At least 5% of your moves should be easy moves.
2) The probability that you change your mind on an easy move in the next iteration is at most half the probability that you change your mind on other moves.

For hard moves:
1) At least 5% of your moves should be hard moves.
2) The probability that you change your mind on a hard move in the next iteration is at least twice the probability that you change your mind on other moves.
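
Expressed as a check over logged search data, the easy-move conditions might look like this (just a sketch; the log format and names are invented):

Code:

    /* One logged search: did the candidate rule fire at depth d(i)-1,
       and did the best move change at depth d(i)? */
    typedef struct {
        int rule_fired;   /* 0 or 1 */
        int mind_changed; /* 0 or 1 */
    } SearchLog;

    /* Easy-move version of the two conditions: the rule fires in at least
       5% of searches, and the mind-change probability when it fires is at
       most half of the mind-change probability when it does not. */
    int easy_rule_is_worth_testing(const SearchLog *logs, int n) {
        int fired = 0, changed_fired = 0, changed_other = 0;
        for (int i = 0; i < n; i++) {
            if (logs[i].rule_fired) { fired++; changed_fired += logs[i].mind_changed; }
            else                    { changed_other += logs[i].mind_changed; }
        }
        if (fired * 20 < n || fired == n) return 0; /* needs >= 5% and a comparison group */
        double p_fired = (double)changed_fired / fired;
        double p_other = (double)changed_other / (n - fired);
        return 2.0 * p_fired <= p_other;
    }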