right handling of "allocated time (over)" during search
Moderators: hgm, Rebel, chrisw
-
- Posts: 234
- Joined: Sat Jan 17, 2015 11:54 pm
right handling of "allocated time (over)" during search
I used to check for insufficient time only in the ID loop, but now that I have one thread for the UCI loop and another for the search, I'm going to check for time-over during the search itself. How should I get out of the search if the time is over, and what score should I return (by the way, I'm using plain fail-hard alpha-beta, in case it makes a difference)? And can the partial result be used for anything, or is it good for nothing?
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: right handling of "allocated time (over)" during search
MahmoudUthman wrote:... how should I get out of the search if the time is over, what score should I return ... and can the partial result be used for anything or is it good for nothing?

When time runs out, you generally want to just return, but you should notice everywhere along the way that time has run out, and you should not update the best score, best move, PV, or anything. The only partial result you can use is a partial result that is complete. I.e., if you search the first root move completely and get a score, you can use that even if you run out of time before you finish searching the rest of the root moves, assuming you update the best move / PV / score when you finish each root move and not at the end of the search...
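The scheme above can be sketched as a root loop that adopts a move's score only after that move has been searched to completion. This is a toy model, not real engine code: the per-move "cost" stands in for the nodes a real search of that move would take, and all names (pick_root_move, NO_MOVE, etc.) are made up for illustration.

```c
enum { NO_MOVE = -1 };

/* Toy root loop: scores[i] is the value a full search of root move i
 * would return, cost[i] the node budget that search would consume.
 * The best move/score is updated only after a move completes, so an
 * abort mid-move leaves the last completed result intact. */
int pick_root_move(const int *scores, const int *cost, int n,
                   int budget, int *best_score)
{
    int best = NO_MOVE;
    *best_score = -1000000;
    for (int i = 0; i < n; i++) {
        if (cost[i] > budget)          /* time runs out mid-move:      */
            break;                      /* discard this partial result */
        budget -= cost[i];
        if (scores[i] > *best_score) { /* move i finished: safe to use */
            *best_score = scores[i];
            best = i;
        }
    }
    return best;
}
```

With scores {10, 50, 90}, costs {5, 5, 100} and a budget of 20, only the first two moves complete, so move 1 (score 50) is played even though move 2 would have scored higher.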
-
- Posts: 4052
- Joined: Thu May 15, 2008 9:57 pm
- Location: Berlin, Germany
- Full name: Sven Schüle
Re: right handling of "allocated time (over)" during search
MahmoudUthman wrote:... how should I get out of the search if the time is over, what score should I return ...?

When detecting a timeout (at the top of a node) I set a flag "stopSearch" to true and return a very high positive score, so that the parent (which immediately negates the score) is guaranteed not to update its best score. Also, in each search function I check in the move loop, after unmakeMove(), whether stopSearch has been set, and I do this before taking any action based on the score that was just returned by the subtree. If stopSearch is set I immediately return a very high positive score as well, except in the root node, where I only break out of the loop (and I also stop the iterative-deepening loop if stopSearch is set). Doing so ignores all incomplete subtrees of the root node. Results for root moves that were searched completely before stopSearch was set are not affected.
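Sven's scheme can be sketched as follows on a toy negamax. This is not his engine's code: the tree is an implicit binary tree with arbitrary leaf values, a node budget plays the role of the clock, and the names (search, eval, stop_search, INF_SCORE) are illustrative. The two key points from the post are marked in comments: the timeout test at the top of the node returning +INF_SCORE, and the stop_search check after the recursive call but before the returned score is used.

```c
#define INF_SCORE 1000000

static int  stop_search;
static long nodes, node_budget;

/* arbitrary leaf values for the toy tree */
static int eval(int id) { return (id * 37) % 100 - 50; }

static int search(int id, int depth)
{
    if (++nodes >= node_budget) {       /* timeout test at top of node  */
        stop_search = 1;
        return INF_SCORE;               /* parent negates: never "best" */
    }
    if (depth == 0)
        return eval(id);

    int best = -INF_SCORE;
    for (int move = 0; move < 2; move++) {
        int score = -search(2 * id + move, depth - 1);
        if (stop_search)                /* check BEFORE using the score */
            return INF_SCORE;
        if (score > best)
            best = score;
    }
    return best;
}

/* helper for demonstration: fresh 2-ply search under a given budget */
static int search_with_budget(long budget)
{
    nodes = 0;
    stop_search = 0;
    node_budget = budget;
    return search(1, 2);
}
```

With a generous budget the 2-ply search completes normally; with a budget of 3 nodes the timeout strikes at a leaf, the flag trips, and +INF_SCORE propagates all the way up without ever being adopted as a best score.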
-
- Posts: 234
- Joined: Sat Jan 17, 2015 11:54 pm
Re: right handling of "allocated time (over)" during search
bob wrote:The only partial result you can use is a partial result that is complete. ...

About that last part: my understanding is that this only works because the best move from the last iteration is the first move searched in the next iteration. So, for example, if I have fully searched 6 moves in the new iteration when the time runs out, it's okay to compare just those 6 moves and choose the best one, because it was proven to be better than the previous best (the 1st move), though it may not be the optimal move for the current iteration. Is this right?
-
- Posts: 234
- Joined: Sat Jan 17, 2015 11:54 pm
Re: right handling of "allocated time (over)" during search
Sven Schüle wrote:Also in each search function I check in the move loop after unmakeMove() whether stopSearch has been set ... If stopSearch is set I immediately return a very high positive score as well ...

Wouldn't this discard a valid result if the unmade move was the last one, and its parent was also the last one, and so on all the way back to the root?
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: right handling of "allocated time (over)" during search
MahmoudUthman wrote:... if I have fully searched 6 moves in the new iteration and the time ran out it's okay to compare just those 6 moves and choose the best one because it was proven to be better than the previous best ... is this right?

Yep...
-
- Posts: 27791
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: right handling of "allocated time (over)" during search
Sven Schüle wrote:Also in each search function I check in the move loop after unmakeMove() whether stopSearch has been set, and I do this before attempting any action based on the score that was just returned by the subtree.

It seems to me this is sufficient, and the test at the top of Search() is somewhat redundant. It doesn't matter much if the engine first goes a few ply deeper before it starts unwinding the search. Even if the timeout hits you in the root, and you just happened to embark on searching a line that goes 100 ply deep, that would still be only 100 nodes (i.e. ~0.1 msec) before you make the first (and then immediately all) returns. As an engine spends > 99% of its time in QS or d <= 2 nodes, on average you would expect to do only about one extra node.
MahmoudUthman wrote:wouldn't this discard a valid result if the unmade move was the last one, and its parent was also the last one, and so on back to the root?

Yes, it would. Once in a million years...
This is not very damaging anyway. If the timeout hits you at ply 30, you discard the results of 30 nodes. The nodes that required a significant amount of work will almost certainly be kept in the hash table for the next search, together with all their daughters. Next time you visit them, the score of every move whose search did complete will be satisfied by an immediate hash hit in the daughter. Only the last daughter also had her result discarded, and isn't in the hash table with sufficient depth. So you search that, and get the same situation: all moves give hash hits, except the last. So the overhead is limited to doing hash probes in all daughters of those 30 nodes, say 1000 hash probes (if they were all PV nodes) of ~100 ns each.
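The mechanism being relied on here is an ordinary depth-preferred transposition-table probe: a subtree that completed before the abort was stored with its search depth, so the next search recovers it with one lookup instead of re-searching. A minimal sketch, with illustrative names (Entry, tt_probe, tt_store) and a deliberately simplistic always-replace table:

```c
typedef struct {
    unsigned long key;     /* position hash key            */
    int depth;             /* depth the entry was searched */
    int score;
} Entry;

#define TT_SIZE 1024
static Entry tt[TT_SIZE];

/* Hit only if the stored entry is for this position and was searched
 * at least as deep as we now need; otherwise the subtree must be
 * (re)searched. */
static int tt_probe(unsigned long key, int depth, int *score)
{
    Entry *e = &tt[key % TT_SIZE];
    if (e->key == key && e->depth >= depth) {
        *score = e->score;
        return 1;
    }
    return 0;
}

static void tt_store(unsigned long key, int depth, int score)
{
    Entry *e = &tt[key % TT_SIZE];
    e->key = key;
    e->depth = depth;
    e->score = score;
}
```

A subtree stored at depth 5 satisfies a later depth-5 request immediately, but a depth-6 request misses and triggers a fresh search, which is exactly the "last daughter without sufficient depth" case in the post.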
-
- Posts: 4052
- Joined: Thu May 15, 2008 9:57 pm
- Location: Berlin, Germany
- Full name: Sven Schüle
Re: right handling of "allocated time (over)" during search
MahmoudUthman wrote:wouldn't this discard a valid result if the unmade move was the last one, and its parent was also the last one, and so on back to the root?

It wouldn't, since I only detect the timeout at the top of a node, so a timeout always means there was at least one node missing. The performance penalty for checking for timeout at the top of each node (but in fact only once every N nodes, for instance N = 4096) is negligible.
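The "once every N nodes" idea is typically implemented with a counter and a bitmask, so the per-node cost is a single AND and branch, and the actual clock read happens only once per 4096 nodes. A sketch with made-up names; time_is_up() stands in for a real clock query and, in this toy, simply pretends the allotted time has already expired:

```c
static long node_cnt;
static int  stop_flag;

/* Toy stand-in for a real clock read (gettimeofday etc.); here it
 * always reports that time is up, so we can see when the check fires. */
static int time_is_up(void) { return 1; }

/* Called once at the top of every node; the real timer query runs
 * only when the low 12 bits of the node counter wrap to zero. */
static void check_time(void)
{
    if ((++node_cnt & 4095) == 0 && time_is_up())
        stop_flag = 1;
}

/* helper for demonstration: how many nodes pass before the flag trips */
static int calls_until_stop(void)
{
    int n = 0;
    node_cnt  = 0;
    stop_flag = 0;
    while (!stop_flag && n < 1000000) {
        check_time();
        n++;
    }
    return n;
}
```

Even with time already expired, the flag trips only on the 4096th node, which is the bounded detection latency Sven is describing.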
In contrast to that, the method proposed by HGM may actually suffer from the problem you described, and I don't like it, since discarding a valid result may occur once every few thousand games when playing ultra-fast games with only a few milliseconds per search. For instance, at a speed of 1 Mnps and with 2 seconds for 40 moves, the average thinking time per move will be slightly less than 50 milliseconds, corresponding to 50000 nodes, so I would expect the unwanted situation once every 50000 searches. That is not much, but I still don't like it.
-
- Posts: 27791
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: right handling of "allocated time (over)" during search
You would expect it to happen in a leaf once every 50,000 moves. But it could also happen closer to the root. Say you search 10 ply deep; then there are 10 nodes in the tree from which the search would naturally unwind, so once every 5,000 moves the timeout would strike in such a node. That is every 5,000 x 50 msec = 250 sec.
So every 250 sec you would save 1 to 10 times 30 hash probes of, say, 100 ns each, i.e. ~15 usec on average. That is 0.000006% of the time. I understand the overhead of doing the test is small, but I doubt it will be that small. At 1 Mnps a node takes 4,000 clock cycles at 4 GHz. If the test took even 1/4 cycle (because the CPU does 4 instructions per clock), that is already 1 part in 16,000, or 0.00625%. That is about 1,000 times as much as what you hope to save...
Of course this is still all negligible, so time wastage is not really the argument on which to decide. But why complicate the code to slow it down, even if it is only by an unmeasurably small amount?
Don't worry, be happy (that your code is faster)!
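hgm's back-of-envelope comparison can be made explicit; every number below is an assumption taken from the post (15 usec saved per 250 s, a 1/4-cycle test in a 4,000-cycle node), not a measurement.

```c
#include <math.h>

/* fraction of total time saved by skipping the redundant re-searches:
 * ~15 usec of hash probes once every 250 seconds */
double saved_fraction(void)
{
    return 15e-6 / 250.0;
}

/* fraction of total time spent on the per-node timeout test:
 * 1/4 of a clock cycle out of a 4,000-cycle node */
double test_fraction(void)
{
    return 0.25 / 4000.0;
}
```

The test costs about 6.25e-5 of the time while the saving is about 6e-8, a ratio of roughly 1,000, matching the conclusion in the post (and also showing both numbers are far too small to matter).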
-
- Posts: 4052
- Joined: Thu May 15, 2008 9:57 pm
- Location: Berlin, Germany
- Full name: Sven Schüle
Re: right handling of "allocated time (over)" during search
hgm wrote:You would expect it once every 50,000 moves to happen in a leaf. But it could also happen closer to the root. ... Of course this is still all negligible, so time wastage is not really the argument on which to decide. ... Don't worry, be happy (that your code is faster)!

I am not arguing about performance, and I am not even sure which way is faster, "yours" or "mine". I think we agree that performance is not an issue in this case, in either direction.
My argument was about possibly discarding a valid result, since that was what the OP asked in one of his posts. As mentioned above, at an average of 50000 nodes per search in an ultra-fast game, the chance that the timeout occurs after the very last node of the tree (for the current iteration) has been processed is at least one in 50000 per search. Here I also assume, for simplicity, that almost all nodes belong to the final iteration. Whether they are leaves, internal nodes, or qsearch nodes does not matter, since the check for timeout is not restricted to leaves. What I had missed so far is the average length of a game, say 60 moves, which gives a probability of it occurring at least once per game of about 1 - ((49999 / 50000) ^ 60) = 0.0012. That would be roughly once in 800 games for this example. Too much for my taste.