Because RomiChess has a very cheap evaluation function, calling the qsearch at internal nodes does not slow down the search by more than a few percent. Also, this way an evaluation is obtained for every node.
However, unless the qscore can be put to good use, it is a waste of time to retrieve it for internal nodes.
So far, I've not had any real success using this extra information.
Does anyone have any ideas?
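A minimal sketch of the scheme (generic negamax pseudocode, not RomiChess source; all names and signatures here are placeholders):

    /* Sketch: probe the qsearch once per node and keep the result
       around as extra information for the move loop. */
    extern int  Quiesce(int alpha, int beta);
    extern int  GenerateMoves(int *moves);
    extern void MakeMove(int move);
    extern void TakeBack(void);

    int Search(int alpha, int beta, int depth)
    {
        /* With a cheap eval this costs only a few percent. */
        int qscore = Quiesce(alpha, beta);
        if (depth <= 0)
            return qscore;

        int moves[256];
        int n = GenerateMoves(moves);
        for (int i = 0; i < n; i++) {
            MakeMove(moves[i]);
            int score = -Search(-beta, -alpha, depth - 1);
            TakeBack();
            if (score >= beta)
                return score;
            if (score > alpha)
                alpha = score;
            /* qscore stays in scope here for ordering/pruning experiments */
        }
        return alpha;
    }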
I am contemplating doing even a d=1 search with an enlarged or shifted window in every alpha node as soon as the depth reaches the threshold for NMR (so that would mean every node below it is re-searched at QS level). This is to extract moves that seem to win material or improve position (and exempt those from reduction), even if it is not enough to beat alpha, and to identify moves that blunder away more material (and perhaps reduce those more).
What is so far stopping me is my fear that this will erase valuable information from the hash table, which already contains many deeper searches for nodes in the QS tree, but with wrong bounds, so they will be searched again at d=0 and overwritten. This could be solved by not storing QS nodes in the hash (but this seems to slow Joker down). I could of course alter the hash key together with opening the window, so that searches with different windows for the same position map to different entries. Or have a different hash table for such 'pilot' QS nodes altogether (a 'QS cache' similar to an eval cache). For perfting, storing the leaf-node counts in the main hash is counter-productive, but storing them in a separate hash table small enough to fit in L2 (say 256KB) did cause a significant speedup.
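A minimal sketch of such a separate QS cache (assuming 64-bit Zobrist keys; the 256KB size and all names are illustrative, not Joker's actual tables):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stand-alone QS cache: 32K entries * 8 bytes = 256KB,
       small enough to stay in L2, so pilot-QS results never evict the
       deep entries in the main hash table. */
    #define QS_ENTRIES (1 << 15)

    typedef struct {
        uint32_t lock;       /* upper 32 bits of the Zobrist key */
        int16_t  score;
        int16_t  flags;      /* bound type: exact / lower / upper */
    } QsEntry;               /* 8 bytes */

    static QsEntry qsCache[QS_ENTRIES];

    static QsEntry *QsProbe(uint64_t key)
    {
        QsEntry *e = &qsCache[key & (QS_ENTRIES - 1)];
        return e->lock == (uint32_t)(key >> 32) ? e : NULL;
    }

    static void QsStore(uint64_t key, int score, int flags)
    {
        QsEntry *e = &qsCache[key & (QS_ENTRIES - 1)];
        e->lock  = (uint32_t)(key >> 32);
        e->score = (int16_t)score;
        e->flags = (int16_t)flags;
    }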
Michael Sherwin wrote:Because RomiChess has a very cheap evaluation function, calling the qsearch at internal nodes does not slow down the search by more than a few percent. Also, this way an evaluation is obtained for every node.
However, unless the qscore can be put to good use, it is a waste of time to retrieve it for internal nodes.
So far, I've not had any real success using this extra information.
Does anyone have any ideas?
Is the qscore based only on captures and queen promotions or also on checks in the first plies?
Uri wrote:Is the qscore based only on captures and queen promotions or also on checks in the first plies?
Just captures and promotions.
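For reference, the usual shape of such a qsearch (a generic sketch, not RomiChess source; GenerateCapturesAndPromotions() is a hypothetical helper):

    extern int  Evaluate(void);
    extern int  GenerateCapturesAndPromotions(int *moves);
    extern void MakeMove(int move);
    extern void TakeBack(void);

    int Quiesce(int alpha, int beta)
    {
        /* Stand pat: the static eval bounds the score from below,
           since the side to move may simply decline all captures. */
        int score = Evaluate();
        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;

        int moves[128];
        int n = GenerateCapturesAndPromotions(moves);   /* no checks */
        for (int i = 0; i < n; i++) {
            MakeMove(moves[i]);
            score = -Quiesce(-beta, -alpha);
            TakeBack();
            if (score >= beta)
                return score;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }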
hgm wrote:I am contemplating doing even a d=1 search with an enlarged or shifted window in every alpha node as soon as the depth reaches the threshold for NMR (so that would mean every node below it is re-searched at QS level). This is to extract moves that seem to win material or improve position (and exempt those from reduction), even if it is not enough to beat alpha, and to identify moves that blunder away more material (and perhaps reduce those more).
What is so far stopping me is my fear that this will erase valuable information from the hash table, which already contains many deeper searches for nodes in the QS tree, but with wrong bounds, so they will be searched again at d=0 and overwritten. This could be solved by not storing QS nodes in the hash (but this seems to slow Joker down). I could of course alter the hash key together with opening the window, so that searches with different windows for the same position map to different entries. Or have a different hash table for such 'pilot' QS nodes altogether (a 'QS cache' similar to an eval cache). For perfting, storing the leaf-node counts in the main hash is counter-productive, but storing them in a separate hash table small enough to fit in L2 (say 256KB) did cause a significant speedup.
    if (depth > 3) {
        for (each remaining move) {
            MakeMove();
            /* pre-search at reduced depth, window widened by a pawn (100) on each side */
            node->score = -Search(-beta - 100, -alpha + 100, depth - (iDepth / 4 + 3), extendBy);
            TakeBack();
        }
    }
Then in LMR, any move whose node->score >= alpha is not reduced.
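Reconstructed as a small helper (hypothetical names; the late-move threshold of four searched moves is an illustrative choice, not Sherwin's code):

    /* Hypothetical reconstruction of the gating: reduce a late move only
       if its pilot score from the widened pre-search stayed below alpha. */
    static int LmrReduction(int depth, int movesSearched, int pilotScore, int alpha)
    {
        if (depth > 3 && movesSearched >= 4 && pilotScore < alpha)
            return 1;    /* pilot search failed low: safe to reduce */
        return 0;        /* pilot reached alpha: keep full depth */
    }

The main move loop would then search each move at depth - 1 - LmrReduction(...).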
This is old news though, as I have posted about it in the past. And it only seems to work for my program, as no one else has reported any improvement with it.
I open up the window by one pawn on each side to get a more accurate score for each remaining move, for move ordering.
So yes, I do mean that if node->score >= alpha there is no LMR reduction for that move.
Note: alpha is not updated in this loop.
I thought about using the qsearch instead of SEE for move ordering, but I do not agree with the use of a wide search window. IMHO a zero-window search with bound (alpha - small delta) is better. If the qsearch discovers a score above alpha, we immediately re-search the current move deeper, without even generating the later moves. It is possible to use search-subtree size (as is commonly used for root move ordering) to sort the "bad" moves, but I feel that the order of moves below alpha is not important.
I still have no complete program, so I have not done any tests.
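A minimal sketch of that probe (assuming a fail-soft Quiesce(), so the returned score can carry information beyond the null window; delta and all names are illustrative):

    extern int Quiesce(int alpha, int beta);   /* assumed fail-soft */
    extern int Search(int alpha, int beta, int depth);

    /* Called with the candidate move already made: tests the child with
       a null window at bound (alpha - delta), seen from the parent. */
    int ProbeMove(int alpha, int beta, int depth)
    {
        const int delta = 50;                  /* small margin below alpha */
        int bound = alpha - delta;
        int probe = -Quiesce(-bound, -(bound - 1));
        if (probe > alpha)                     /* qsearch sees a score above alpha:
                                                  re-search deeper at once, before
                                                  generating any later moves */
            return -Search(-beta, -alpha, depth - 1);
        return probe;                          /* otherwise keep for move ordering */
    }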