Sergei S. Markoff wrote:
Uri, I think we can do some research. First we need to analyse how often and under which conditions the null move goes bad. To do that we need sample data.
To do this, you should produce a sampling version of SF with the following modifications:
1. At a random node (say, when rand() % 10000 == 0) where the null search would prune, do a verification search at full depth instead, then store the result in a log file.
2. The log file should be plain CSV with (I suppose) the following columns:
— side to move;
— static eval;
— material signature;
— null-move reduction (plies);
— number of white passers;
— number of black passers;
— number of legal non-losing captures for white;
— number of legal non-losing captures for black;
— number of legal quiet moves with SEE >= 0 for white;
— number of legal quiet moves with SEE >= 0 for black;
— number of nodes searched inside null-move subtree;
— number of nodes searched inside full verification search;
— fail flag (1 if the verification search returned a score < beta, otherwise 0).
I think that's enough for a start. Let's collect a sample of 100 000 positions (to begin with); I will perform a clustering analysis on this set, and we'll see whether there is any significant cluster for which it's a good idea to disable the null move or add a verification search.
I agree that data like this can be productive, but I do not plan to write the code to generate all this data.