Koistinen wrote: ↑Mon Sep 25, 2023 10:23 pm
syzygy wrote: ↑Mon Sep 25, 2023 9:09 pm
Koistinen wrote: ↑Mon Sep 25, 2023 8:49 pm
syzygy wrote: ↑Fri Sep 22, 2023 8:37 pm
Koistinen wrote: ↑Wed Sep 20, 2023 8:03 pm
syzygy wrote: ↑Tue Sep 19, 2023 11:15 pm
Also it takes a LOT of time to perform a "short" 10-ply search for every position of a 7- or 8-piece table.
You would only need to do the search individually when you want the value for a specific position.
I am talking about the generation phase.
That is what makes it fun to do. Seems impossible at first but isn't if you look at it from the correct perspective.
???
The point is that spending 1ms on each position during generation of a pawnless 8-piece TB results in a generation time of 648 years. This can be shortened to a bit more than 10 years on a 64-core machine, but that is still a long time. And there are many tables to compute.
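(Checking the arithmetic, assuming roughly 2 × 10^13 indexed positions in a single pawnless 8-piece table: 2 × 10^13 positions × 1 ms = 2 × 10^10 s, about 650 years single-threaded; and 648 / 64 is indeed a bit more than 10 years on 64 cores.)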
10-ply search would not be practical if the leaf nodes take 1ms to evaluate.
A full 10-ply search (without the typical reductions that engines like SF perform) will easily take more than 1ms, but perhaps in positions with 8 pieces or fewer it is not too bad. But my point is just that, for a realistic tablebase generator that can be used to generate 7- or 8-piece tables, you really cannot afford to spend much time on "predicting" best moves or win/draw/loss, etc., because it has to be done for each and every position.
Also, you don't want to have to repeat this exercise when decompressing blocks of the compressed table. I think this can be solved by using the predictor to rank moves or evaluations and then compress the ranks of the correct moves or evaluations.
So for a WDL table you would need a predictor that converts a position into one of WDL/WLD/DWL/DLW/LWD/LDW, with the first letter giving the most likely correct evaluation, the second letter the second-most likely correct evaluation, and the last letter the least likely correct evaluation. Then you store 0 if the most likely correct evaluation is correct, and 1 or 2 otherwise.
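A minimal sketch of that rank-coding scheme, assuming a deterministic predict_ranking() helper (a hypothetical name; the toy stand-in below just guesses draw first) that orders W/D/L from most to least likely:

Code: Select all

#include <array>
#include <cstdint>

enum Value : uint8_t { LOSS, DRAW, WIN };

// Hypothetical predictor: returns W/D/L ordered from most likely to
// least likely for a position index. Toy stand-in: always guess draw
// first; a real one would use a cheap static evaluation.
std::array<Value, 3> predict_ranking(uint64_t /*position_index*/) {
    return {DRAW, WIN, LOSS};
}

// Encode: store the rank (0, 1 or 2) of the true value within the
// predicted ordering. A good predictor makes rank 0 dominate, so the
// rank stream compresses far better than the raw WDL stream.
uint8_t encode(uint64_t idx, Value true_value) {
    const auto order = predict_ranking(idx);
    for (uint8_t rank = 0; rank < 3; ++rank)
        if (order[rank] == true_value) return rank;
    return 0;  // unreachable: true_value is always in the ordering
}

// Decode: rerun the same deterministic predictor and index into it.
Value decode(uint64_t idx, uint8_t rank) {
    return predict_ranking(idx)[rank];
}

Note that encode and decode share predict_ranking, which is the point being made here: the predictor must be cheap, because it runs again for every position of a block at decompression time.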
If instead you try to be smarter and calculate the probabilities of W, D and L, and then use arithmetic coding to compress them, you have to redo that whole exercise for each position in a compressed block when decompressing the block.
For this I am assuming the evaluation function itself is much quicker, and that it is the full 10-ply search that takes 1ms.
The solution is to use the same algorithm as for generating tablebases in the first place:
1. Compute the evaluation function for each position and store the result.
2. For each ply depth from 1 to 10, compute and store the new result using the previous result, so you only need to search 1 ply deep in each iteration (see the sketch after this list).
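A minimal sketch of that two-pass idea, assuming hypothetical evaluate() and successors() helpers over some position indexing scheme (the toy stubs below exist only so the sketch compiles):

Code: Select all

#include <algorithm>
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

// Toy stubs; a real generator would supply a cheap static evaluation
// and a move generator over its own position index.
int evaluate(uint64_t idx) { return static_cast<int>(idx % 3) - 1; }
std::vector<uint64_t> successors(uint64_t idx) {
    if (idx == 0) return {};   // terminal position: no moves
    return {idx - 1, idx / 2};
}

// Pass 0 stores the static evaluation of every position. Each of the
// passes 1..max_ply then backs the previous pass's values up one ply,
// so after 10 passes every position holds its 10-ply minimax value,
// yet no position is ever expanded more than one ply per pass.
std::vector<int> search_all(uint64_t num_positions, int max_ply = 10) {
    std::vector<int> prev(num_positions), cur(num_positions);
    for (uint64_t i = 0; i < num_positions; ++i)
        prev[i] = evaluate(i);
    for (int ply = 1; ply <= max_ply; ++ply) {
        for (uint64_t i = 0; i < num_positions; ++i) {
            const auto moves = successors(i);
            if (moves.empty()) { cur[i] = prev[i]; continue; }
            int best = std::numeric_limits<int>::min();
            for (uint64_t s : moves)
                best = std::max(best, -prev[s]);  // negamax, one ply
            cur[i] = best;
        }
        std::swap(prev, cur);
    }
    return prev;
}

Note the working set: two full-table value arrays per pass, which at ~10^13 positions per table is itself a tablebase-sized object.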
This does not look like a very practical algorithm.