Re: Using Freeware AI and Dynamically Generated Endgame Tablebases
Posted: Sat Mar 28, 2020 5:15 pm
Thanks to Dann Corbit, H.G.Muller, and noobpwnftw for your responses.
Part of the challenge for me is restricting the software to freeware. Therefore, Freezer Chess, which costs $80, is not an option. I will definitely examine FinalGen.
The remainder of this response is intended to provoke discussion.
Based on the tablebase-oriented responses that this query has received, it doesn't seem that there is general interest in using Leela Chess Zero (rather than a different engine) to discover the truth about a position. This surprises me. However, it might be explained by noting that this query was posted to the "Programming and Technical Discussions" sub-forum rather than the "General Topics" sub-forum.
Anyway, suppose that dynamic (i.e. on-the-fly) tablebases are excluded, and Leela is restricted to standard 6-man tablebases.
This supposedly means, in effect, that the strategy is restricted to a single iteration. As someone totally ignorant of the power of AI in this scenario, I would have thought that having Leela play 1000 games against itself from a specific position, learning as it goes, would uncover insights generally unavailable to other engines and (at first sight anyway) to humans. Naturally, it would take experience to find a reasonable compromise between the number of games played and the time control for each game. One option is to double the time control only for every 10th game.
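One reading of that schedule, sketched below under the assumption that "double the time control only for every 10th game" means games 10, 20, 30, and so on are played at twice the base time (the function name and default budget are hypothetical):

```python
def time_control_seconds(game_index: int, base_seconds: float = 60.0) -> float:
    """Return the per-game time budget for a 1-indexed game number.

    Every 10th game (10, 20, 30, ...) gets double the base time;
    all other games use the base time control.
    """
    if game_index % 10 == 0:
        return 2 * base_seconds
    return base_seconds

# Games 1-9 run at 60s, game 10 at 120s, game 11 back at 60s.
print(time_control_seconds(9), time_control_seconds(10), time_control_seconds(11))
```

The occasional longer game would, in principle, let the deeper searches correct systematic errors picked up during the faster games.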
I said "supposedly" because the AI-learning log of [result + moves for each game] could be used to programmatically build a database tree. You might then be able to identify (at a glance) when Leela "learned something about the position" and (for example) the white side started to consistently win rather than draw.
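A minimal sketch of such a database tree, assuming a log of (result, moves) pairs; the log format, move encoding, and scoring convention (1.0 = White win, 0.5 = draw, 0.0 = White loss) are my own assumptions, not Leela's actual output format:

```python
from collections import defaultdict

def build_tree(games):
    """Build per-position statistics from a self-play log.

    games: iterable of (result, moves) pairs, where result is White's
    score (1.0 / 0.5 / 0.0) and moves is a list of move strings.
    Returns a dict mapping each move sequence (as a tuple) to
    [white_score_sum, game_count].
    """
    stats = defaultdict(lambda: [0.0, 0])
    for result, moves in games:
        path = ()
        for move in moves:
            path = path + (move,)
            stats[path][0] += result
            stats[path][1] += 1
    return stats

def white_score(stats, path):
    """Average White score over all games passing through this position."""
    total, n = stats[tuple(path)]
    return total / n if n else None

# Three toy games: two White wins and a draw.
games = [
    (1.0, ["e4", "e5", "Nf3"]),
    (0.5, ["e4", "e5", "Bc4"]),
    (1.0, ["e4", "c5", "Nf3"]),
]
stats = build_tree(games)
print(white_score(stats, ["e4"]))        # 2.5 / 3
print(white_score(stats, ["e4", "e5"]))  # 1.5 / 2
```

Scanning a node's score over successive games would show whether Leela's treatment of that position shifted from drawing toward winning.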
You could then start a second iteration that begins at the critical position rather than the original position. Hopefully, during the second iteration, Leela would have access to everything it learned in the first iteration. Suppose that during this second iteration a point is reached where (for example) the white side wins over 98% of the games, and where each white [draw or loss] is followed by many consecutive wins for white.
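The stopping criterion in that paragraph could be flagged automatically. Here is one possible sketch: find the first game after which White's average score over a trailing window exceeds a threshold. The window size and threshold are illustrative choices, not anything Leela provides:

```python
def turning_point(results, window=50, threshold=0.98):
    """Find where White's results become dominant.

    results: White's scores (1.0 / 0.5 / 0.0) in playing order.
    Returns the 0-based index of the first game completing a trailing
    window whose mean White score strictly exceeds threshold, or None.
    """
    for i in range(window - 1, len(results)):
        window_scores = results[i - window + 1 : i + 1]
        if sum(window_scores) / window > threshold:
            return i
    return None

# 60 draws followed by 60 White wins: the flag appears once wins
# dominate the trailing window.
results = [0.5] * 60 + [1.0] * 60
print(turning_point(results))  # 108
```

A stricter variant could additionally require that every draw or loss inside the window be followed by a long run of wins, as described above.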
Hopefully, you could then instruct Leela (on both the white and black sides) to treat the critical position as an automatic win, thus "somewhat" replacing the functionality of the dynamic tablebases. You could then start a third iteration in which Leela plays from the original position with no prior learning (i.e. from scratch) except for the knowledge that (for example) the critical position is a win.
If this is viable, then it seems as if the power of AI-learning without the use of dynamic tablebases has been significantly unlocked. I am naively wondering whether chess grandmasters are underestimating the power of AI-learning over standard chess engines, especially with respect to searching for opening theory novelties. Alternatively, perhaps all chess-AI-learning worshippers are keeping quiet.
I don't see any reason why you couldn't select a position that arises after (for example) 10 moves have been played by each side and use it as the initial starting position. Then you could have Leela play itself from this position for 1000 games (or more?).
As described above, you might find a critical subsequent position by using the AI-learning log of [result + moves for each game] to programmatically build the database tree. This might allow you to be prepared for any of your opponent's plausible responses.
I am also wondering if AI-learning represents the future of chess grandmasters' search for theoretical novelties.