matthewlai wrote: ↑Fri Dec 07, 2018 12:49 pm
During training, we do softmax sampling by visit count up to move 30. There is no value cutoff. Temperature is 1.
This is a rather important difference and will explain a lot about Leela Chess Zero's endgame problems.
Thanks for clarifying some of these things. The 0..1 vs -1..1 range thing is a bit funny. I interpreted the paper as 0..1 initially because that's what older MCTS papers used, then people pointed out that the AZ papers work on a -1..1 range and we changed things. And now it turns out the original version was what AZ had after all.
Yes, all values are initialized to loss value.
Were other settings ever considered, notably 0.5 or parent?
My sincere congratulations to the DeepMind team, because after half a century of the alpha-beta algorithm their new approach has revolutionized computer chess and created authentic works of art in its games against Stockfish.
Javier Ros
Associate Professor of Applied Mathematics at the University of Seville (Spain).
crem wrote: ↑Fri Dec 07, 2018 1:02 pm
Whether it's -1 to 1 or 0 to 1 is also important to Cpuct scaling (or C(s) in the latest version of the paper). Do c_base and c_init values assume that Q range is -1..1 or 0..1?
Apart from the range, how different is AZ's C(s) from what Lc0 uses?
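For reference, the C(s) in question, as reported in the AlphaZero Science paper, grows logarithmically with the parent's visit count. A minimal sketch (c_base = 19652 and c_init = 1.25 are the values the paper reports; whether they assume a 0..1 or -1..1 Q range is exactly the open question above):

```python
import math

def c_puct(parent_visits: int, c_base: float = 19652.0, c_init: float = 1.25) -> float:
    """Exploration coefficient C(s) from the AlphaZero paper:
    roughly constant at low visit counts, growing slowly
    (logarithmically) as the parent node accumulates visits."""
    return math.log((1 + parent_visits + c_base) / c_base) + c_init
```

At small visit counts this is essentially a constant c_init, so a fixed cpuct is a reasonable approximation until searches get deep.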
matthewlai wrote: ↑Fri Dec 07, 2018 12:49 pm
During training, we do softmax sampling by visit count up to move 30. There is no value cutoff. Temperature is 1.
This is a rather important difference and will explain a lot about Leela Chess Zero's endgame problems.
One of Leela's problems is thinking that theoretically drawn endgames can be won. This happens because during training there is an intentionally non-zero chance of "blundering", and in such endgames a blunder will eventually let the side with the advantage win.
The blundering was implemented for the whole game because the paper seemed to say AZ works like that, but it has now been clarified that it was actually only done during the first 30 moves.
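The sampling scheme described above can be sketched as follows (a hypothetical helper, not Lc0's actual code): with temperature 1, softmax sampling over visit counts reduces to picking a move with probability proportional to its visit count, and past the move cutoff the most-visited move is played deterministically.

```python
import random

def pick_move(visit_counts, move_number, temperature_cutoff=30):
    """Select a root move from MCTS visit counts.
    Before the cutoff: sample proportionally to visit counts (temperature 1).
    After the cutoff: play the most-visited move."""
    if move_number <= temperature_cutoff:
        total = sum(visit_counts.values())
        r = random.uniform(0, total)
        for move, n in visit_counts.items():
            r -= n
            if r <= 0:
                return move
        return move  # numerical edge case: fall back to the last move
    return max(visit_counts, key=visit_counts.get)
```

The key point for the endgame discussion is the `move_number <= temperature_cutoff` branch: after move 30 there is no randomness at all, so late-game "blunders" never enter the training data.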
Jouni wrote: ↑Fri Dec 07, 2018 2:00 pm
So far I have only looked at the TCEC opening games. A0 sometimes seems to play like a patzer and loses in 22 moves to an outdated SF.
Gian-Carlo Pascutto wrote: ↑Fri Dec 07, 2018 8:48 pm
Were other settings ever considered, notably 0.5 or parent?
Yes, and 0 seems to work best. The assumption is that most positions have one, or at most a few, good moves; all other moves are akin to passing or worse, and in most equal-ish positions passing will give the opponent a big advantage.
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
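What "initialized to loss" means in PUCT selection can be sketched as below (names and the c_puct constant are hypothetical; Q is taken here in -1..1 from the side to move's perspective, so an unvisited child starts at -1, the loss value):

```python
import math

FPU_LOSS = -1.0  # first-play urgency: unvisited children are scored as losing

def select_child(children, parent_visits, c_puct=1.5):
    """PUCT child selection with loss-valued first-play urgency.
    children: list of dicts with prior P, visit count N, total value W."""
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else FPU_LOSS
        u = c_puct * ch["P"] * math.sqrt(parent_visits) / (1 + ch["N"])
        return q + u
    return max(children, key=score)
```

The effect is that an unvisited move is only expanded once its prior is large enough for the exploration term to overcome the loss-valued Q, which matches the "most moves are akin to passing" assumption above.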
Gian-Carlo Pascutto wrote: ↑Fri Dec 07, 2018 8:48 pm
Were other settings ever considered, notably 0.5 or parent?
Yes, and 0 seems to work best. The assumption is that most positions have one, or at most a few, good moves; all other moves are akin to passing or worse, and in most equal-ish positions passing will give the opponent a big advantage.
I think this ends up explaining why FPU reduction, as implemented by both LZ and lc0, works, though.
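For contrast, FPU reduction gives an unvisited child the parent's value minus a penalty, rather than a flat loss. A hedged sketch (the exact form varies between versions; this uses parent Q minus a constant times the square root of the already-visited policy mass, one scheme Lc0 has used, with a hypothetical default constant):

```python
import math

def fpu_reduced_q(parent_q, visited_policy_mass, fpu_reduction=1.2):
    """First-play urgency under FPU reduction: start from the parent's Q
    and subtract a penalty that grows as more of the policy mass at this
    node has already been explored. visited_policy_mass is the sum of the
    priors of the children that have been visited at least once (0..1)."""
    return parent_q - fpu_reduction * math.sqrt(visited_policy_mass)
```

Early on (little policy mass visited) an unvisited move inherits the parent's evaluation, and the penalty pushes it toward a loss-like value as the good moves get explored, which is why it ends up behaving much like the loss initialization described above.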