Sorry, but this reads like it was written by a news reporter sent in to discover some key bits of language and some blanket generalisations masquerading as truth, and then to generate some blanket assertions. Sounds good, but no cigar. Or "not even wrong", as they say. Where to begin?

Madeleine Birchfield wrote: ↑Thu Oct 08, 2020 8:06 pm
But the quality of Lc0 has nothing to do with the search, and everything to do with the fact that its evaluation function is a massive neural network. Smaller neural networks result in worse performance in Leela and lower quality games, while bigger neural networks result in better performance and higher quality games.
Lc0 still creates a huge search tree every time it moves, and it is well known that monte carlo tree search results in tactical blind spots for the engine, so it is in some regards an inferior search algorithm to alpha-beta. But it could handle large neural networks better than alpha-beta could.
This is nonsense:

"The quality of Lc0 has nothing to do with the search"

So what? Exactly the opposite is shown by NNUE.

"Smaller neural networks result in worse performance in Leela and lower quality games, while bigger neural networks result in better performance and higher quality games."

Facile.

"it is well known that monte carlo tree search results in tactical blind spots for the engine, so it is in some regards an inferior search algorithm to alpha-beta"

"Well known" is no argument. And the conclusion, hedged (doubly so, even), is made meaningless by being hedged. Both AB and MCTS have "blind spots", for differing reasons and with differing blind spots.

"But it could handle large neural networks better than alpha-beta could"

WTF are you talking about here?
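For what it's worth, the "differing blind spots" point is easy to make concrete with a toy sketch. Below is a minimal, self-contained Python illustration; every number in it is made up, and it is emphatically not Lc0's actual search (which adds FPU reduction, Dirichlet noise, batching, value backup from real subtrees, and so on). It only shows the mechanism: a full-width search at the root (alpha-beta prunes only lines it has proven irrelevant, so no root move is ever skipped) cannot overlook a winning tactic, whereas AlphaZero-style PUCT selection guided by a policy prior can starve a tactic to which the policy network assigns a tiny prior.

```python
import math

# Hypothetical one-ply position with 3 root moves (all values assumed for illustration).
# Move 2 is a narrow tactic: the best true value, but a tiny policy prior.
values = [0.40, 0.45, 0.95]   # assumed "true" evaluations of each move
priors = [0.60, 0.38, 0.02]   # assumed policy-network priors

def full_width_root(vals):
    # Alpha-beta is full-width: it examines every root move (pruning only
    # provably irrelevant continuations), so the tactic cannot be skipped.
    return max(range(len(vals)), key=lambda m: vals[m])

def puct_root(vals, pri, sims, c_puct=1.0):
    # AlphaZero/Lc0-style PUCT selection: Q(m) + c * P(m) * sqrt(N) / (1 + n(m)).
    n = [0] * len(vals)       # visit counts
    w = [0.0] * len(vals)     # accumulated value
    for _ in range(sims):
        total = sum(n)
        def score(m):
            q = w[m] / n[m] if n[m] else 0.0
            return q + c_puct * pri[m] * math.sqrt(total + 1) / (1 + n[m])
        m = max(range(len(vals)), key=score)
        n[m] += 1
        w[m] += vals[m]       # back up the (noise-free) evaluation
    return n

visits = puct_root(values, priors, sims=50)
# full_width_root(values) finds move 2; with these priors, 50 PUCT
# simulations never visit move 2 at all: its exploration term
# 0.02 * sqrt(total+1) stays far below the other moves' scores.
```

With these (assumed) numbers the prior-starved move would need on the order of a thousand simulations before PUCT even tries it once, which is the shape of a "tactical blind spot": not a flaw in the value backup, but in where the simulations are spent. Alpha-beta has its own, different failure modes (horizon effects, pruning errors), which is the point being made above.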