Jouni wrote: ↑Mon Jan 18, 2021 10:34 pm
One more. I have read that Lc0 needs a lot of memory to run well. But how do you increase memory? Even if I increase the hash to XX GB and the NN cache to a zillion, the system monitor always shows about 2 GB of GPU memory used and 15% of GPU power. The system monitor sucks, I guess?
While the code for lc0 may be complex, you can take a look at a0lite for a simple single-threaded Python engine.
https://github.com/dkappe/a0lite
Why does it need lots of memory? Let’s look at the different uses of memory:
1) NNCache: this is a cache of the neural network results for previously evaluated positions. It's not like a transposition table, but it's still useful, since evaluating a position on the GPU is expensive and the same position often comes up again in the search.
2) Search tree: unlike a depth-first alpha-beta (AB) search, where you essentially only have to keep the current branch in memory, MCTS samples the search tree and keeps all of it in memory. As you make moves, you can chop off the old parts of the tree and reclaim memory. Unlike an AB engine's hash, there's nothing to configure. If you let lc0 run forever, it will eat up all your RAM.
3) GPU memory: contains the network and the hundreds or thousands of positions to be evaluated and their results. The GPU is mostly doing linear-algebra computations. Positions and results are moved in and out quickly to make room for the next batch, so GPU memory usage stays pretty much constant no matter how long the search runs.
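To make 1) concrete, here's a minimal sketch of an NN cache as a bounded LRU dictionary keyed by a position hash. The names and the eviction policy are my own illustration, not lc0's actual implementation:

```python
from collections import OrderedDict

class NNCache:
    """Tiny LRU cache for neural-network evaluations.
    Hypothetical sketch; lc0's real cache is more sophisticated."""

    def __init__(self, capacity=200_000):
        self.capacity = capacity
        self._store = OrderedDict()  # position_hash -> (policy, value)

    def lookup(self, position_hash):
        result = self._store.get(position_hash)
        if result is not None:
            self._store.move_to_end(position_hash)  # mark as recently used
        return result

    def store(self, position_hash, policy, value):
        self._store[position_hash] = (policy, value)
        self._store.move_to_end(position_hash)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

On a cache hit the expensive GPU evaluation is skipped entirely, which is why a bigger NN cache can help at long time controls.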
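The "chop off old parts of the tree" step from 2) can be sketched like this. The node fields and function names are assumptions for illustration; a0lite and lc0 structure this differently:

```python
class Node:
    """One node of the MCTS tree; the whole tree stays in memory."""
    def __init__(self, move=None):
        self.move = move        # move that led to this node
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}      # move -> Node

def advance_root(root, played_move):
    """After a move is played on the board, keep only that move's
    subtree and drop everything else so it can be freed."""
    new_root = root.children.get(played_move, Node(played_move))
    root.children.clear()  # siblings become unreachable -> reclaimed
    return new_root
```

Within a single search there is no pruning like this, which is why the tree (and RAM usage) keeps growing the longer you let it think.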
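And the batching behavior in 3) is why GPU memory stays flat: only one batch of positions is resident at a time. A rough sketch, where `evaluate` stands in for a GPU forward pass (hypothetical helper, not lc0's API):

```python
def run_batches(evaluate, positions, batch_size=256):
    """Feed positions to the evaluator in fixed-size batches, so only
    one batch's worth of inputs and outputs occupies device memory
    at any moment, alongside the (constant-size) network weights."""
    results = []
    for i in range(0, len(positions), batch_size):
        batch = positions[i:i + batch_size]
        results.extend(evaluate(batch))  # old batch is replaced by the next
    return results
```

So a bigger search doesn't need more GPU memory, just more batches; that matches the flat ~2 GB the system monitor reports.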