Stockfish dev. | depth=23 | 30 positions x 4 | average filling per position 64M:

Hash    Time
1M      381s
32M     348s
1024M   361s
Komodo 10.3 | depth=22 | 30 positions x 4 | average filling per position 48M:

Hash    Time
1M      388s
32M     333s
1024M   356s
Two observations: first, too much Hash seems to harm. Second, Komodo improves by 17% from 1M (too little) to 32M (adequate), while Stockfish dev. improves by only 9%.
In fact, the minimum Hash size for these engines is 4MB, so the tables actually look like this:
Stockfish dev. | depth=23 | 30 positions x 4 | average filling per position 64M:

Hash    Time
4M      381s
32M     348s
1024M   361s
Komodo 10.3 | depth=22 | 30 positions x 4 | average filling per position 48M:

Hash    Time
4M      388s
32M     333s
1024M   356s
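For anyone who wants to reproduce this kind of measurement, here is a minimal sketch using the python-chess package (my assumption as to tooling; the engine path, depth and position list below are placeholders, not the actual test set):

[code]
# Time a fixed-depth search on each position for several Hash settings,
# as in the tables above. Requires python-chess and a UCI engine binary.
import time

import chess
import chess.engine

ENGINE_PATH = "stockfish"        # placeholder: path to the engine binary
HASH_SIZES_MB = [4, 32, 1024]    # Hash settings to compare
DEPTH = 23                       # fixed search depth
FENS = [chess.STARTING_FEN]      # placeholder: the 30 opening positions

for hash_mb in HASH_SIZES_MB:
    total = 0.0
    for fen in FENS:
        # Fresh engine per position so every search starts with an empty table.
        engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
        engine.configure({"Hash": hash_mb})
        start = time.perf_counter()
        engine.analyse(chess.Board(fen), chess.engine.Limit(depth=DEPTH))
        total += time.perf_counter() - start
        engine.quit()
    print(f"Hash {hash_mb}M: {total:.0f}s")
[/code]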
What does 'average filling per position' mean? The engines seem to benefit less than I would expect in going from 4MB to 32MB. So it could be that the table already ceases to be overloaded before you reach 32MB.
Having unnecessarily large hash tables can hurt raw speed by increasing the number of TLB misses. You should be able to see this in the nps. So it would also be interesting to quote the nodes needed to reach the desired depth.
hgm wrote:What does 'average filling per position' mean?
Having unnecessarily large hash tables can hurt raw speed by increasing the number of TLB misses. You should be able to see this in the nps. So it would also be interesting to quote the nodes needed to reach the desired depth.
I might do this. "Average filling per position" means the maximum amount of Hash each engine uses during the fixed-depth search on those positions. The positions are all openings, and the maximum Hash used to the given depth is similar across them, on average a little above 32M.
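If the nodes and nps are to be quoted as well, a UCI engine already reports them in its info output, along with hashfull, which seems the natural way to estimate the filling. A small python-chess sketch (engine path again a placeholder; which fields are present depends on the engine):

[code]
# Pull nodes, nps and hashfull out of the UCI info, so tree size, raw speed
# and table filling can be reported alongside the time.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")   # placeholder path
engine.configure({"Hash": 32})

info = engine.analyse(chess.Board(), chess.engine.Limit(depth=23))
print("nodes   :", info.get("nodes"))     # nodes needed to reach the depth
print("nps     :", info.get("nps"))       # raw speed; TLB effects show up here
print("hashfull:", info.get("hashfull"))  # permille of the table in use
print("time    :", info.get("time"))      # search time as reported by the engine
engine.quit()
[/code]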
So that would be the hash filling in the case of an asymptotically large table?
It can be expected that there is very little benefit from remembering the leaf nodes of the tree, and the usual replacement schemes would overwrite those first, trying to protect the results from deeper searches. So if the table gets smaller than the number of different positions in the tree, the leaf nodes will start to compete with each other for space, and will only live in the table for a fraction of the time. But the deeper nodes will hardly be affected. And many transpositions (of moves close to the horizon) that reach the leaf nodes will still occur within their survival time. And for those that occur afterwards it is not very expensive to redo the evaluation of a single node.
IIRC in my measurements the search time only started to go up once the overload factor exceeded 10.
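To make the replacement argument concrete, here is a toy depth-preferred table in Python; it only illustrates the principle described above, not how any particular engine implements it:

[code]
# Toy depth-preferred replacement: an entry is only overwritten by a result
# of at least the same draft, so deep results survive while near-leaf entries
# compete for the remaining slots. Names and sizes are illustrative only.
class TTEntry:
    def __init__(self, key, depth, score):
        self.key, self.depth, self.score = key, depth, score

class TinyTT:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots

    def store(self, key, depth, score):
        i = key % len(self.slots)
        old = self.slots[i]
        # Keep the deeper result when two positions collide on the same slot.
        if old is None or depth >= old.depth:
            self.slots[i] = TTEntry(key, depth, score)

    def probe(self, key):
        entry = self.slots[key % len(self.slots)]
        return entry if entry is not None and entry.key == key else None
[/code]

Real engines refine this with buckets, aging and always-replace slots, but the effect is the same: once the table is overloaded, it is mostly the shallow, near-leaf entries that get recycled.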
hgm wrote:So that would be the hash filling in the case of an asymptotically large table?
Yes
hgm wrote:It can be expected that there is very little benefit from remembering the leaf nodes of the tree, and the usual replacement schemes would overwrite those first, trying to protect the results from deeper searches. So if the table gets smaller than the number of different positions in the tree, the leaf nodes will start to compete with each other for space, and will only live in the table for a fraction of the time. But the deeper nodes will hardly be affected. And many transpositions (of moves close to the horizon) that reach the leaf nodes will still occur within their survival time. And for those that occur afterwards it is not very expensive to redo the evaluation of a single node.
IIRC in my measurements the search time only started to go up once the overload factor exceeded 10.
I didn't know that; it might be worth testing at a factor of 4.
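If one takes the overload factor to be roughly the asymptotic filling divided by the table size (an assumption; hgm defines it in terms of positions versus entries), a factor of 4 would correspond here to tables of about 16MB for Stockfish and 12MB for Komodo:

[code]
# Rough arithmetic only: table size that gives a target overload factor,
# using the filling figures quoted earlier in the thread.
def table_size_for_factor(filling_mb, factor):
    return filling_mb / factor

print(table_size_for_factor(64, 4))   # Stockfish: ~16 MB table for factor 4
print(table_size_for_factor(48, 4))   # Komodo:   ~12 MB table for factor 4
[/code]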