Meaningful ageing of a hash table

peter
Posts: 3186
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: Meaningful ageing of a hash table

Post by peter »

Ovyron wrote: Wed Mar 04, 2020 10:14 am If there's a variation that makes the tree explode with so many lines that I can't check them all, I'd rather avoid it, so the question "what is the strongest you can play by keeping your analysis tree small and analyzing as few nodes as possible" is worth answering, because if you need a very large tree to find a move in a position, you probably messed up in a prior move leading up to it.
The question is how big is big and how small is small when it comes to analysis trees.
The boundaries you're bound to come mainly from the amount of RAM you can use for hash (or NN cache) and from how much of your tree the engine can keep in hash, for how many moves backward in depth and breadth, without losing the evaluations of the many end positions and the nodes leading to them.

So the limit on improving evals and move choice at the starting position by analysing a tree of interest backward is the hardware you have and the time you can spend on computing and on editing it yourself.

If no progress had been made there (meaningful ageing of a hash table, the name of this thread), there wouldn't have been any real progress in engine or GUI development either, at least by far not as much as there has been in my view.
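
To make the term concrete, here is a minimal sketch of what ageing usually means inside a transposition table: every entry remembers the search generation in which it was last useful, and entries left over from old searches are the first to be overwritten. The code is hypothetical, not taken from any of the engines mentioned in this thread, and leaves out everything else a real table stores (bound type, best move, multi-entry buckets):

```cpp
// Hypothetical sketch of an "aged" transposition table (C++17).
// Not from any particular engine; real tables also store bound type and
// best move, and group entries into buckets.
#include <cstdint>
#include <vector>

struct TTEntry {
    uint64_t key   = 0;   // Zobrist hash of the position
    int16_t  score = 0;   // stored evaluation
    uint8_t  depth = 0;   // search depth of the stored result
    uint8_t  age   = 0;   // generation in which the entry was last written or probed
};

class TranspositionTable {
    std::vector<TTEntry> table;
    uint8_t generation = 0;                  // bumped once per new search / new move
public:
    explicit TranspositionTable(std::size_t entries) : table(entries) {}

    void newSearch() { ++generation; }       // everything stored earlier becomes "old"

    void store(uint64_t key, int16_t score, uint8_t depth) {
        TTEntry& e = table[key % table.size()];
        // Replacement rule: overwrite stale entries first, otherwise keep the deeper result.
        bool stale = (e.age != generation);
        if (stale || e.key == key || depth >= e.depth)
            e = TTEntry{key, score, depth, generation};
    }

    const TTEntry* probe(uint64_t key) {
        TTEntry& e = table[key % table.size()];
        if (e.key != key) return nullptr;
        e.age = generation;                  // refresh: entries that are still useful survive longer
        return &e;
    }
};
```

Real engines refine the replacement rule with depth, bound type and distance from the root, but the age check is what keeps the table from staying clogged with entries from searches that finished long ago.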

I remember well the times when I went backward through a game with early versions of Fritz, Shredder and Crafty, much later with Rybka, Houdini and then Komodo (that last one has had a revival for me in the meantime; its hash ageing was always something special, and I exchanged many emails with Mark Lefler about it a few years ago; as of 13.3 storing the hash finally works with Fritz too, whereas for many versions you could only reload hash in Fritz but not store it there, so you had to store it in Shredder or Arena, for example), and now with many different SF branches and with NN engines. The progress made on that most important criterion is, to me, even greater than the increase in Elo.

Just one more time before I leave it for good again: the future of engine chess to me lies in engine learning, learning by hashing search trees and keeping the most important entries in hash (NN cache) on the way backward, learning by training of NNs, learning by automatic analysis of pre-selected trees as GUI features, learning from files built out of selected hash entries, and sooner or later maybe learning by NNUE too.

Of course learning by storage in databases is not only a result but also a basis of further learning, for engines and for humans too; so far it's more a result than a basis, but maybe that could change with algorithms and nets dealing with big data from big databases.

Learning by unguided self-play alone won't be the end of development, and if it turns out to be, it will also be the end of computer chess as a sport and a science of its own. Chess is a game invented and played by humans; if engine chess stops being useful for humans and their own chess playing, it becomes useless altogether.

Or to say it less dramatically: if there is no progress left for users to see and judge, other than by exploding numbers of eng-eng games played just to get the results outside the statistical error bars, it would be worth considering whether to stop investing ever more hardware time and electricity into echo-chamber "developments" whose only end is themselves, and just as much hardware time and electricity again for testing and distinguishing the versions at all, and worth considering restarting the investment of time in better and more useful use of what we already have.
Peter.
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Meaningful ageing of a hash table

Post by Ovyron »

peter wrote: Wed Mar 04, 2020 2:17 pm The question is how big is big and how small is small when it comes to analysis trees.
I have answers for those:

1. If your opponent plays a move you didn't expect, and it's stronger than anything you had, your tree is too small, as it should have covered this move.

2. If you've spent twice the time on this move that you spent on your previous move and you still don't have a clear path to follow, your tree is too big, because if you had chosen a different move in a previous position, you'd already have found a clear path to follow in some other variation with a smaller tree.

I don't believe in analyzing chess positions in isolation, "to find the truth" without any opponent, because then a superior player would have found a line better than yours, and that line is missing from your analysis, so you can't find the chess truth this way.
peter wrote: Wed Mar 04, 2020 2:17 pm The boundaries you're bound to come mainly from the amount of RAM you can use for hash
I've never used more than 128MB RAM for engines. Or more precisely, I've never found an advantage using more than this for engines.

If your analysis method is GOOD, then it should hold up even if you reload the engine or clear the hash before analyzing each node. All methods that rely on the engine remembering something will eventually fail, EVEN if the engine has learning (I have private versions of Stockfish 9 and Stockfish 10 with learning that acts as if the transposition table was infinite and they're useless if the analysis method isn't sound.)
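
As an illustration only (made-up code, not the actual private builds mentioned above), "learning" of this kind usually amounts to a position-keyed store that is written to disk and reloaded between sessions, which is why it behaves as if the transposition table never forgot anything:

```cpp
// Illustrative sketch of a persistent "learning file" (C++17).
// Hypothetical names and file format; not the code of any real engine.
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_map>
#include <utility>

struct LearnedEntry {
    int16_t score;   // best evaluation seen for the position
    uint8_t depth;   // depth at which it was obtained
};

class LearningStore {
    std::unordered_map<uint64_t, LearnedEntry> entries;  // keyed by Zobrist hash
    std::string path;
public:
    explicit LearningStore(std::string file) : path(std::move(file)) {
        std::ifstream in(path, std::ios::binary);         // reload previous sessions, if any
        uint64_t key; LearnedEntry e;
        while (in.read(reinterpret_cast<char*>(&key), sizeof key) &&
               in.read(reinterpret_cast<char*>(&e), sizeof e))
            entries[key] = e;
    }

    // Keep only the deepest result seen for each position.
    void update(uint64_t key, int16_t score, uint8_t depth) {
        auto it = entries.find(key);
        if (it == entries.end() || depth >= it->second.depth)
            entries[key] = LearnedEntry{score, depth};
    }

    const LearnedEntry* lookup(uint64_t key) const {
        auto it = entries.find(key);
        return it == entries.end() ? nullptr : &it->second;
    }

    void save() const {                                    // written at exit, read back next time
        std::ofstream out(path, std::ios::binary | std::ios::trunc);
        for (const auto& [key, e] : entries) {
            out.write(reinterpret_cast<const char*>(&key), sizeof key);
            out.write(reinterpret_cast<const char*>(&e), sizeof e);
        }
    }
};
```

The point of the sketch is only the persistence; whether such a store actually helps is exactly the question of whether the analysis method around it is sound.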

I recommend that people purposely restrict themselves to less RAM and fewer cores and try to find the best moves they can. Faster hardware has made people lazy and reliant on poor methods whose defects stay hidden because the engine just reaches high depth, and that masks what could be improved.
peter
Posts: 3186
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: Meaningful ageing of a hash table

Post by peter »

Ovyron wrote: Wed Mar 04, 2020 11:03 pm I recommend that people purposely restrict themselves to less RAM and fewer cores and try to find the best moves they can. Faster hardware has made people lazy and reliant on poor methods whose defects stay hidden because the engine just reaches high depth, and that masks what could be improved.
Even if you are so much better than your engines are, I don't see how better hardware would make you lazy. I'd rather think it should be more motivation and more of a challenge for your own ideas and trials, seeing better responses come faster and engines better able to get your points quickly if your lines really are better than the engines' own.

I thought we were talking about engine chess, hash tables, hardware and software, and the development of those.
If you were mainly talking about getting better as a human player, I still wouldn't be afraid of having better hardware and software as tools that help me play better chess myself either, especially in chess played with the use of engines against opponents who use them too, who have great experience and success in that special kind of chess and for sure won't restrict themselves to fewer cores and less RAM.
:)
I'm outa here again, thanks for reading so much of my thoughts and for giving yours. Of course each and every user has his own ways, main thing is to have fun with it.
Peter.
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Meaningful ageing of a hash table

Post by Ovyron »

peter wrote: Thu Mar 05, 2020 12:06 am Even if you are so much better than your engines are, I don't see how better hardware would make you lazy.
It's when they suggest moves that one can't improve. Stockfish 8 was the first engine that would do this more often than not, killing the fun, because if the unassisted engine already finds the best move, then all the time you spend building the tree, extending the lines, looking for alternatives, checking with other engines, etc. is wasted time because you already had the best move!

One becomes lazy because as engines get stronger they find the moves by themselves, and human input becomes less and less significant over time. Faster hardware is nothing but having software from the future on slower hardware, so there are just more positions where the engine already outputs the best move and any work done on them is useless. I've tried to guide the game into positions where engines don't output the best moves, but it has backfired (this shouldn't be taken into consideration when fudging the engine's score).

The laziness doesn't come from wanting to work less but from needing to work less, so restricting resources helps one discover better analysis methods, since the engines then don't come up with the best moves by themselves, and using those methods with unrestricted hardware improves on what the engine suggests whenever its suggestion isn't already the best. A very useful skill is being able to tell whether the suggested move is already the best or can still be improved; otherwise any attempt to improve it is futile.
peter wrote: Thu Mar 05, 2020 12:06 amI'm outa here again, thanks for reading so much of my thoughts and for giving yours. Of course each and every user has his own ways, main thing is to have fun with it.
I appreciate being able to have these discussions with someone like you. Recently these kinds of discussions on Talkchess have devolved into finding irrelevant fastest mates in positions where a slow mate suffices, or finding the only winning line in tablebase positions where it's prohibited to use the tablebases or check them online. Discussing the future development and potential improvements of chess engines, NNs or automated GUI analysis methods was a nice change of pace.