Too much Hash harms?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Zenmastur
Posts: 919
Joined: Sat May 31, 2014 8:28 am

Re: Too much Hash harms?

Post by Zenmastur »

syzygy wrote:
Dirt wrote:
syzygy wrote:The only reason why bigger hash might not gain is that hash is already big enough (for a given test).
No. For an extreme example, hash bigger than main memory will probably hurt.
You removed my statement that it is the nps drop that hurts.

Hash bigger than main memory will lead to a very substantial nps drop and hurts for that reason and only for that reason.

Again: correct for nps drop by looking only at total nodes searched, and any disadvantage of bigger hash disappears. The one and only disadvantage of bigger hash is the nps drop.

Enabling large pages (i.e. make proper use of the hardware) significantly reduces the nps drop.
This is absolutely correct.

I would add that what most of these tests are measuring is hardware specific. Change the hardware and the results of the test change, sometimes dramatically; that is, the measurements are NOT solely a function of the software.

HGM is correct that the leaf nodes are almost worthless. A huge fraction of them will never be seen again except on the next iteration of the same branch of the tree; that is, most are highly local to that part of the tree. Whenever insufficient TT space is available, these entries will be overwritten before they can be of much use.

I believe that the cache entries in most programs are too small. They don't contain enough information for a replacement algorithm to replace only the least effective entry in a cache bucket. This isn't much of a problem if there is only one entry per cache bucket, but that is by far the least effective bucket size, so this only pertains to TTs with multiple entries per bucket.

The point of any replacement algorithm is to ensure that the entries that are kept provide the maximum reduction in time to depth. This, to some extent, is related to the size of the searched tree, which can be affected by cut-offs, bound changes, and move ordering.

Things like whether the position a cache entry represents is still reachable from the current board position, how long the entry has been in the cache, how many times it has been referenced, and how many times it has caused a cutoff or bound change or suggested the next move to search are all important when considering replacing an entry.

Making the TT entries large enough to hold such information, even if only for testing purposes, would seem to be a good first step toward measuring how well a replacement algorithm is performing. Once direct and detailed measurements of TT performance can be made, replacement algorithms can be compared directly. Once a replacement algorithm has been decided upon, only the data vital to improved TT performance need be retained.

For programs like Komodo, SF, and Houdini this would seem to be a no-brainer. For lesser programs that are just implementing a TT, or for those dissatisfied with their current TT performance, this would also seem like a good use of time.

Regards,

Forrest
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.
Kohflote
Posts: 219
Joined: Wed Sep 19, 2007 11:07 am
Location: Singapore

Re: Too much Hash harms?

Post by Kohflote »

I notice 2 things when I increase hash size (from 64MB to 2GB) for Komodo 8:

(1) it reaches a given ply more slowly
(2) it takes much longer to reach 100% CPU utilization.

Best regards,
Koh, Kah Huat
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Too much Hash harms?

Post by syzygy »

Kohflote wrote:I notice 2 things when I increase hash size (from 64MB to 2GB) for Komodo 8:

(1) it reaches a given ply more slowly
Perhaps because of (2), and otherwise because of the nps drop that all engines experience due to TLB misses. If you look at the total number of nodes required to reach a particular depth, you should see that this number decreases as hash is increased (up to a point, though the longer you search, the greater the benefit).
(2) it takes much longer to reach 100% CPU utilization
That is because of the way Komodo allocates its hash memory. The delay is not necessary; just page the whole thing in before the search starts.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Too much Hash harms?

Post by mjlef »

Some data using Stockfish 3 here:

http://www.fastgm.de/hash.html

Basically, a hash that is too small is a lot worse than one that is too big. Using the nps reported for Stockfish on the 64-thread server, I estimate the proper hash size for 1-minute games to be 64-128 GB. So DeepMind set the hash way too small.

It is also unclear to me whether they used 64 real cores or 64 hyperthreaded cores; the latter would also reduce Stockfish's strength (although it increases the nps). Assuming a NUMA machine, the NUMA bus can also get swamped with sending hash information to all the other nodes. This is not bad on a 2-node NUMA machine but can be a big limitation on machines with more NUMA nodes.

I hope they better document what they did in the longer paper they said they are preparing. My email to them asking about this has not been answered so far.

Mark
Werewolf
Posts: 1795
Joined: Thu Sep 18, 2008 10:24 pm

Re: Too much Hash harms?

Post by Werewolf »

mjlef wrote:Some data using Stockfish 3 here:

http://www.fastgm.de/hash.html

Basically, a hash that is too small is a lot worse than one that is too big. Using the nps reported for Stockfish on the 64-thread server, I estimate the proper hash size for 1-minute games to be 64-128 GB. So DeepMind set the hash way too small.

It is also unclear to me whether they used 64 real cores or 64 hyperthreaded cores; the latter would also reduce Stockfish's strength (although it increases the nps). Assuming a NUMA machine, the NUMA bus can also get swamped with sending hash information to all the other nodes. This is not bad on a 2-node NUMA machine but can be a big limitation on machines with more NUMA nodes.

I hope they better document what they did in the longer paper they said they are preparing. My email to them asking about this has not been answered so far.

Mark
Everything you're saying echoes my own thoughts on this.

It keeps SF somewhat handicapped whilst allowing the marketing blurb: "Stockfish on 64 threads at 80 Million nps beaten easily!" (or whatever it was)
Jouni
Posts: 3283
Joined: Wed Mar 08, 2006 8:15 pm

Re: Too much Hash harms?

Post by Jouni »

BTW, they are currently testing different hash sizes at the 180+1.8, 1-thread time control in the SF framework. And so far 256 MB is weaker than 64 MB :o .
Jouni
Damir
Posts: 2801
Joined: Mon Feb 11, 2008 3:53 pm
Location: Denmark
Full name: Damir Desevac

Re: Too much Hash harms?

Post by Damir »

The more hash you use, the slower the engine gets.... :) :)
Uri Blass
Posts: 10279
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Too much Hash harms?

Post by Uri Blass »

Jouni wrote:BTW, they are currently testing different hash sizes at the 180+1.8, 1-thread time control in the SF framework. And so far 256 MB is weaker than 64 MB :o .
I do not see where they test 256 hash against 64 hash.

They tested 64 hash against 64 hash or 256 hash against 256 hash.
Jouni
Posts: 3283
Joined: Wed Mar 08, 2006 8:15 pm

Re: Too much Hash harms?

Post by Jouni »

OK my error.
Jouni
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Too much Hash harms?

Post by CheckersGuy »

Were those tests done with LP on or off?