Stockfish version with hash saving capability

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

duncan
Posts: 12038
Joined: Mon Jul 07, 2008 10:50 pm

Re: Stockfish version with hash saving capability

Post by duncan »

Dann Corbit wrote: I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
looks exciting. great that you are doing this.
Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 4:49 pm
Location: Italy

Re: Stockfish version with hash saving capability

Post by Rodolfo Leoni »

Dann Corbit wrote: I had to fiddle with it to make it work in a way that I understood.

It loaded 4 million EPD rows in 22 seconds.
In the first game, it got a draw with Asmfish.

I will need a deeper pile of EPD for sure.

I saw an advantage develop for the hash-loaded version, but it slowly drained away. With only 4 million hash entries loaded, the table would be pretty sparse in reality.

I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
I think you could avoid entry-aging effects if you extend the "never clear hash" code to any TC level. It's just a matter of working on the code. Carpentry mode, of course. ;)
F.S.I. Chess Teacher
giovanni
Posts: 142
Joined: Wed Jul 08, 2015 12:30 pm

Re: Stockfish version with hash saving capability

Post by giovanni »

cdani wrote: As I think many people will not see it since it is buried in a long thread, I am publishing it here again.

I added to Stockfish the capability of saving the full hash to a file, to allow the user to recover a previous analysis session and continue it. It has the same added UCI options as in Andscacs.
The saved hash file will be the same size as the hash memory, so if you defined 4 GB of hash, the file will be 4 GB. Saving and loading such big files can take some time.

Source and executable:
www.andscacs.com/downloads/stockfish_x6 ... vehash.zip

To do this I have added 4 new UCI parameters:

option name NeverClearHash type check default false
option name HashFile type string default hash.hsh
option name SaveHashtoFile type button
option name LoadHashfromFile type button

You can set the NeverClearHash option to prevent the hash from being cleared by a Clear Hash or ucinewgame command.
The HashFile parameter is the full file name with path information. If you don't set a path, the file will be saved in the current folder. It defaults to hash.hsh.
To save the hash, stop the analysis and press the SaveHashtoFile button in the UCI options screen of the GUI.
To load the hash file, load the game you are interested in, load the engine without starting it, and press the LoadHashfromFile button in the UCI options screen of the GUI. Now you can start the analysis.
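In a raw UCI console the sequence would look roughly like this (a GUI's options screen sends the same setoption commands behind the scenes; the 4096 MB hash size is arbitrary and hash.hsh is just the default name):

Code: Select all

setoption name Hash value 4096
setoption name NeverClearHash value true
setoption name HashFile value hash.hsh
position startpos
setoption name LoadHashfromFile
go infinite
stop
setoption name SaveHashtoFile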

I have not tested it much. If you notice anything, just tell me.

The modified code is between
//dani170724
and
//enddani170724
Hi Daniel. Do you think it would be possible to add an option to save the hash table as EPD positions, or as a book in .bin format, for offline viewing?
Thanks in advance.
Giovanni
Dann Corbit
Posts: 12538
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Stockfish version with hash saving capability

Post by Dann Corbit »

Rodolfo Leoni wrote:
Dann Corbit wrote: I had to fiddle with it to make it work in a way that I understood.

It loaded 4 million EPD rows in 22 seconds.
In the first game, it got a draw with Asmfish.

I will need a deeper pile of EPD for sure.

I saw an advantage develop for the hash-loaded version, but it slowly drained away. With only 4 million hash entries loaded, the table would be pretty sparse in reality.

I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
I think you could avoid entry-aging effects if you extend the "never clear hash" code to any TC level. It's just a matter of working on the code. Carpentry mode, of course. ;)
I think that "never clear hash" is too simple to work right.
I believe that a second, pv-only hash will be necessary.

With 64 cores, at long time control, the hash fills up in a surprisingly short time. When that happens, you can see the performance degrade.

If you figure that one core can compute about a million nodes per second, then 64 cores can compute about 64 million per second (many of them duplicates, of course).
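To put rough numbers on that, assuming the 10-byte entries (three per 32-byte cluster) shown later in this thread:

Code: Select all

4 GB table   = 2^32 B / 32 B per cluster = 134M clusters x 3 = ~400M entries
64 cores     = 64 x ~1M nps              = ~64M nodes/second
fill time    = ~400M entries / 64M nps   = ~6 seconds of search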

If you simply do not replace the hash nodes at all, eventually the hash table will be filled with useless data. Consider hash nodes with 32 men in them when there are 16 men left on the board. What good are they doing? None with the current search, though they might have been hot stuff with a prior search.

Now, there is some magic in storing pv nodes in a special place because pv nodes are so rare. I think it might be worthwhile to store pv nodes by material signature.

There are a lot of things to think about with the stored computation strategy. I do not think we have sorted it out properly yet. But we will eventually.
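A hypothetical sketch of that last idea (all names here are invented for illustration, not taken from any engine): a side store of PV entries bucketed by material signature, so that everything belonging to a material configuration that can no longer occur can be dropped in one go:

Code: Select all

#include <cstdint>
#include <unordered_map>
#include <vector>

struct PvEntry { uint64_t posKey; uint16_t move; int16_t value; int8_t depth; };

// PV nodes are rare, so per-material buckets stay small, and a whole
// bucket can be erased once its material signature is unreachable.
std::unordered_map<uint64_t, std::vector<PvEntry>> pvByMaterial;

void prunePvStore(const std::vector<uint64_t>& reachableMaterialKeys) {
    std::unordered_map<uint64_t, std::vector<PvEntry>> kept;
    for (uint64_t k : reachableMaterialKeys) {
        auto it = pvByMaterial.find(k);
        if (it != pvByMaterial.end())
            kept.emplace(k, std::move(it->second));
    }
    pvByMaterial.swap(kept);
}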
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Stockfish version with hash saving capability

Post by cdani »

giovanni wrote: Hi Daniel. Do you think it would be possible to add an option to save the hash table as EPD positions, or as a book in .bin format, for offline viewing?
Thanks in advance.
Giovanni
Not really: the hash memory does not contain information about the positions themselves, only enough key information to match against the engine's current position.
Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 4:49 pm
Location: Italy

Re: Stockfish version with hash saving capability

Post by Rodolfo Leoni »

Dann Corbit wrote: I think that "never clear hash" is too simple to work right.
I believe that a second, pv-only hash will be necessary.

With 64 cores, at long time control, the hash fills up in a surprisingly short time. When that happens, you can see the performance degrade.

If you figure that one core can compute about a million nodes per second, then 64 cores can compute about 64 million per second (many of them duplicates, of course).

If you simply do not replace the hash nodes at all, eventually the hash table will be filled with useless data. Consider hash nodes with 32 men in them when there are 16 men left on the board. What good are they doing? None with the current search, though they might have been hot stuff with a prior search.

Now, there is some magic in storing pv nodes in a special place because pv nodes are so rare. I think it might be worthwhile to store pv nodes by material signature.

There are a lot of things to think about with the stored computation strategy. I do not think we have sorted it out properly yet. But we will eventually.
The option works fine for position analysis. It's performing excellently for my purposes. But I agree, it's a shortcut: you'll keep your EPD archive in the hash only until a deeper search depth is reached, and after that point you'll start losing everything.

SF PA GTB was great because PV nodes were kept in a separate hash. Critter worked fine too, except that it had no import-positions feature.

There's a project to update SF PA GTB with more modern and stronger code, but it has only just started. There's already progress: it's no longer "GTB" but has become a kind of "Stockfish PA Syzygy", and it seems to work fine. A lot still to do, anyway. :)
F.S.I. Chess Teacher
Dann Corbit
Posts: 12538
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Stockfish version with hash saving capability

Post by Dann Corbit »

If the hash keys were standardized like those for the polyglot book, it should be possible to collect some information from the hash, similar to the way a polyglot book can be scanned.
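For comparison, a polyglot .bin entry (a published, standardized format) stores the full 64-bit key, which is exactly what makes a book scannable in a way the hash entries shown below are not:

Code: Select all

#include <cstdint>

// Layout of one 16-byte polyglot book entry (fields are stored
// big-endian in the file). Because the whole Zobrist key is kept,
// any position can be hashed and looked up directly in the book.
struct PolyglotEntry {
    uint64_t key;    // full 64-bit Zobrist key of the position
    uint16_t move;   // encoded move
    uint16_t weight; // relative move weight
    uint32_t learn;  // learning data (usually 0)
};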
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: Stockfish version with hash saving capability

Post by cdani »

This is the info in an entry:

Code: Select all

  uint16_t key16;     // upper 16 bits of the 64-bit position key
  uint16_t move16;    // best move found
  int16_t  value16;   // search value
  int16_t  eval16;    // static evaluation
  uint8_t  genBound8; // generation (age) and bound type packed together
  int8_t   depth8;    // search depth
There is no way to recover the position from that.

Bear in mind that it is optimized for searching, not for the user's work.
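For anyone wondering why 16 key bits are enough for the engine: the cluster index is taken from the low bits of the full 64-bit key, and only the high 16 bits are stored per entry. A simplified sketch of the lookup (not verbatim Stockfish code, details vary by version; first_entry and ClusterSize are the helpers from tt.h):

Code: Select all

// The table only needs to recognise the current search position, so
// 16 high key bits on top of the implicit cluster index suffice; far
// too little information survives to reconstruct a chess position.
TTEntry* TranspositionTable::probe(const Key key, bool& found) const {

  TTEntry* const tte   = first_entry(key); // index from low key bits
  const uint16_t key16 = key >> 48;        // only these 16 bits are stored

  for (int i = 0; i < ClusterSize; ++i)
      if (tte[i].key16 == key16)
          return found = true, &tte[i];

  return found = false, tte;
}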
Zerbinati
Posts: 122
Joined: Mon Aug 18, 2014 7:12 pm
Location: Trento (Italy)

Re: Stockfish version with hash saving capability

Post by Zerbinati »

Has anyone managed to reload the hash correctly? It only seems to store a useful hash up to 2 GB. From 4 GB upward, a file of the corresponding size appears in Explorer and the memory moves in Task Manager, but the file is apparently empty of usable entries: positions that were already "found" before saving do not come back after loading.


Daniel's original code generates a warning when compiled with MinGW:

Code: Select all

-O3 -DIS_64BIT -msse -msse3 -mpopcnt -DUSE_POPCNT -flto -c -o syzygy/tbprobe.o syzygy/tbprobe.cpp
tt.cpp: In member function 'bool TranspositionTable::save()':
tt.cpp:216:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for (long long i = 0; i < clusterCount * sizeof(Cluster); i += (1 << 30)) { //1GB
                           ^
x86_64-w64-mingw32-g++ -o sugar benchmark.o bitbase.o bitboard.o book.o endgame.
This patch resolves the warning, but LoadHashfromFile still does not seem to work above 2 GB. Tips?

Code: Select all

bool TranspositionTable::save() {
    std::ofstream b_stream(hashfilename,
        std::fstream::out | std::fstream::binary);
    if (b_stream)
    {
        //b_stream.write(reinterpret_cast<char const *>(table), clusterCount * sizeof(Cluster));
        for (unsigned long long i = 0; i < clusterCount * sizeof(Cluster); i += (1 << 30)) { //1GB
#ifndef __min
    #define __min(a,b) (((a) < (b)) ? (a) : (b))
#endif
            unsigned long long j = __min((1 << 30), (clusterCount * sizeof(Cluster)) - i);
            b_stream.write(reinterpret_cast<char const *>(table) + i, j);
        }
        return (b_stream.good());
    }
    return false;
}
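A guess rather than a verified fix: the patch above chunks only save(), so if the load side still issues one multi-gigabyte read(), that would match the symptom of files over 2 GB coming back empty. A hypothetical chunked load() mirroring the patched save() (the same table / clusterCount / hashfilename members are assumed; untested):

Code: Select all

#include <algorithm>
#include <fstream>

bool TranspositionTable::load() {
    std::ifstream b_stream(hashfilename,
        std::fstream::in | std::fstream::binary);
    if (b_stream)
    {
        const unsigned long long total = clusterCount * sizeof(Cluster);
        // Read in 1 GB chunks, mirroring save(), instead of one huge read().
        for (unsigned long long i = 0; i < total; i += (1ULL << 30)) {
            unsigned long long j = std::min(1ULL << 30, total - i);
            b_stream.read(reinterpret_cast<char*>(table) + i, j);
        }
        return b_stream.good();
    }
    return false;
}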
Zerbinati
Posts: 122
Joined: Mon Aug 18, 2014 7:12 pm
Location: Trento (Italy)

Re: Stockfish version with hash saving capability

Post by Zerbinati »

This is Daniel's original code:

Code: Select all

bool TranspositionTable::save() {
    std::ofstream b_stream(hashfilename,
        std::fstream::out | std::fstream::binary);
    if (b_stream)
    {
        //b_stream.write(reinterpret_cast<char const *>(table), clusterCount * sizeof(Cluster));
        for (long long i = 0; i < clusterCount * sizeof(Cluster); i += (1 << 30)) { //1GB
            long long j = __min((1 << 30), (clusterCount * sizeof(Cluster)) - i);
            b_stream.write(reinterpret_cast<char const *>(table) + i, j);
        }
        return (b_stream.good());
    }
    return false;
}