Dann Corbit wrote: I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
Looks exciting. Great that you are doing this.
Stockfish version with hash saving capability
Moderators: hgm, Rebel, chrisw
-
- Posts: 12038
- Joined: Mon Jul 07, 2008 10:50 pm
Re: Stockfish version with hash saving capability
-
- Posts: 545
- Joined: Tue Jun 06, 2017 4:49 pm
- Location: Italy
Re: Stockfish version with hash saving capability
I think you could avoid entry-aging effects if you expand the "never clear hash" code to any TC level. It's just a matter of working on the code. Carpentry mode, of course.
Dann Corbit wrote: I had to fiddle with it to make it work in a way that I understood.
It loaded 4 million EPD rows in 22 seconds.
In the first game, it got a draw with Asmfish.
I will need a deeper pile of EPD for sure.
I saw an advantage develop for the hash loaded version, but it got slowly drained away. With only 4 million hash entries loaded, it would be pretty sparse in reality.
I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
F.S.I. Chess Teacher
-
- Posts: 142
- Joined: Wed Jul 08, 2015 12:30 pm
Re: Stockfish version with hash saving capability
Hi Daniel. Do you think it would be possible to add the option of saving the hash table as EPD positions, or as a book in bin format, for offline viewing?
cdani wrote: As I think many people will not see it, since it is buried in a long thread, I publish this here again.
I added to Stockfish the capability of saving the full hash to a file, to allow the user to recover a previous analysis session and continue it. It has the same added UCI options as in Andscacs.
The saved hash file will be the same size as the hash memory, so if you defined 4 GB of hash, that will be the file size. Saving and loading such big files can take some time.
Source and executable:
www.andscacs.com/downloads/stockfish_x6 ... vehash.zip
To support this I have added 4 new UCI parameters:
option name NeverClearHash type check default false
option name HashFile type string default hash.hsh
option name SaveHashtoFile type button
option name LoadHashfromFile type button
You can set the NeverClearHash option to prevent the hash from being cleared by a Clear Hash or ucinewgame command.
The HashFile parameter is the full file name with path information. If you don't set the path, it will be saved in the current folder. It defaults to hash.hsh.
To save the hash, stop the analysis and press the SaveHashtoFile button in the uci options screen of the GUI.
To load the hash file, load the game you are interested in, load the engine without starting it, and press the LoadHashfromFile button in the uci options screen of the GUI. Now you can start the analysis.
I have not tested it much. If you find anything, just tell me.
The modified code is between
//dani170724
and
//enddani170724
Thanks in advance.
Giovanni
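For reference, a session exercising these options from a raw UCI console might look like the following (the file name and position are just placeholders, not from the post; button-type options are sent without a value):

```
uci
setoption name Hash value 4096
setoption name HashFile value mygame.hsh
setoption name NeverClearHash value true
position startpos
go infinite
stop
setoption name SaveHashtoFile
quit
```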
-
- Posts: 12538
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: Stockfish version with hash saving capability
I think that never clear hash is too simple to work right.
Rodolfo Leoni wrote: I think you could avoid entry-aging effects if you expand the "never clear hash" code to any TC level. It's just a matter of working on the code. Carpentry mode, of course.
Dann Corbit wrote: I had to fiddle with it to make it work in a way that I understood.
It loaded 4 million EPD rows in 22 seconds.
In the first game, it got a draw with Asmfish.
I will need a deeper pile of EPD for sure.
I saw an advantage develop for the hash loaded version, but it got slowly drained away. With only 4 million hash entries loaded, it would be pretty sparse in reality.
I figure with a billion entries in the hash, it might have a terrific advantage. Further, after it minimaxes for a while, the hash should get stronger and stronger (in theory). I imagine that a lot of good entries will get aged out of the table, though.
I believe that a second, PV-only hash will be necessary.
With 64 cores, at long time control, the hash fills up in a surprisingly short time. When that happens, you can see the performance degrade.
If you figure that one core can compute about a million nodes in a second, then 64 cores can compute 64 million (many of them dups, of course).
If you simply do not replace the hash nodes at all, eventually the hash table will be filled with useless data. Consider hash nodes with 32 men in them when there are 16 men left on the board. What good are they doing? None with the current search, though they might have been hot stuff with a prior search.
Now, there is some magic in storing pv nodes in a special place because pv nodes are so rare. I think it might be worthwhile to store pv nodes by material signature.
There are a lot of things to think about with the stored computation strategy. I do not think we have sorted it out properly yet. But we will eventually.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
-
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Stockfish version with hash saving capability
Not really, as the hash memory does not have information about the positions, only enough information to match the current position of the engine.
giovanni wrote: Hi Daniel. Do you think that it would be possible to add the possibility to save the hash table as epd positions or as a book in bin format for offline viewing?
Thanks in advance.
Giovanni
Daniel José - http://www.andscacs.com
-
- Posts: 545
- Joined: Tue Jun 06, 2017 4:49 pm
- Location: Italy
Re: Stockfish version with hash saving capability
The option works fine for position analysis. It's performing excellently for my purposes. But I agree, it's a shortcut: you'll keep your EPD archive in the hash until a deeper search depth is reached. After that point you'll start losing everything.
Dann Corbit wrote: I think that never clear hash is too simple to work right.
I believe that a second pv only hash will be necessary.
With 64 cores, at long time control, the hash fills up in a surprisingly short time. When that happens, you can see the performance degrade.
If you figure that one core can compute about a million nodes in a second, then 64 cores can compute 64 million (many of them dups, of course).
If you simply do not replace the hash nodes at all, eventually the hash table will be filled with useless data. Consider hash nodes with 32 men in them when there are 16 men left on the board. What good are they doing? None with the current search, though they might have been hot stuff with a prior search.
Now, there is some magic in storing pv nodes in a special place because pv nodes are so rare. I think it might be worthwhile to store pv nodes by material signature.
There are a lot of things to think about with the stored computation strategy. I do not think we have sorted it out properly yet. But we will eventually.
SF PA GTB was great because pv nodes were kept in a separate hash. Critter worked fine too, except it had no import positions feature.
There's a project to update SF PA GTB with more modern and stronger code, but it has just started. There is already progress, as it's no longer "GTB". It has become a kind of "Stockfish PA Syzygy" and it seems to work fine. A lot of things remain to do, anyway.
F.S.I. Chess Teacher
-
- Posts: 12538
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: Stockfish version with hash saving capability
If the hash code is standardized like that for the polyglot book, it should be possible to collect some information from the hash, similar to the way that the polyglot book can be scanned.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
-
- Posts: 2204
- Joined: Sat Jan 18, 2014 10:24 am
- Location: Andorra
Re: Stockfish version with hash saving capability
This is the info in an entry:
No way to know the position.
Think that is optimized for searching, not for user's work.
Code: Select all
uint16_t key16;
uint16_t move16;
int16_t value16;
int16_t eval16;
uint8_t genBound8;
int8_t depth8;
Daniel José - http://www.andscacs.com
-
- Posts: 122
- Joined: Mon Aug 18, 2014 7:12 pm
- Location: Trento (Italy)
Re: Stockfish version with hash saving capability
Has anyone tried to reload the hash correctly?
It only seems to store a useful hash up to 2 GB. From 4 GB upward, a file of the corresponding size appears in Explorer and the memory moves in Task Manager, but the file is apparently empty, as reloaded entries do not immediately come back as already "found".
Daniel's original code generates a warning when compiling with MinGW.
This patch resolves the warning, but LoadHashfromFile still does not seem to work over 2 GB. Tips?
Code: Select all
-O3 -DIS_64BIT -msse -msse3 -mpopcnt -DUSE_POPCNT -flto -c -o syzygy/tbprobe.o syzygy/tbprobe.cpp
tt.cpp: In member function 'bool TranspositionTable::save()':
tt.cpp:216:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
 for (long long i = 0; i < clusterCount * sizeof(Cluster); i += (1 << 30)) { //1GB
                         ^
x86_64-w64-mingw32-g++ -o sugar benchmark.o bitbase.o bitboard.o book.o endgame.
Code: Select all
bool TranspositionTable::save() {
  std::ofstream b_stream(hashfilename,
      std::fstream::out | std::fstream::binary);
  if (b_stream)
  {
      //b_stream.write(reinterpret_cast<char const *>(table), clusterCount * sizeof(Cluster));
      // Write the table in 1 GB chunks: a single write() of the whole
      // table can fail for very large sizes, and the unsigned loop
      // index avoids the -Wsign-compare warning of the original code.
      for (unsigned long long i = 0; i < clusterCount * sizeof(Cluster); i += (1ULL << 30)) { //1GB
#ifndef __min
#define __min(a,b) (((a) < (b)) ? (a) : (b))
#endif
          // The last chunk may be smaller than 1 GB.
          unsigned long long j = __min((1ULL << 30), (clusterCount * sizeof(Cluster)) - i);
          b_stream.write(reinterpret_cast<char const *>(table) + i, j);
      }
      return (b_stream.good());
  }
  return false;
}
-
- Posts: 122
- Joined: Mon Aug 18, 2014 7:12 pm
- Location: Trento (Italy)
Re: Stockfish version with hash saving capability
This is Daniel's original code:
Code: Select all
bool TranspositionTable::save() {
std::ofstream b_stream(hashfilename,
std::fstream::out | std::fstream::binary);
if (b_stream)
{
//b_stream.write(reinterpret_cast<char const *>(table), clusterCount * sizeof(Cluster));
for (long long i = 0; i < clusterCount * sizeof(Cluster); i += (1 << 30)) { //1GB
long long j = __min((1 << 30), (clusterCount * sizeof(Cluster)) - i);
b_stream.write(reinterpret_cast<char const *>(table) + i, j);
}
return (b_stream.good());
}
return false;
}