peter wrote:The time for saving and loading is not the problem.
The problem is that you always have either the new or the old hash; in principle it's the same issue as overwriting by ongoing analysis, which you have to deal with anyway.
Only being able to merge the new and the old hash would help with that.
When it takes 10 minutes to load one hash and to save it, and you have 4-5 positions (that means 4-5 hashes of 2 GB each), it becomes a problem... I usually load the last saved hash for an opponent, I run an analysis (which will probably overwrite many entries before that game ends), and when I decide on the move I save the hash.
I guess you mean hashes should be merged and pruned. If not, they'd become huge, at least for my meagre 4 GB RAM PC.
SF PA-GTB had the UCI command Merge Hash, but it was for its learning file.
At the moment, for all engines with a Save Hash option, reloading means deleting the hash in use. Merging e.g. 8 GB of saved hash with 8 GB already in use shouldn't take longer than reloading 8 GB, or would it? Of course I don't really know, not being able to program that myself.
Neither do I understand the point of loading .epd files into hash. If the positions haven't already been evaluated by the engine, what does the engine gain?
10 minutes to save 2 GB seems like a lot to me; I'd guess it's more like 10 seconds on my old 12-core Intel, and I don't think it's mainly a question of CPU.
The speed of the disk (SSD) and of the RAM probably matters more.
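A merge like the one described above could, in principle, be a single pass over both tables that keeps the deeper entry whenever a position key occurs in both, so its cost should indeed be comparable to a plain reload. A minimal Python sketch, with illustrative names throughout (not from any real engine), assuming each entry is a (depth, score, best_move) tuple keyed by a 64-bit Zobrist key:

```python
def merge_tables(old, new):
    """Merge two saved hash dumps, preferring the deeper (more reliable)
    entry when both tables contain the same position key."""
    merged = dict(old)                      # start from the saved table
    for key, entry in new.items():
        # entry[0] is the search depth; keep whichever entry is deeper
        if key not in merged or entry[0] > merged[key][0]:
            merged[key] = entry
    return merged

# Tiny usage example with made-up keys and scores:
old = {0xA1: (20, 35, "e2e4"), 0xB2: (12, -10, "g1f3")}
new = {0xB2: (18, -5, "d2d4"), 0xC3: (8, 0, "c2c4")}
merged = merge_tables(old, new)
```

Since the merge touches each entry of each table exactly once, the dominant cost remains reading the saved file from disk, just as with reloading.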
peter wrote:At the moment, for all engines with a Save Hash option, reloading means deleting the hash in use. Merging e.g. 8 GB of saved hash with 8 GB already in use shouldn't take longer than reloading 8 GB, or would it? [...]
SF PA-GTB had no problems with PH file size. It's usually a few MB, and when merged with new positions its size increases. If you merge an 8 GB hash with an 8 GB file, you get a 16 GB file, I guess, unless you can somehow "prune" it. Then we have the same problem as losing some key variation scores during analysis: you have to cut something anyway. Shallower positions, I guess.
I think Dann asked for an import-epd feature because he already has a lot of evaluated positions. Maybe he wants to run some minimax over them.
About the time it takes to load/save... I always worry the process could be incomplete, so I wait. I check whether there are disk accesses, and I'm still never sure it has finished. Maybe a message like "hash loaded" / "hash saved" would be useful to be sure about it.
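Pruning "shallower positions" from an oversized merged table, as suggested above, could be sketched like this: keep only the N deepest entries, on the assumption that deep entries carry the most valuable scores. A hypothetical Python sketch (names are illustrative; assumes each entry stores its search depth as the first field):

```python
import heapq

def prune_to_budget(table, max_entries):
    """Shrink a hash table to at most max_entries entries, keeping the
    deepest ones. Each entry is (depth, score, move)."""
    if len(table) <= max_entries:
        return dict(table)                      # nothing to cut
    # nlargest avoids sorting the whole table when the budget is small
    kept = heapq.nlargest(max_entries, table.items(),
                          key=lambda kv: kv[1][0])   # compare by depth
    return dict(kept)

# Usage: a 3-entry table pruned down to its 2 deepest entries.
table = {1: (20, 0, "a1b1"), 2: (5, 0, "c1d1"), 3: (12, 0, "e1f1")}
pruned = prune_to_budget(table, 2)
```

Depth is only one possible criterion; a real engine might also weight entries by age or bound type, but the depth-first cut matches the "shallower positions" idea in the post.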
Rodolfo Leoni wrote:If you merge an 8 GB hash with an 8 GB file, you get a 16 GB file, I guess, unless you can somehow "prune" it. Then we have the same problem as losing some key variation scores during analysis: you have to cut something anyway.
The engine will do that anyway as the computation progresses, even with 32 GB, which is the amount of hash I use most of the time.
Of course, the bigger the hash, the more already-evaluated positions and moves can be taken into comparison.
And why not merge 1 GB of old with 1 GB of new hash, for short saving times?
Clearly-best single moves that are only found after some search time would still be kept in the saved hash once found.
And merging should not take more time than reloading does; at least I'm still hoping so.
Yes, I guess it's an interesting option for those whose PCs have a lot of RAM. I hope Daniel will consider it, and I hope all that stuff doesn't take too much of his time.
Is your NPS really 50956798? That's very fast, quicker than a Threadripper. May I ask what system you are using?
Not sure what your first sentence is trying to ask. Yes, the nodes/sec is 50956798, which is indeed very fast. I don't know anything about Threadripper. I have 2 x Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz and 64GB RAM. I tend to use asmFish because it is NUMA-aware, which seems to improve performance on my system.
The following position is a mate in 33, the longest mate in the king, bishop, and knight versus king endgame.
It takes my computer a long time to find the mate in 33. How long do you estimate it would take on your computer? Just 10 minutes?
Merging hashes is an esoteric concept, as the hash holds no full information about the positions, i.e. you don't know which positions are stored. You only know during search that the keys match.
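To illustrate that point: a typical transposition-table entry stores only a fragment of the position's Zobrist key for hit verification, never the position itself, so the stored positions cannot be enumerated or reconstructed from the table. A minimal sketch (table size, field names, and the 16-bit fragment width are all illustrative, not taken from any real engine):

```python
NUM_BUCKETS = 1 << 20                          # illustrative table size

def store(table, zobrist_key, depth, score):
    """Place an entry in its bucket, keeping only the high 16 key bits."""
    table[zobrist_key % NUM_BUCKETS] = {
        "key16": zobrist_key >> 48,            # verification fragment only
        "depth": depth,
        "score": score,
    }

def probe(table, zobrist_key):
    """Return the stored entry only if the saved key fragment matches."""
    bucket = zobrist_key % NUM_BUCKETS         # low bits select the slot
    stored = table.get(bucket)
    if stored is not None and stored["key16"] == (zobrist_key >> 48):
        return stored                          # high bits confirm the hit
    return None                                # empty slot or collision
```

A probe with the original key finds the entry; a different position that happens to land in the same bucket fails the fragment check. In neither case does the table reveal what position produced the entry, which is why merging can only compare keys, not positions.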