7 piece tablebases... when?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: 7 piece tablebases... when?

Post by bob »

Kirill Kryukov wrote:
bob wrote:
Arpad Rusz wrote:An interesting project by Kirill Kryukov:
http://kirill-kryukov.com/chess/discuss ... f=6&t=5815
It is one thing to do 7 piece files, it is another thing to store them, and yet another thing to use them in a live search. I am not sure technology is "there" today, both from a size and a speed perspective. And I am not sure they are ever going to be really useful, other than for specific endgame studies.
Thanks to Arpad for bringing it up here.

I think it's clear now that many people find 6-piece tablebases useful. Some for playing, some for composing, many more for analyzing. Many choose to have only part of the 6-piece set, so the size of the whole set is not too important. So, if we could already have KRPPKRP or KPPPKPP now, how many of us would refuse? Certainly you'd find space on your hard drive for those tables, if they were available.
Hard to say. Those types of files (KPPPKPP) are on the order of 8 TB per pair before compression, assuming 1 byte per position for DTC rather than DTM; double that for DTM. That's a daunting thing to store, to download, to probe / cache in a search, today. How many terabytes do you have sitting around? :)
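For a rough sense of where a figure like "8 TB per pair" comes from, here is a back-of-envelope sketch (my arithmetic, not bob's): index each of the 7 pieces on its own square, store one byte per position, and ignore symmetry and legality.

Code:

/* Back-of-envelope size estimate for a 7-man table such as KPPPKPP.
 * Deliberately crude assumptions: every piece indexed on any of 64
 * squares, one byte per position (a DTC-style count), no symmetry or
 * legality reductions.  Real generators index far more tightly, so
 * treat these numbers as upper bounds only. */
#include <stdio.h>

int main(void) {
    const double TIB = 1024.0 * 1024.0 * 1024.0 * 1024.0; /* bytes per TiB */

    double naive = 1.0;                 /* 64^7 position indices      */
    for (int i = 0; i < 7; i++)
        naive *= 64.0;

    double pawn_aware = 64.0 * 64.0;    /* two kings on 64 squares... */
    for (int i = 0; i < 5; i++)         /* ...five pawns on 48        */
        pawn_aware *= 48.0;

    printf("naive 64^7 index    : %4.1f TiB per side, %4.1f TiB per wtm/btm pair\n",
           naive / TIB, 2.0 * naive / TIB);
    printf("48-square pawn index: %4.1f TiB per side, %4.1f TiB per wtm/btm pair\n",
           pawn_aware / TIB, 2.0 * pawn_aware / TIB);
    return 0;
}

The crude 64^7 index comes out at 4 TiB per side-to-move file, or 8 TiB per wtm/btm pair; restricting the five pawns to their 48 legal squares already cuts that to about 2 TiB per pair, and symmetry, tighter indexing and compression reduce it further. Either way, a single pawn-heavy 7-man endgame is terabyte-scale before compression.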



Personally I believe the technology is already here for producing and sharing the whole 7-piece set. The key is not to try building all the tables by yourself (the traditional approach), but to involve the community. So developing an efficient infrastructure for this project is as important as the generator itself.
Today's internet backbone, even if you can get on a 1 Gbit link locally, is not up to this task. This is not a simple SETI-style distributed project. The data is computed sequentially, and you have to finish one pass before starting the next. That is not so easy to distribute, particularly outside a lab with a really good network like InfiniBand or 10 Gbit Ethernet.
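To see why the computation is so stubbornly sequential, here is a minimal sketch of the outer loop of a retrograde (backward-induction) generator. The table layout and helper names are invented for illustration, and a real generator streams the data from disk in slices, but the data dependency is the same.

Code:

/* Minimal sketch of the outer loop of a retrograde tablebase build.
 * Illustrative only: positions solved in pass n are the only source
 * of new wins/losses in pass n+1, so the passes are strictly
 * sequential. */
#include <stdint.h>
#include <stddef.h>

enum { UNKNOWN = 0xFF };      /* value for a not-yet-solved position  */

extern size_t  table_size;    /* number of indexed positions          */
extern uint8_t *dtc;          /* one byte per position (DTC metric)   */

/* Hypothetical helper: try to assign position i the value n, looking
 * only at successors already solved with a value below n.  Returns 1
 * if the position was newly solved in this pass.                     */
extern int try_to_solve(size_t i, int n);

void generate(void) {
    for (int n = 1; ; n++) {           /* one pass per DTC value      */
        size_t newly_solved = 0;
        for (size_t i = 0; i < table_size; i++) {
            if (dtc[i] == UNKNOWN && try_to_solve(i, n))
                newly_solved++;
        }
        if (newly_solved == 0)         /* fixed point reached: the    */
            break;                     /* remaining UNKNOWNs are draws */
    }
}

Pass n+1 can only promote a position to "solved" by looking at positions solved in pass n, so every node in a distributed build has to see the complete result of the previous pass before it can start the next one. That is what makes a loosely-coupled SETI-style farm a poor fit compared to a cluster with a fast interconnect.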


If the key specs for the generator and the infrastructure are designed by the community, and if sufficient motivation is provided for developers to comply with those specs, the end result will be practically a paradise on earth - a complete 7-piece solution available to anyone at any time. How much of this will come true is up to the community, which means all of us.
The usefulness is greatly limited when you have to go across a network to access the data. You are not going to use that in a search, for certain.


Note. The infrastructure should provide support for three tasks: 1. Distributed generation of the tables. 2. Distribution of the completed tables. 3. Remote probing. So those who can't store a local copy of the tables will still have access to the whole 7-piece solution, just at a much lower speed.

Note 2. When building a DTZ (or DTZ50) table, the sub-endgame tables can be in WDL (or WDL50). This means that when the whole 4-vs-3 set is solved in WDL, you can suddenly build KPPPKPP in DTZ without building any other DTZ first. This property of DTZ will significantly accelerate our path to the useful tables.
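To spell out the dependency in Note 2: a capture or promotion during DTZ generation drops into a sub-endgame whose table is already finished, and because such a move resets the 50-move counter, only that sub-endgame's win/draw/loss value matters, never its depth. Here is a minimal sketch of that idea; all the type and helper names are hypothetical, not taken from any actual generator.

Code:

/* Why a DTZ (or DTZ50) generator only needs WDL for its sub-endgames.
 * Everything here is hypothetical pseudocode in C clothing.          */

typedef struct position position;                /* opaque position   */
enum wdl { WDL_LOSS = -1, WDL_DRAW = 0, WDL_WIN = 1 };

extern int      count_captures(const position *p);       /* + promotions */
extern position *capture_result(const position *p, int k);
extern enum wdl probe_wdl_subtable(const position *p);    /* finished
                                  sub-endgame, value for the mover     */
extern int      best_dtz_via_noncaptures(const position *p); /* table
                                  currently being generated            */

/* DTZ of a position the side to move wins: if some capture or
 * promotion wins outright, a single zeroing move ends the count
 * (DTZ = 1); otherwise the win runs through non-capturing moves
 * inside the table we are building.  (Pawn pushes also zero the
 * counter but stay in the same table, so they only need WDL data
 * that is available while generating; omitted here for brevity.)     */
int dtz(const position *p) {
    int n = count_captures(p);
    for (int k = 0; k < n; k++)
        if (probe_wdl_subtable(capture_result(p, k)) == WDL_WIN)
            return 1;                  /* only WDL was needed          */
    return 1 + best_dtz_via_noncaptures(p);
}

That is why finishing the 4-vs-3 set in WDL is enough to start a DTZ build of KPPPKPP directly, without any other DTZ tables in place.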
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: 7 piece tablebases... when?

Post by bob »

kranium wrote:
kranium wrote:
Terry McCracken wrote:
kranium wrote:
Peter Skinner wrote:You also have to remember, the engine itself has to be programmed to use the 6-man and 7-man TBs.

If they aren't, then they only find the 3-4-5 man bases and use those. Just because you have downloaded them doesn't mean your engine _uses_ them.

Crafty can be compiled to use the 6 man bases. Which others use them?

Where does one even download the 6 man bases anymore?
They have been available for Ippolit engines since Nov 11, 2010...

http://ippolit.wikispaces.com/TotalBases+Download
http://ippolit.wikispaces.com/TripleBases+Download
Do they really work correctly? You would need a huge and very fast HDD.

In memory it would be crazy, terabytes! Very expensive.
Hi Terry,

Yes, they work very well.
The IvanHoe developers have occupied themselves for the last year with Robbobase improvements, bug fixes, and further development, etc., instead of ELO increases, much to the chagrin of many.

And many users have downloaded the complete set (including blocked) and use them for analysis...

As you say, they do demand ample HD space... the 6-piece bases alone take up ~700 GB.
Luckily, HDD space is also increasing at a tremendous (exponential?) rate.
Many new systems today have 1 terabyte or more. How much does a fast internal 1-terabyte drive cost?
(Not much, IMO.)

Immortal forum has much more information regarding this...

Norm
What's really exciting about the Robbobase development is some of the new 'bulk load' code, and the consequent UCI options:

The user can specify a directory to 'bulk load'...
So, on a system with oodles of RAM, the data can be loaded into memory, with subsequent lightning-fast access.
Goodbye slow disk access and thrashing...

from ROBBO_TRIPLE_INFO (included with IvanHoe999947c):

"setoption name RobboTripleBulkLoadThisDirectory value /media/disk/RobboTripleBase/5"
This option will BulkLoad the entire 5/ directory. The same can be done for Z/ and 33/ and any others. They will then sit in RAM and no longer in the Dynamic system. The 5/ directory can be around 570 MB. You can detach with the complementary RobboTripleBulkDetachThisDirectory; they will then be evicted from RAM but remain available via Dynamic load from disk.

Time for all of us to get a 'dedicated chess monster'... with many cores, tons of RAM, and huge HDDs to hold the huge 'databases' of chess (endings and more?) which are inevitably coming...
'Chapeau' to the IvanHoe developers!
You can forget about "bulk loading" for 6-piece files. Not many machines even have 64 GB of memory, much less terabytes. And for the 7-piece files you need terabytes...
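A quick sanity check on the sizes, using only figures already quoted in this thread (the ~570 MB 5-piece directory from the Robbo docs, kranium's ~700 GB for the 6-piece set, and 64 GB as a generously equipped machine of the day):

Code:

/* Quick sanity check on "bulk loading", using only sizes already
 * quoted in this thread.                                             */
#include <stdio.h>

int main(void) {
    double five_piece_gb = 570.0 / 1024.0;  /* ~570 MB directory      */
    double six_piece_gb  = 700.0;           /* whole 6-piece set      */
    double big_ram_gb    = 64.0;            /* lots of RAM, today     */

    printf("5-piece directory : %5.2f GB  (fits in RAM easily)\n",
           five_piece_gb);
    printf("6-piece set       : %5.0f GB  (~%.0fx a %g GB machine)\n",
           six_piece_gb, six_piece_gb / big_ram_gb, big_ram_gb);
    return 0;
}

So bulk loading is attractive for the 5-piece material, and perhaps for a hand-picked subset of 6-piece tables, but the full 6-piece set is an order of magnitude past 64 GB of RAM, and uncompressed 7-piece material is further off still.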
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: 7 piece tablebases... when?

Post by bob »

Dann Corbit wrote:
UncombedCoconut wrote:
Milos wrote: I don't know how you calculated it, but 2^36 is only 64GB which is ridiculously small.
For once Terry wrote something meaningful.
The new non-volatile, very-fast-access storage technology will become a reality in the next 2-3 years on enterprise servers, and in less than 10 in the wider consumer market, and it will make Flash look like a bad joke. It will have a write cycle under 1 µs and a read cycle around 100 ns, making 10M reads per second a reality, which is faster than today's NPS even on the fastest machines.
64GB is ridiculously small? Good grief... would you care to trade incomes?
I'm not quite sure which Flash replacement you're talking about. Is it the memristor system that HP is developing?
http://www.frys.com/product/5833623
80 GB is $160
I don't think that qualifies as 'wealthy'
You left off the important part of the math. 80 GB will hold maybe _one_ EGTB file. With 7-piece files, there are a bunch. You'd need thousands of those things, so suddenly it does become expensive.
User avatar
Kirill Kryukov
Posts: 494
Joined: Sun Mar 19, 2006 4:12 am
Full name: Kirill Kryukov

Re: 7 piece tablebases... when?

Post by Kirill Kryukov »

bob wrote:Hard to say. Those types of files (KPPPKPP) are on the order of 8 TB per pair before compression, assuming 1 byte per position for DTC rather than DTM; double that for DTM. That's a daunting thing to store, to download, to probe / cache in a search, today. How many terabytes do you have sitting around? :)
DTM and DTC are challenging metrics, but more practical ones do exist. The better order would be to do the light metrics first (WDL & WDL50), then DTZ and DTZ50 for selected endgames, then DTM for everything if we feel like we're up to it.
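What makes WDL and WDL50 "light" is the per-position value range: a handful of states rather than a move or ply counter. A rough comparison, with value ranges that are my own approximations rather than any particular generator's encoding:

Code:

/* Rough per-position information content of the different metrics,
 * before compression.  The value ranges are approximate; the point
 * is only the ordering light (WDL) -> DTZ -> DTM.                    */
#include <stdio.h>
#include <math.h>

int main(void) {
    struct { const char *metric; double values; } m[] = {
        { "WDL   (win / draw / loss)",                        3.0 },
        { "WDL50 (adds cursed win / blessed loss)",           5.0 },
        { "DTZ50 (plies to the next zeroing move, <= 100)",  101.0 },
        { "DTM   (moves to mate, assuming a 512-move cap)",  512.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-50s ~%4.1f bits/position\n",
               m[i].metric, log2(m[i].values));
    return 0;
}

On top of the smaller value range, WDL tables compress much better because long stretches of the index share a single value, which is why doing the light metrics first is the pragmatic ordering.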
bob wrote:Today's internet backbone, even if you can get on a 1 Gbit link locally, is not up to this task. This is not a simple SETI-style distributed project. The data is computed sequentially, and you have to finish one pass before starting the next. That is not so easy to distribute, particularly outside a lab with a really good network like InfiniBand or 10 Gbit Ethernet.
WDL & WDL50 should be practical to distribute even today. Selected DTZ & DTZ50, too. What comes next, we'll have to wait and see. Also, there's always the option of snail-mailing some hard drives around the globe. :-)
bob wrote:The usefulness is greatly limited when you have to go across a network to access the data.
Limited usefulness may still be better than nothing. Keep the more important tables locally and access the more exotic ones via the network. Also, the whole set in WDL / WDL50 should be possible to store locally.
bob wrote:You are not going to use that in a search, for certain.
Aren't you afraid that someone will save this quote, and a few years later it will sound like Gates' "640 kb of RAM should be enough for everyone"? :-)
Dann Corbit
Posts: 12662
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: 7 piece tablebases... when?

Post by Dann Corbit »

bob wrote:
Dann Corbit wrote:
UncombedCoconut wrote:
Milos wrote: I don't know how you calculated it, but 2^36 is only 64GB which is ridiculously small.
For once Terry wrote something meaningful.
The new non-volatile, very-fast-access storage technology will become a reality in the next 2-3 years on enterprise servers, and in less than 10 in the wider consumer market, and it will make Flash look like a bad joke. It will have a write cycle under 1 µs and a read cycle around 100 ns, making 10M reads per second a reality, which is faster than today's NPS even on the fastest machines.
64GB is ridiculously small? Good grief... would you care to trade incomes?
I'm not quite sure which Flash replacement you're talking about. Is it the memristor system that HP is developing?
http://www.frys.com/product/5833623
80 GB is $160
I don't think that qualifies as 'wealthy'
You left off the important part of the math. 80 GB will hold maybe _one_ EGTB file. With 7-piece files, there are a bunch. You'd need thousands of those things, so suddenly it does become expensive.
I agree, of course. I was addressing:
"64GB is ridiculously small? Good grief"

On the other, other hand -- memory size (though tragically not speed) also increases exponentially over time per dollar. So in ten years, we will see 80 TB for less than $200 (if I have my Nostradamus hat on straight).
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: 7 piece tablebases... when?

Post by bob »

Kirill Kryukov wrote:
bob wrote:Hard to say. Those types of files (KPPPKPP) are on the order of 8 TB per pair before compression, assuming 1 byte per position for DTC rather than DTM; double that for DTM. That's a daunting thing to store, to download, to probe / cache in a search, today. How many terabytes do you have sitting around? :)
DTM and DTC are challenging metrics, but more practical ones do exist. The better order would be to do the light metrics first (WDL & WDL50), then DTZ and DTZ50 for selected endgames, then DTM for everything if we feel like we're up to it.
bob wrote:Today's internet backbone, even if you can get on a 1 Gbit link locally, is not up to this task. This is not a simple SETI-style distributed project. The data is computed sequentially, and you have to finish one pass before starting the next. That is not so easy to distribute, particularly outside a lab with a really good network like InfiniBand or 10 Gbit Ethernet.
WDL & WDL50 should be practical to distribute even today. Selected DTZ & DTZ50, too. What comes next, we'll have to wait and see. Also, there's always the option of snail-mailing some hard drives around the globe. :-)
bob wrote:The usefulness is greatly limited when you have to go across a network to access the data.
Limited usefulness may still be better than nothing. Keep the more important tables locally and access the more exotic ones via the network. Also, the whole set in WDL / WDL50 should be possible to store locally.
bob wrote:You are not going to use that in a search, for certain.
Aren't you afraid that someone will save this quote, and a few years later it will sound like Gates' "640 kb of RAM should be enough for everyone"? :-)
Remember, you are talking to the person who doesn't even use 3-4-5 piece files any longer in the tournaments I play in. I ran a cluster test to see if EGTBs help. At my short time controls, they were worse. But I did not have time to run longer matches, and copying the 3-4-5 piece files back to a node when I use it is painful (files are deleted from a node after a set of N (typically 16) games has completed)...

It appears that for fast games, egtb slowdown is more significant than the small gain from exact scores, since many of the EGTB endings can be played as well without the tables as with (KQ vs KR is trivial, for example)...

BTW, as far as the infamous Bill Gates quote goes, at least by the time my statement is proven wrong, I won't be alive. :) It will be a few years.
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: 7 piece tablebases... when?

Post by Milos »

UncombedCoconut wrote:64GB is ridiculously small? Good grief... would you care to trade incomes?
I'm not quite sure which Flash replacement you're talking about. Is it the memristor system that HP is developing?
64GB is certainly small when you talk about hundreds of terabytes of necessary storage space.
The Flash replacement will be PCM (phase-change memory; you can google it). HP (with its memristor) is a small player in that kind of development (certainly behind the big three ;)). Too much hype, too little commercialization perspective...