Syzygy / egbb discussion

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Ryan Benitez
Posts: 719
Joined: Thu Mar 09, 2006 1:21 am
Location: Portland Oregon

Re: Where are you Houdart?

Post by Ryan Benitez »

syzygy wrote:
Ryan Benitez wrote:
syzygy wrote:
Ryan Benitez wrote:why I should invest an extra 80 GB of space.
Compared to?
Existing bitbases solutions.
I provide a 68.2 GB 6-men "bitbase" solution. The 81.9 GB of DTZ is an optional addendum. I am not getting your point, I'm afraid.

Are you thinking of any other 6-men "bitbase" solution?

I only know of Robbobases. They are 100 GB or so with an optional 450 GB addendum.
My mistake on the space vs pieces. I simply wanted data showing that this new solution is in practice better than existing ones. I last tested bitbases and TBs back in 2005 and 2006 and found Scorpio bitbases to be the most practical solution. Until I see new data supporting otherwise, I see no reason for people to put down existing solutions.
syzygy
Posts: 5774
Joined: Tue Feb 28, 2012 11:56 pm

Re: Where are you Houdart?

Post by syzygy »

mvk wrote:So for now the only alternative is the Shredder bases? Unless I'm mistaken, I can't integrate those into my engine just the same? If they can be integrated, I wasn't aware of that, and that is very nice of course.
As far as I know these 6-men Shredderbases are not available even for use with Shredder (except in private beta).

Regarding the 40 GB number, the main question for me is whether this compares to the 1-sided 157 MB for 5-men or to the 2-sided 441 MB for 5-men. 40 GB seems somewhat too high for 1-sided, but rather low for 2-sided.

For a reasonably objective technological comparison (despite the Robbo hyperbole), see this overview.
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Where are you Houdart?

Post by Daniel Shawul »

That is not objective at all, but I admit it is funny. It says Scorpio bitbases have:

- Disk access only?? It uses both RAM and disk.
- Huffman only?? It uses a self-written DEFLATE algorithm similar to Zlib (LZSS + Huffman). Their RLE+Huffman probably sucks for some tablebases. My choice would be PPM, which does prediction, and a Burrows-Wheeler Transform as in bzip to cut down some GBs.
- My block size is 32kb?? It is 8kb
- Probe type is Hard only?? It is soft & hard.
- Cache type has buckets, with more than 400 LRU-like caches for the 5-men files. Their static LRU cache with a fixed number of buckets probably sucks towards the endgame, where only a few TBs are probed. Those many LRU caches share the same memory and one LRU-ordered list, so resources shift to whichever file is accessed more (see the sketch below). Having said that, I did test their approach first when I started.
- Support is maybe?? Funny people.

That is like six mistakes for me alone.
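
To make the cache point concrete, here is a minimal sketch of the shared-LRU idea described above. The names (BlockKey, load_block, SharedLruCache) are made up for illustration; this is not the actual Scorpio code. The point is that all per-file caches draw from one block pool and one LRU-ordered list, so eviction is global and memory drifts to whichever files are probed most.

Code: Select all

#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct BlockKey {
    int file_id;        // which tablebase file
    uint32_t block_no;  // which block inside that file
    bool operator==(const BlockKey& o) const {
        return file_id == o.file_id && block_no == o.block_no;
    }
};
struct BlockKeyHash {
    size_t operator()(const BlockKey& k) const {
        return std::hash<uint64_t>()((uint64_t(k.file_id) << 32) | k.block_no);
    }
};

class SharedLruCache {
    using Entry = std::pair<BlockKey, std::vector<uint8_t>>;
    size_t max_blocks_;                                   // shared memory budget
    std::list<Entry> lru_;                                // front = most recently used
    std::unordered_map<BlockKey, std::list<Entry>::iterator, BlockKeyHash> index_;
public:
    explicit SharedLruCache(size_t max_blocks) : max_blocks_(max_blocks) {}

    // Return a decompressed block, loading it (and evicting globally) on a miss.
    const std::vector<uint8_t>& get(const BlockKey& key) {
        auto it = index_.find(key);
        if (it != index_.end()) {                  // hit: move to the front
            lru_.splice(lru_.begin(), lru_, it->second);
            return lru_.front().second;
        }
        if (lru_.size() >= max_blocks_) {          // evict the globally oldest block,
            index_.erase(lru_.back().first);       // whichever file it belongs to
            lru_.pop_back();
        }
        lru_.emplace_front(key, load_block(key));  // load_block() is a stand-in for
        index_[key] = lru_.begin();                // "read block from disk + decompress"
        return lru_.front().second;
    }
private:
    std::vector<uint8_t> load_block(const BlockKey&) { return std::vector<uint8_t>(8 * 1024); }
};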
syzygy
Posts: 5774
Joined: Tue Feb 28, 2012 11:56 pm

Re: Where are you Houdart?

Post by syzygy »

It seems the 6-men Robbo bitbases (called TripleBases, which is a far more accurate name) are 87 GB, so less than the 100 GB I mentioned above.
Daniel Shawul wrote:- My block size is 32kb?? It is 8kb
On my screen it says "32K positions".
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Where are you Houdart?

Post by Daniel Shawul »

syzygy wrote:
Daniel Shawul wrote:- My block size is 32kb?? It is 8kb
On my screen it says "32K positions".
OK, I expected it reported in kilobytes, but the rest still stands. The paper is written in such a way as to praise their own bitbases' features, but it is entertaining nonetheless.
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Where are you Houdart?

Post by mvk »

Daniel Shawul wrote:He is talking about 6 men:

Code: Select all

Shredder: 40Gb
Syzygy: 150Gb
Scorpio*: about 50Gb
So in fact the difference is 110 GB of extra space compared to Shredder and 100 GB compared to Scorpio. If we go by the 5-men results it looks like a total waste, but one has to do the 6-men test to know, since there are more cases that need attention. But Diep has used 6-men WDL alone, so bitbases already have some support there too.

*In process of generation.
Seems like comparing apples and oranges here. The Syzygy WDLs are ~68GB, not 150Gb[sic].
Ryan Benitez
Posts: 719
Joined: Thu Mar 09, 2006 1:21 am
Location: Portland Oregon

Re: Where are you Houdart?

Post by Ryan Benitez »

syzygy wrote:
mvk wrote:So for now the only alternative is the Shredder bases? Unless I'm mistaken, I can't integrate those into my engine just the same? If they can be integrated, I wasn't aware of that, and that is very nice of course.
As far as I know these 6-men Shredderbases are not available even for use with Shredder (except in private beta).

Regarding the 40 GB number, the main question for me is whether this compares to the 1-sided 157 MB for 5-men or to the 2-sided 441 MB for 5-men. 40 GB seems somewhat too high for 1-sided, but rather low for 2-sided.

For a reasonably objective technological comparison (despite the Robbo hyperbole), see this overview.
In the past I have considered 6-piece TBs to be impractical compared to handling those endgames in eval. I am open to data showing otherwise. Either way, the data I have seen so far implies exactly what I expected from comparing the 5-piece versions of each system. The part that made me jump in was Houdart blaming Scorpio bitbases for his poor implementation. I think we all know that Houdart was just spreading BS.
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Where are you Houdart?

Post by Daniel Shawul »

mvk wrote:
Daniel Shawul wrote:He is talking about 6 men:

Code: Select all

Shredder: 40Gb
Syzygy: 150Gb
Scorpio*: about 50Gb
So in fact the difference is 110 GB of extra space compared to Shredder and 100 GB compared to Scorpio. If we go by the 5-men results it looks like a total waste, but one has to do the 6-men test to know, since there are more cases that need attention. But Diep has used 6-men WDL alone, so bitbases already have some support there too.

*In process of generation.
Seems like comparing apples and oranges here. The Syzygy WDLs are ~68GB, not 150Gb[sic].
The 68 GB are OK. We are talking about the 80 GB of DTZ tables that are there to cover the 0.1% of cases where the engine needs them to ensure a win. Houdart didn't know how to use bitbases alone, so he screamed "Syzygy super" blah blah. So there you go, sir, your bulk of 80 GB to cover for your lack of skill. :) I am sure they will serve people like him well, but I spent months polishing the way bitbases are used for making progress, so that is some work I did (forgive the arrogance). That is a choice I made, and I worked on a solution along with many authors.

It also seems from Kai's tests that Shredder needs Nalimov TBs to make progress, but at least they don't offer the bitbases and don't offer an implementation either. Syzygy does have a sample implementation, Stockfish, that needs both to make progress, so its size is 150 GB. What are we going to say about Gaviota, which has a soft probe but whose DTM is 7 GB? Just counting what is offered as a complete solution is safer.

Even if we compare sizes of EGBBs, like I said there are choices to be made. For example, Syzygy chose two-sided tables, but then has to do a move generation for every probed position to check whether a capture exists, and then probe into smaller endgames to find the captures' values (a rough sketch follows below). Now I could scream, "Hey, but they need sub-endgames?", but that would be petty; still, it is things like that that were being thrown around. I tried far more than that: prediction by a hard-coded eval() for 5-men, a 1-ply or 2-ply search, neural-net prediction, Quinlan's rule extraction, etc. The result for me was that I could reduce the two-sided sizes significantly, but not by much for the critical one-sided bitbases. So I said to hell with that; again, a choice. Syzygy comes here and talks about this stuff as if it were new, but I guarantee nothing is new under the sun. IMO KnightDreamer's bitbases did a lot of interesting stuff back then, but no one talks about them now. Or Jesper Torp's work on bitbases, which has new ideas in it that everyone learned from, even though the bitbases themselves are not available. Here we are talking about implementations, for god's sake!!
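
To illustrate what that two-sided probing choice means in practice, here is a rough sketch of the flow, using a hypothetical engine API: Position, Move, generate_captures() and probe_wdl_table() are all stand-ins for whatever the host engine provides, and the real Syzygy code handles further details (en passant, cursed wins) that are left out here.

Code: Select all

enum Wdl { LOSS = -2, BLESSED_LOSS = -1, DRAW = 0, CURSED_WIN = 1, WIN = 2 };

// Rough sketch only. A two-sided table may store a "don't care" value when a
// capture is strictly better, so captures are resolved first by probing the
// smaller endgames they lead to; only then is the table value itself trusted.
Wdl probe_wdl(Position& pos) {
    Wdl best_capture = LOSS;
    for (const Move& m : generate_captures(pos)) {   // needs a move generator
        pos.make_move(m);
        Wdl v = Wdl(-int(probe_wdl(pos)));           // recursion ends in smaller tables
        pos.unmake_move(m);
        if (v > best_capture) best_capture = v;
    }
    Wdl table_value = probe_wdl_table(pos);          // lookup in the two-sided table
    return best_capture > table_value ? best_capture : table_value;
}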

Most of us have tested many things before making a choice. For example, Syzygy does a rather cumbersome check in search of size reductions by permuting the piece order (6! orderings); I chose a static PKNBRQ order, which serves well enough (see the sketch below). There is nothing to bicker about here, because the idea comes from somebody's work that doesn't even have a bitbase. So yes, everything is apples and oranges with the size and access stuff. And it is very easy to brainwash chess players into using your stuff, but that only serves your ego. Why don't we all have respect for each other's work? It is not even that important in terms of Elo. Here I am forced to pay back in kind (though everything I say is accurate to what I believe), even though I had forgotten about them for years since I had my fun already. The constant bickering by the other group warrants a "put up or shut up", which is Elo after all, and all seem to be equal so far ...
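
As an illustration of the permutation idea, here is a sketch of what such a search could look like. This is not the actual Syzygy generator code: reencode_sample() is made up, and zlib is used only as a stand-in for the real block compressor.

Code: Select all

#include <algorithm>
#include <cstdint>
#include <vector>
#include <zlib.h>   // stand-in for the real block compressor

// Hypothetical: re-index a sample of the table data according to a piece order.
std::vector<uint8_t> reencode_sample(const std::vector<int>& piece_order);

static size_t compressed_size(const std::vector<uint8_t>& data) {
    uLongf out_len = compressBound(data.size());
    std::vector<Bytef> out(out_len);
    compress2(out.data(), &out_len, data.data(), data.size(), Z_BEST_COMPRESSION);
    return out_len;
}

// Try every ordering of the pieces used to index the table (6! = 720 for a
// 6-man table) and keep whichever compresses the sample best. A static order
// such as PKNBRQ simply skips this search.
std::vector<int> best_piece_order(int num_pieces) {
    std::vector<int> order(num_pieces), best;
    for (int i = 0; i < num_pieces; ++i) order[i] = i;
    size_t best_size = SIZE_MAX;
    do {
        size_t s = compressed_size(reencode_sample(order));
        if (s < best_size) { best_size = s; best = order; }
    } while (std::next_permutation(order.begin(), order.end()));
    return best;
}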
Last edited by Daniel Shawul on Fri Oct 25, 2013 11:58 pm, edited 2 times in total.
syzygy
Posts: 5774
Joined: Tue Feb 28, 2012 11:56 pm

Re: Where are you Houdart?

Post by syzygy »

Ryan Benitez wrote:The part that made me jump in was Houdart blaming Scorpio bitbases for his poor implementation. I think we all know that Houdart was just spreading BS.
Houdart's observation was that all TB solutions he had tried so far suffered from a synchronisation bottleneck when using many threads and that my solution does not, or to a much lesser degree.

I have no personal experience with any of the other solutions, but Houdart obviously has. Simply denying that a synchronisation bottleneck exists and calling him a liar is not exactly conducive to a healthy discussion.

The reason that my implementation does well with many threads is that it leaves all the complications to the OS. Current OSes are pretty good at handling many threads.
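
For concreteness, one way to leave it to the OS, roughly as described, is to memory-map the table files read-only and let every search thread read the same mapping, so caching, eviction and sharing are handled by the kernel's page cache rather than by engine-side locks. A minimal POSIX sketch (an illustration only, not the actual probing code):

Code: Select all

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

// Map one tablebase file read-only; all threads can read the returned pointer
// concurrently, and the OS page cache decides which pages stay in RAM.
const uint8_t* map_table(const char* path, size_t* size_out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                                   // the mapping survives the close()
    if (base == MAP_FAILED) return nullptr;
    *size_out = static_cast<size_t>(st.st_size);
    return static_cast<const uint8_t*>(base);
}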
Daniel Shawul
Posts: 4186
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Where are you Houdart?

Post by Daniel Shawul »

syzygy wrote:
Ryan Benitez wrote:The part that made me jump in was Houdart blaming Scorpio bitbases for his poor implementation. I think we all know that Houdart was just spreading BS.
Houdart's observation was that all TB solutions he had tried so far suffered from a synchronisation bottleneck when using many threads and that my solution does not, or to a much lesser degree.

I have no personal experience with any of the other solutions, but Houdart obviously has. Simply denying that a synchronisation bottleneck exists and calling him a liar is not exactly conducive to a healthy discussion.

The reason that my implementation does well with many threads is that it leaves all the complications to the OS. Current OSes are pretty good at handling many threads.
Where is the DATA? You were jumping around when Kai first produced data showing how badly Houdini played with Scorpio bitbases.
Why should we believe anything you say now about other bitbases, without any DATA? DATA? DATA? DATA? DATA? DATA? DATA? DATA? DATA?