Solving Chess Kickstarter

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Solving Chess Kickstarter

Post by syzygy »

Uri Blass wrote:The project is of course nonsense but I am not sure that it is impossible to solve chess.
The problem is the branching factor of chess. Your point 2) is valid, but the number of positions you do have to look at is still too large.

Point 1) has little to do with solving chess.

Point 3)... good luck with finding rules.
Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 10:01 pm
Location: Irvine, CA, USA

Re: Solving Chess Kickstarter

Post by Dirt »

Uri Blass wrote:The project is of course nonsense but I am not sure that it is impossible to solve chess.
I agree, but I don't think it can be solved this century. I doubt it ever will be solved, whether we can or not. Only so much effort should be put into solving a game.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Solving Chess Kickstarter

Post by Sven »

Dirt wrote:
Uri Blass wrote:The project is of course nonsense but I am not sure that it is impossible to solve chess.
I agree, but I don't think it can be solved this century. I doubt it ever will be solved, whether we can or not. Only so much effort should be put into solving a game.
Fully agreed as well. Let me add a few thoughts on it.

- IF the number of legal chess positions is on the order of 10^46, and

- IF only 1% of these were actually reachable from the initial position of standard chess (though I doubt that; I believe it is more like 10-20%), and

- IF we had K computers available (let's call them "cluster nodes" in this context) where each cluster node gets a different opening position to solve (comparable to "distributed perft") and has "a lot of cores", and

- IF each cluster node can perform a fast state-of-the-art optimized parallel alpha-beta search with, say, 100 Mnps and only needs the theoretical minimum of nodes given by alpha-beta as 2 * sqrt(N) (which is hard to achieve when no "unsafe" pruning or reduction methods are available as in this case), and

- IF we can find fantastic speedup methods on the global level (not local to a cluster node), like a world-wide distributed hash table or an efficient algorithm to reassign cluster nodes to remaining work if they finish earlier than others, that are good for a total speedup factor of, say, 100, and

- IF we have a perfectly working replacement concept to replace defective cluster nodes on the fly and immediately resume work without any significant loss of information, and

- IF there are enough money and other resources around to do all as written above,

THEN the total time T in years until we are done derives from the number of cluster nodes K as follows:

One year has about 3 * 10^7 seconds, so one cluster node with 100 Mnps would be able to search 3 * 10^15 nodes per year, which would cover (1.5 * 10^15)^2 = 2.25 * 10^30 positions per year based on the "perfect alpha-beta search" assumption. So for 10^44 reachable positions and with my mystical total speedup factor of 100 we would have something like

T = (10^42) / (K * 2.25 * 10^30) = 4.44 * 10^11 / K

so that we could expect a solution within T ≈ 80 years for K ≈ 5.6 * 10^9 cluster nodes.

With a lot of IF's. And even if someone comes up with another mystical speedup factor of 2500 then it's still 10^6 cluster nodes calculating day by day, 24/7 for 80 years, while also wasting LOTS of energy and money.

A more realistic formula with less "mystical" assumptions might be somewhere around this:

T = 10^14 / K
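For concreteness, the ballpark above can be sketched in a few lines of Python. All figures are the IF-assumptions from this post, not measured values, and the cluster size passed in at the end is just an illustrative choice:

```python
from math import sqrt

legal_positions = 1e46                 # assumed number of legal chess positions
reachable = legal_positions * 0.01     # assume only 1% reachable -> 1e44
global_speedup = 100                   # assumed "mystical" global speedup factor
nps = 100e6                            # 100 Mnps per cluster node
seconds_per_year = 3e7

# Perfect alpha-beta needs about 2 * sqrt(N) nodes to prove a tree of N
# positions, so searching n nodes per year "covers" (n/2)**2 positions.
nodes_per_year = nps * seconds_per_year            # 3e15
covered_per_year = (nodes_per_year / 2) ** 2       # 2.25e30

def years_to_solve(k_nodes):
    """Years until done with k_nodes cluster nodes running 24/7."""
    return reachable / global_speedup / (k_nodes * covered_per_year)

print(years_to_solve(5.6e9))   # roughly 80 years
```

Changing any single assumption by a couple of orders of magnitude moves the answer, which is the point: even wildly optimistic inputs leave billions of nodes grinding for decades.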

In my calculations above the major limiting parts seem to be
1) the required hardware and
2) the assumption that there is nothing better than "perfect alpha-beta" to address the problem.

Obviously we don't know today how 1) will evolve in the future, but we may well assume that hardware improvements are good for a small constant factor at most, like a factor of 2 every N years.

As to 2), as long as there is no fundamentally better approach than alpha-beta for searching a chess tree this will remain the key argument why solving chess most probably will not happen within the next few centuries at least.

Sven
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Solving Chess Kickstarter

Post by Uri Blass »

I think clearly less than 1% of the positions are reachable assuming perfect play by one side.

It is possible to prune obviously illogical moves in analysis, and there are many obviously illogical moves.

1.e4 Nf6 2.Qh5 is losing for white and I feel 100% sure about it.
No need to analyze to be sure that 2...Nxh5 is winning for black.
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Solving Chess Kickstarter

Post by Ferdy »

DustinYoder wrote:I wanted to announce that I have started a kick starter project to build and test a database structure to handle solving chess. I have discussed solving chess here and I think the community might be interested to read the details here https://www.kickstarter.com/projects/12 ... tep-1-of-3 I would request comments and advice on other outlets who might be interested in this project.
An existing system in the Aquarium chess GUI, called IDEA, has good potential for solving chess. That GUI can also show the end results of 7-men positions. As engines advance, the IDEA system advances with them.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Solving Chess Kickstarter

Post by syzygy »

Uri Blass wrote:I think clearly less than 1% of the positions are reachable assuming perfect play by one side.
It is far, far less than 1%! But don't you realise that "all positions" or "1% of all positions" makes no practical difference at all, since both numbers are way, way beyond what is possible?

With minimax you'd have to search, say, 30^120 positions.
With alpha-beta AND always selecting the best move first it is "just" in the order of 30^60 positions.
The difference is immense.
Still, 30^60 is way too much, and most of those 60-move lines would still need considerable effort to "solve". And transpositions aren't going to help too much here, as there really aren't many transpositions in chess that bring unrelated lines back together. Not that you could have a transposition table that could store all those positions... (and just imagine the massive graph history interaction problems that would arise when trying to prove chess a draw).
1.e4 Nf6 2.Qh5 is losing for white and I feel 100% sure about it.
No need to analyze to be sure that 2...Nxh5 is winning for black.
You are not talking about solving chess here.

But anyway, if you're talking about proving that white can win or at least draw, you can indeed alpha-beta prune 2.Qh5, and this is accounted for in the 30^60 number.

If you want to prove chess is a draw, you can probably alpha-beta prune 1...Nf6. Again, this is accounted for in the 30^60 number. But here you can already make a mistake. Maybe 1...e5 or so turns out to be losing, and then you have to do another 30^60 tree for 1...Nf6 or some other move. The 30^60 assumes you can just pick a "best" move in each node, which in practice is again completely impossible. So 30^60 is an unreachable lower bound.
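A quick sketch of where those two numbers come from, using the Knuth-Moore formula for the minimal alpha-beta tree (b^ceil(d/2) + b^floor(d/2) - 1 leaves at branching factor b and depth d); b = 30 and d = 120 plies are the illustrative figures from the post, not measured chess values:

```python
from math import log10

b, d = 30, 120                      # assumed branching factor and game length (plies)

minimax_leaves = b ** d             # full minimax: ~30^120
minimal_ab = b ** (d // 2) + b ** (d - d // 2) - 1   # perfect move ordering: ~2 * 30^60

print(f"minimax:    10^{log10(minimax_leaves):.0f} leaves")
print(f"alpha-beta: 10^{log10(minimal_ab):.0f} leaves")
```

So perfect alpha-beta roughly halves the exponent, from about 10^177 down to about 10^89, and the latter is still hopeless.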
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Solving Chess Kickstarter

Post by Sven »

Uri Blass wrote:I think clearly less than 1% of the positions are reachable assuming perfect play by one side.
This argument does not make sense in the context of "solving chess". If you know the "perfect" moves in advance then the game is already solved. Since it isn't, you can't exclude imperfect moves without knowing which ones are perfect.
Uri Blass wrote:It is possible to prune obviously illogical moves in analysis, and there are many obviously illogical moves.

1.e4 Nf6 2.Qh5 is losing for white and I feel 100% sure about it.
No need to analyze to be sure that 2...Nxh5 is winning for black.
Sure, in almost every chess position there are a couple of obvious blunders that could be skipped or at least "postponed". How many are there? How many legal moves after 1.e4 Nf6 are of that kind? I see only three: 2.Qh5, 2.Qg4, and 2.Ba6; that's 3 of 30, or 10%. But by how much does it reduce the overall search space? You don't know in advance, since you don't know how many of the positions reachable after moves like 1.e4 Nf6 2.Qh5 Nxh5 can only be reached through that move sequence. Ok: you slightly reduce the branching factor. But it does not help much in practice.
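To put a number on that, here is a minimal sketch. It assumes, very generously, that 3 of 30 moves could be pruned in every position at every ply; b = 30 and d = 120 plies are illustrative figures, not measured chess values:

```python
from math import log10

b, d = 30, 120

full = b ** d                # unpruned minimax tree
pruned = (b - 3) ** d        # skip 3 "obvious blunders" in every position

print(f"full tree:   10^{log10(full):.0f}")
print(f"pruned tree: 10^{log10(pruned):.0f}")
```

Even that blanket 10% reduction of the branching factor only takes the tree from about 10^177 to about 10^172 leaves: five orders of magnitude gained, roughly 170 still to go.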
Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: Solving Chess Kickstarter

Post by Rein Halbersma »

Sven Schüle wrote: As to 2), as long as there is no fundamentally better approach than alpha-beta for searching a chess tree this will remain the key argument why solving chess most probably will not happen within the next few centuries at least.

Sven
Here's another ballpark try.

The Chinook paper "Checkers is Solved" contains some interesting calculations. For checkers, the state space is 10^20 and the solution tree involved 10^14 positions. This was reached through a combination of a forward search and a backward search using endgame databases. The search tree space of checkers is around 10^40. Both in state space and in search tree space, chess is roughly the square of checkers.

Extrapolating this to chess, under the big assumption that the same scaling holds, a state space of 10^44 and a search tree space of 10^80 give a solution space of 10^28 positions. Clearly this requires both forward and backward searches.

So 10^14 more resources than the Chinook project could give a shot at solving chess. Extrapolating Moore's Law: 45 doublings so not before the end of this century, unless someone considers this a matter of national security or SETI like proportions ;-)
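The doubling count is easy to check. The 10^14 resource gap is the extrapolation above; the 2-year doubling period is the classic Moore's Law assumption, not a prediction:

```python
from math import log2

resource_gap = 1e14                  # chess solution effort vs. Chinook's
doublings = log2(resource_gap)       # hardware doublings needed to close the gap
years = doublings * 2                # at the classic ~2-year doubling period

print(f"{doublings:.1f} doublings, ~{years:.0f} years")
```

That comes out to about 46 doublings and roughly 90 years at the historical rate, which is where "not before the end of this century" comes from.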
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Solving Chess Kickstarter

Post by mvk »

Rein Halbersma wrote:So 10^14 more resources than the Chinook project could give a shot at solving chess. Extrapolating Moore's Law: 45 doublings so not before the end of this century, unless someone considers this a matter of national security or SETI like proportions ;-)
Brace for impact, because the slowdown has already started. We're going to have a really hard time seeing the next 1 or 2 doublings in a reasonable time (if at all, without immensely increased costs), let alone 45 of them.
duncan
Posts: 12038
Joined: Mon Jul 07, 2008 10:50 pm

Re: Solving Chess Kickstarter

Post by duncan »

mvk wrote:
Rein Halbersma wrote:So 10^14 more resources than the Chinook project could give a shot at solving chess. Extrapolating Moore's Law: 45 doublings so not before the end of this century, unless someone considers this a matter of national security or SETI like proportions ;-)
Brace for impact, because the slowdown has already started. We're going to have a really hard time seeing the next 1 or 2 doublings in a reasonable time (if at all, without immensely increased costs), let alone 45 of them.

http://www.extremetech.com/computing/17 ... eyond-14nm


Moore’s law is no longer expected to deliver improved transistor cost scaling at or below the 20nm node. It’s important to understand how that plays into concerns about EUV and 450mm wafers put together.

otoh

http://www.reuters.com/article/2014/05/ ... G520140505


the ministry said it promised Intel it would work to expedite bureaucratic procedures as much as possible to develop and promote employment in the country's south.

As part of the investment, Intel would receive a government grant that Israeli media estimated at about 1 billion shekels.

They also reported that Intel would add about 1,200 new jobs.

It is widely believed that Intel's investment is aimed at shifting to new 10 nanometre technology.

A company spokesman declined to comment but Intel in January had said it would decide on the location of a 10 nanometre plant this year. Israel was one of a number of countries competing to host the new plant.