One idea to solve chess?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: One idea to solve chess?

Post by syzygy »

AlvaroBegue wrote:
syzygy wrote:For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.
There are ways in which an NN can help in compressing EGTBs, without involving magic. For instance, try to train the NN to predict WDL, then store in a table the list of positions that the NN misclassifies. If the NN does a decent job, this could result in a much more compact description of the EGTB.
Yes, but I don't really believe in it, not even for WDL. I think by far the best predictor is the material balance combined with a low-depth search to weed out immediate tactics (*). The material-balance factor will be taken care of by the compression algorithm (if white wins almost all positions, then all those very long strings of white wins will compress to almost nothing). Immediate tactics can be recomputed cheaply during probing (compared to running an NN).

But even if an NN helped (e.g. to find some structure in the data after removing the immediate tactics), you would need to run it on every TB position, which is simply too expensive (and that ignores the time needed for training the NN in the first place). (The low-depth searches are much cheaper and don't even have to be run before compression if the TB generator somehow marks the positions affected by low-depth tactics during generation.)

Feel free to prove me wrong on 5-piece tables. :-)

(*) There are some other useful predictors, like pawn advancement and bishop color. They can be dealt with by picking a good order of the pieces when calculating the index, or by dividing the table into separately compressible parts.
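As an illustration of the exception-list scheme quoted above, here is a minimal Python sketch. The integer position indices, the predict_wdl placeholder and the toy data are all invented for the example and do not correspond to the actual Syzygy code or file format:

Code: Select all

# Minimal sketch of the "predictor plus exception table" idea: store only the
# positions the predictor gets wrong, and fall back to the predictor otherwise.
# Everything here (integer indices, predict_wdl, the toy data) is hypothetical.

def build_exceptions(true_wdl, predict_wdl):
    """Collect the positions that the predictor misclassifies."""
    return {i: v for i, v in enumerate(true_wdl) if predict_wdl(i) != v}

def probe(i, exceptions, predict_wdl):
    """Exact WDL probe: check the exception table first, else trust the predictor."""
    return exceptions.get(i, predict_wdl(i))

# Toy usage: a "predictor" that always answers "white wins" on a fake 8-entry table.
true_wdl = [1, 1, 0, 1, -1, 1, 1, 0]            # made-up WDL values (+1 / 0 / -1)
predict_wdl = lambda i: 1                        # trivially cheap predictor
exceptions = build_exceptions(true_wdl, predict_wdl)   # keeps only indices 2, 4, 7
assert all(probe(i, exceptions, predict_wdl) == v for i, v in enumerate(true_wdl))

Whether something like this ever pays off is exactly the question argued here: the stored exceptions plus the predictor have to end up smaller, and cheap enough to evaluate at probe time, compared with what a generic compressor already squeezes out of the raw data.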
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: One idea to solve chess?

Post by Michael Sherwin »

mario carbonell wrote:Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now work in the internet sector.

The recent games by DeepMind's AlphaZero vs Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
The perfect solution to solving chess is of course storing the entire chess tree. But that is not only impractical, it is impossible. However, anything short of that is not really solving chess, since there is no way to prove the accuracy of the solution. So any attempt to solve chess has, by its nature, to be defined as something less than a true solution. And how is a consensus on that definition going to be reached? Impossible!

So we are talking about a partial solution, if we are really talking about a solution at all. Memory, while plentiful in this modern era, is not infinite. Neural networks are memory intensive, and given a specified amount of memory to use, they will never use that memory as efficiently as traditional algorithmic methods.

So any NN-based solution will have to be considered merely "good enough" for whatever goal is set by the definition agreed upon by whoever subscribes to that definition.

Storing the tree, even if it is not complete, is the best solution, whether an NN or more traditional methods are used. Using an NN to decide which moves to keep in the tree, and discarding the moves that can be perfectly refuted by the chess-playing algorithm, may be a superior way to go.
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through
Sergei S. Markoff
Posts: 227
Joined: Mon Sep 12, 2011 11:27 pm
Location: Moscow, Russia

Re: One idea to solve chess?

Post by Sergei S. Markoff »

mario carbonell wrote: Wed Dec 13, 2017 8:12 pm Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now work in the internet sector.
It looks like I came up with the same idea :)

See viewtopic.php?f=7&t=70723

Everything depends on the accuracy of the network; at least I think it is worth trying. We will not solve chess, of course, but I hope this approach can drastically improve the near-endgame play of NN-based engines.
The Force Be With You!
DustyMonkey
Posts: 61
Joined: Wed Feb 19, 2014 10:11 pm

Re: One idea to solve chess?

Post by DustyMonkey »

EGTBs are desirable because they are FAST. The classic time vs. space tradeoff.

NNs are not even close to optimal models of any domain. Not even close; this cannot be stressed enough. You would be fighting against information theory here: a generalized model will never be superior in time or space to a specific model.

Now that that is on the table, "to get extra TB compression" isn't one of the reasons to consider training an NN on endgames.

To get a feel for why this is, construct one of the simplest NNs that solves a "hard" (non-linear) problem: the lowly XOR operation, which in lookup-table form is exactly 4 bits.

The structure (not the weights' specific values, nor which activation function you chose) of the XOR NN you just constructed is a general solver that can handle any 2-input, 1-output problem, and that is exactly its problem: it can solve any of them. This ability to do it all must manifest itself as a space cost. Claude Shannon does not buy free lunches.
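For anyone who wants to actually do that exercise, here is one possible construction in Python/NumPy: a 2-2-1 network with step activations and hand-picked weights (just one of many ways to wire up XOR, not a canonical one), with the parameter count spelled out against the 4-bit truth table:

Code: Select all

import numpy as np

# A 2-2-1 network with step activations and hand-picked weights that computes XOR.
# This is only one of many possible constructions.
step = lambda x: (x > 0).astype(int)

W1 = np.array([[ 1,  1],      # hidden unit 1: fires on x1 OR x2
               [-1, -1]])     # hidden unit 2: fires on NOT (x1 AND x2)
b1 = np.array([-0.5, 1.5])
W2 = np.array([1, 1])         # output: fires when both hidden units fire
b2 = -1.5

def xor_net(x1, x2):
    h = step(W1 @ np.array([x1, x2]) + b1)
    return int(step(W2 @ h + b2))

for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)

# The lookup table needs exactly 4 bits; this tiny net already carries 9 numbers
# (6 weights + 3 biases), and the very same structure could have been wired up
# as AND, OR, NAND, ... -- that generality is the space cost being described.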
Sergei S. Markoff
Posts: 227
Joined: Mon Sep 12, 2011 11:27 pm
Location: Moscow, Russia

Re: One idea to solve chess?

Post by Sergei S. Markoff »

DustyMonkey wrote: Mon May 13, 2019 9:49 am EGTBs are desirable because they are FAST. The classic time vs. space tradeoff.

NNs are not even close to optimal models of any domain. Not even close; this cannot be stressed enough. You would be fighting against information theory here: a generalized model will never be superior in time or space to a specific model.

Now that that is on the table, "to get extra TB compression" isn't one of the reasons to consider training an NN on endgames.

To get a feel for why this is, construct one of the simplest NNs that solves a "hard" (non-linear) problem: the lowly XOR operation, which in lookup-table form is exactly 4 bits.

The structure (not the weights' specific values, nor which activation function you chose) of the XOR NN you just constructed is a general solver that can handle any 2-input, 1-output problem, and that is exactly its problem: it can solve any of them. This ability to do it all must manifest itself as a space cost. Claude Shannon does not buy free lunches.
I'm not trying to get a free lunch or to fight information theory in any other way. This is only about compression, sometimes lossy. It will not solve an EXPTIME task in polynomial time, but it could provide a better time/precision tradeoff than current approaches.
The Force Be With You!
Leo
Posts: 1078
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: One idea to solve chess?

Post by Leo »

"Unfortunately the size of 8-man tablebases will be 100 times larger than the size of 7-man tablebases. To fully compute them, one will need about 10 PB (10,000 TB) of disk space and 50 TB of RAM. Only the top 10 supercomputers can solve the 8-man problem in 2014. The first 1000-move mate is unlikely to be found until 2020 when a part of a TOP100 supercomputer may be allowed to be used for solving this task."
Advanced Micro Devices fan.
chrisw
Posts: 4313
Joined: Tue Apr 03, 2012 4:28 pm

Re: One idea to solve chess?

Post by chrisw »

syzygy wrote: Wed Dec 13, 2017 10:26 pm
mario carbonell wrote:
CheckersGuy wrote:Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into an NN and get the same result? That would be an interesting experiment: first, whether it is possible at all, and second, the reduction in size.
A neural network is not a piece of magic.
There probably is a magic neural network for 8-, 9-, 10-men and so on EGTBs, with the weights set to just the right magic values. There might even be an infinite number of such networks. The problem is that (most likely) none of them can be found by the error back-propagation algorithm.
You can make it learn any position, but you cannot stop it from forgetting the positions it learned before.

With 3 pieces you can get away with general rules if you don't care about exact distances, so that may work fine. With a couple more pieces there is simply no compact set of rules that could describe the outcome of all positions, let alone rules that you could expect an NN to learn.

For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: One idea to solve chess?

Post by Uri Blass »

mario carbonell wrote: Wed Dec 13, 2017 8:12 pm Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now work in the internet sector.

The recent games by DeepMind's AlphaZero vs Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00 people have no idea whether they found a forced draw or simply a line that they evaluate as equal.

Solving means giving at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move from S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have simply solved the game and it is a draw because both 2 and 4 are correct (the game may still continue, because the opponent can blunder, so saying it is a draw does not end the game).
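To make the reporting rule concrete, here is a rough sketch; the SolveStatus record and the solver that would fill it in are hypothetical, since no current engine exposes this kind of information:

Code: Select all

from dataclasses import dataclass, field
from typing import List

# Hypothetical container for the four verdicts described above.
# Nothing here corresponds to an existing engine or protocol.
@dataclass
class SolveStatus:
    we_mate: bool = False               # 1) side to move has a forced mate
    our_draw_moves: List[str] = field(default_factory=list)  # 2) the set S, if non-empty
    opponent_mates: bool = False        # 3) opponent has a forced mate
    opponent_draws: bool = False        # 4) opponent has a forced draw

def choose_move(status: SolveStatus, best_search_move: str) -> str:
    """If both 2) and 4) hold, the game is solved as a draw: report it,
    play any move from S immediately, and save time on the clock."""
    if status.we_mate:
        return best_search_move                  # convert the forced win as usual
    if status.our_draw_moves and status.opponent_draws:
        print("info string game solved: draw")   # the game itself still continues
        return status.our_draw_moves[0]
    return best_search_move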
chrisw
Posts: 4313
Joined: Tue Apr 03, 2012 4:28 pm

Re: One idea to solve chess?

Post by chrisw »

Uri Blass wrote: Fri May 17, 2019 7:37 pm
mario carbonell wrote: Wed Dec 13, 2017 8:12 pm Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now work in the internet sector.

The recent games by DeepMind's AlphaZero vs Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00 people have no idea whether they found a forced draw or simply a line that they evaluate as equal.
That's easily dealt with: use bit zero of the score to distinguish between a real (proven) draw and mere evaluation equality. But while the evaluation at the leaf end of the PV is presented by the search as the root position's evaluation, it really isn't, so it's not really very useful anyway.

Solving means giving at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move from S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have simply solved the game and it is a draw because both 2 and 4 are correct (the game may still continue, because the opponent can blunder, so saying it is a draw does not end the game).
If one side can force a win, and knows it, then it's a win, whatever the opponent decides.
If both sides can force a draw, and know it, whether it's a draw or not depends on the players' intentions and mutual agreement. The engines are not in cooperative communication with each other, so 2 and 4 together do not by themselves add up to a draw.
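The bit-zero idea mentioned above could look roughly like this (a hypothetical encoding in Python, assuming internal scores in centipawns; it is not how any particular engine stores its scores):

Code: Select all

# Hypothetical score encoding: shift the centipawn value left by one bit and
# use bit 0 to flag "this 0.00 is a proven draw", as suggested above.
def encode(cp: int, proven_draw: bool = False) -> int:
    return (cp << 1) | int(proven_draw)

def decode(score: int):
    return score >> 1, bool(score & 1)

assert decode(encode(0, proven_draw=True)) == (0, True)    # a real, proven draw
assert decode(encode(0, proven_draw=False)) == (0, False)  # merely "evaluated as equal"
assert decode(encode(-37)) == (-37, False)                 # negative scores survive the round trip

Whether that flag is worth much is a separate question, per the caveat above about the PV-leaf evaluation.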
Uri Blass
Posts: 10267
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: One idea to solve chess?

Post by Uri Blass »

chrisw wrote: Fri May 17, 2019 10:05 pm
Uri Blass wrote: Fri May 17, 2019 7:37 pm
mario carbonell wrote: Wed Dec 13, 2017 8:12 pm Hi all :D,

Let me introduce myself: I wrote a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now work in the internet sector.

The recent games by DeepMind's AlphaZero vs Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6- or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we are no longer restricted by the size of the tablebases and can obtain the same perfect result.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows in the future?

What do you think?

Regards,
I think that if you want to solve chess then you should try to solve it.
Chess engines of today do not try to solve the game, and when they say 0.00 people have no idea whether they found a forced draw or simply a line that they evaluate as equal.
That's easily dealt with: use bit zero of the score to distinguish between a real (proven) draw and mere evaluation equality. But while the evaluation at the leaf end of the PV is presented by the search as the root position's evaluation, it really isn't, so it's not really very useful anyway.

Solving means giving at least one of the following evaluations:
1) The side to move has a forced mate.
2) The side to move has a forced draw by a non-empty set of moves S.
3) The opponent has a forced mate.
4) The opponent has a forced draw.
I wrote "at least" because both 2 and 4 may be correct.

If both 2 and 4 are correct, the engine should choose a move from S immediately and save time for the rest of the game.

Note that I would like to see engines report during the game that they have simply solved the game and it is a draw because both 2 and 4 are correct (the game may still continue, because the opponent can blunder, so saying it is a draw does not end the game).
If one side can force a win, and knows it, then it's a win, whatever the opponent decides.
If both sides can force a draw, and know it, whether it's a draw or not depends on the players' intentions and mutual agreement. The engines are not in cooperative communication with each other, so 2 and 4 together do not by themselves add up to a draw.
You are correct that even if both sides know that both sides can force a draw, it may not be a draw, because a player may play a move that allows the opponent either to force a draw or to try to win, with the intention that the opponent will try to win and lose.

In spite of this, I think that deciding to play a drawing move when both 2 and 4 are correct is the simplest solution.
The advantages are:
1) Saving time on the clock in case the opponent blunders later, so you may have enough time to find a win after the blunder.
2) Not taking risks by playing a move with which you do not know that you can force a draw and which may even be a losing move.

The only possible disadvantage is that there may be a practically better move that sets a trap for the opponent, a move you may miss unless your search in some way considers the probability of the opponent going wrong.

For most engines this disadvantage is not relevant.