One idea to solve chess?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mario carbonell
Posts: 20
Joined: Tue Dec 12, 2017 7:14 pm

One idea to solve chess?

Post by mario carbonell »

Hi all :D,

Let me introduce myself: I made a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now I work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6-, or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we would no longer be restricted by the size of the tablebases and could still obtain the same perfect results.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows what will be possible in the future?

What do you think?

Regards,
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: One idea to solve chess?

Post by CheckersGuy »

mario carbonell wrote:Hi all :D,

Let me introduce myself: I made a chess program in the 90s and organized a Spanish Computer Chess Championship in Valencia, my city. I worked as an AI programmer in the videogame industry for some years, and now I work in the internet sector.

The recent games between DeepMind's AlphaZero and Stockfish have renewed my interest in computer chess and machine learning. I wanted to share an idea with you:

Suppose we could train a neural network with enough capacity to predict with 100% accuracy all 7-, 6-, or 5-men endgame tablebases.

That could be an interesting experiment in itself: can we build a neural network that acts as a compression mechanism for a chess tablebase and gives exactly the same results?

If the answer is yes, we would no longer be restricted by the size of the tablebases and could still obtain the same perfect results.

If we could add more men to the tablebases, and compress the perfect data in a neural network, in theory we could do this until we arrive at the final 32-men tablebase.

Sure, this is out of reach of today's technology, but who knows what will be possible in the future?

What do you think?

Regards,
Suppose you had a neural network that plays perfectly in 7 (or fewer) piece endgames. How does one obtain a neural network that plays perfectly in endgames with more than 7 pieces?
How does one verify that the neural network did indeed play perfectly? The only thing I can think of is having the 8-men (or larger) endgame database in the first place, which would defeat its purpose.
User avatar
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: One idea to solve chess?

Post by Evert »

I guess the first thing to qualify is what you mean by 100% accuracy of 7/6/5 men positions. Do you mean the NN reproduces optimal play (according to some metric), do you mean that you get the correct game-theoretic result when playing out the tablebase positions with the NN, or do you mean that the NN correctly classifies the positions?

I think the latter is unlikely to reduce storage space, as is the first. The second might work, but doesn't necessarily tell you much.

The question then is where you'd get the data to train the NN for 8-men positions, and last (but not least) where you'd get the time required to train the NN.
Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 10:01 pm
Location: Irvine, CA, USA

Re: One idea to solve chess?

Post by Dirt »

Neural nets work well to give very good approximate answers to problems. They are bad at giving exact answers, like tablebases do.
Deasil is the right way to go.
mario carbonell
Posts: 20
Joined: Tue Dec 12, 2017 7:14 pm

Re: One idea to solve chess?

Post by mario carbonell »

CheckersGuy wrote: Suppose you had a neural network that plays perfectly in 7 (or fewer) piece endgames. How does one obtain a neural network that plays perfectly in endgames with more than 7 pieces?
How does one verify that the neural network did indeed play perfectly? The only thing I can think of is having the 8-men (or larger) endgame database in the first place, which would defeat its purpose.
The trick is in how the tablebases are constructed now, which is by retrograde analysis. If you have a 7-men oracle, you can obtain the solution of some 8-men positions by the same technique used now, and by continuing the retrograde analysis add more and more positions to train the network. So in theory you don't need the complete 8-men tablebase to train the network, because you obtain the positions and their solutions starting from the 7-men data.
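
A rough sketch of that bootstrapping step, under my own assumptions: probe_7men_oracle() stands in for the trained 7-men network (or a real tablebase), values are WDL from the side to move, and the 50-move rule is ignored. An 8-men position gets an exact label whenever every line of its complete move tree drops into the oracle (or reaches mate/stalemate) within some depth; those labelled positions become training data for the next network.

Code: Select all

# Sketch of labelling some 8-men positions from a 7-men oracle.
# probe_7men_oracle(board) is a hypothetical stand-in for the trained 7-men
# network (or a real tablebase); it returns 1 = win, 0 = draw, -1 = loss,
# always from the point of view of the side to move. The 50-move rule is
# ignored to keep the sketch short.
import chess

def solve_wdl(board, probe_7men_oracle, depth_limit=32):
    """Exact WDL for `board`, or None if some line does not reach the oracle
    (or another terminal position) within the depth limit."""
    if len(board.piece_map()) <= 7:
        return probe_7men_oracle(board)
    if board.is_checkmate():
        return -1
    if board.is_stalemate() or board.is_insufficient_material():
        return 0
    if depth_limit == 0:
        return None
    best = None
    for move in board.legal_moves:
        board.push(move)
        child = solve_wdl(board, probe_7men_oracle, depth_limit - 1)
        board.pop()
        if child is None:
            return None          # one unresolved line spoils the exact label
        value = -child           # a win for the opponent is a loss for us
        best = value if best is None else max(best, value)
    return best

Only positions for which this returns a value (forced conversion into 7 men within the depth limit) would go into the 8-men training set; the rest would still need a proper retrograde pass.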

Evert wrote:I guess the first thing to qualify is what you mean by 100% accuracy of 7/6/5 men positions. Do you mean the NN reproduces optimal play (according to some metric), do you mean that you get the correct game-theoretic result when playing out the tablebase positions with the NN, or do you mean that the NN correctly classifies the positions?

I think the latter is unlikely to reduce storage space, as is the first. The second might work, but doesn't necessarily tell you much.

The question then is where you'd get the data to train the NN for 8-men positions, and last (but not least) where you'd get the time required to train the NN.
I would dare to say a network that could give you the DTM, the distance to mate, for every position. Yes, I know that sounds unrealistic, but who knows how much "IQ" such a network could have?
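
Just to illustrate what "exact DTM" would demand of such a network (everything here is hypothetical: encode() is whatever position encoding the network uses, and dtm_true() is the tablebase being imitated): a real-valued output only counts if it rounds to the correct distance for every single position.

Code: Select all

# Hypothetical check of DTM exactness: the network's real-valued output must
# round to the true distance-to-mate for every position in the tablebase.
def dtm_is_exact(model, positions, encode, dtm_true):
    for board in positions:
        predicted = int(round(float(model(encode(board)))))
        if predicted != dtm_true(board):
            return False
    return True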

For the 8-men positions, the same as before: start with the 7-men data and use retrograde analysis.

As for the time needed: a lot, of course.

I wanted to share the idea as food for thought, not necessarily implying that it is possible, but as an interesting theoretical idea to think about.

Dirt wrote:Neural nets work well to give very good approximate answers to problems. They are bad at giving exact answers, like tablebases do.
That is also interesting. Can we build a network that gives exact answers for chess positions? Could a network that gives approximate answers instead of exact ones still be useful?
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: One idea to solve chess?

Post by CheckersGuy »

Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
mario carbonell
Posts: 20
Joined: Tue Dec 12, 2017 7:14 pm

Re: One idea to solve chess?

Post by mario carbonell »

CheckersGuy wrote:Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same results? That would be an interesting experiment: first, whether it is possible at all, and second, what the reduction in size would be.
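
For what it's worth, the verification half of that 3-men experiment is easy to set up with python-chess, which can both enumerate the positions and probe real Syzygy files; only model_predict_wdl() (a wrapper around whatever network was trained) is hypothetical here.

Code: Select all

# Enumerate every legal KRK position and compare a network's WDL predictions
# against a real small tablebase. Requires python-chess and a directory of
# Syzygy files; model_predict_wdl(board) is a hypothetical wrapper around the
# trained network, returning the same -2..2 WDL scale that probe_wdl() uses.
import chess
import chess.syzygy

def all_krk_positions():
    for wk in chess.SQUARES:
        for wr in chess.SQUARES:
            for bk in chess.SQUARES:
                if len({wk, wr, bk}) != 3:
                    continue
                for turn in (chess.WHITE, chess.BLACK):
                    board = chess.Board(None)   # start from an empty board
                    board.set_piece_at(wk, chess.Piece(chess.KING, chess.WHITE))
                    board.set_piece_at(wr, chess.Piece(chess.ROOK, chess.WHITE))
                    board.set_piece_at(bk, chess.Piece(chess.KING, chess.BLACK))
                    board.turn = turn
                    if board.is_valid():        # kings not adjacent, side not to move not in check
                        yield board

def verify_against_syzygy(tb_path, model_predict_wdl):
    """Return the FENs of all positions where the network and the TB disagree."""
    mismatches = []
    with chess.syzygy.open_tablebase(tb_path) as tb:
        for board in all_krk_positions():
            if model_predict_wdl(board) != tb.probe_wdl(board):
                mismatches.append(board.fen())
    return mismatches

If the mismatch list comes back empty, the interesting number is simply the size of the network's weights versus the size of the (already compressed) KRK file.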
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: One idea to solve chess?

Post by CheckersGuy »

mario carbonell wrote:
CheckersGuy wrote:Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same results? That would be an interesting experiment: first, whether it is possible at all, and second, what the reduction in size would be.
That would certainly be very interesting, as neural networks do get used in some compression algorithms. However, those compression algorithms tend to be much slower, and I guess that even if it were possible, Stockfish and other engines would not use this new compression scheme during search.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: One idea to solve chess?

Post by syzygy »

mario carbonell wrote:
CheckersGuy wrote:Let's say it was possible. I doubt that one could store the number of weights needed for that network, because I don't think current networks are large enough to give a perfect answer to any chess position.
Has anyone tried to compress the data of a 3-men tablebase into a NN and get the same results? That would be an interesting experiment: first, whether it is possible at all, and second, what the reduction in size would be.
A neural network is not a piece of magic. You can make it learn any position, but you cannot stop it from forgetting the positions it learned before.

With 3 pieces you can get away with general rules if you don't care about exact distances, so that may work fine. With a couple more pieces there is simply no compact set of rules that could describe the outcome of all positions, let alone rules that you could expect an NN to learn.

For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.
AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: One idea to solve chess?

Post by AlvaroBegue »

syzygy wrote:For best TB compression, just throw away as much information as you are willing to recompute during probing, then compress what is left with a suitable compressor. There is no way that an NN could "learn" that data more efficiently. An NN does not involve magic.
There are ways in which a NN can help in compressing EGTBs, without involving magic. For instance, try to train the NN to predict WDL. Then store in a table the list of positions that are misclassified by the NN. If the NN does a decent job, this could result in a much more compact description of the EGTB.
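
A minimal sketch of that exception-list scheme, assuming a predict_wdl() network, a true_wdl() oracle used only while building the table, and some canonical key for positions (all hypothetical; a real EGTB would use its own symmetry-reduced index as the key):

Code: Select all

# Sketch of "network + exception list" compression: the network answers every
# probe, and only the positions it misclassifies are stored explicitly.
# predict_wdl(), true_wdl() and the position iterator are all hypothetical.

def position_key(board):
    # Crude stand-in for a canonical, symmetry-reduced tablebase index.
    return board.fen()

def build_exception_table(positions, predict_wdl, true_wdl):
    """Store only the positions the network gets wrong."""
    return {position_key(b): true_wdl(b)
            for b in positions
            if predict_wdl(b) != true_wdl(b)}

def probe(board, predict_wdl, exceptions):
    """Exact WDL: the network's answer unless the position is a known exception."""
    return exceptions.get(position_key(board), predict_wdl(board))

The compressed size is then roughly the weights plus one stored entry per misclassified position, so the scheme only pays off if the error rate is very low.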

I can think of another method that is mathematically elegant and should compress a lot, but it's impractical: train the network as a classifier using cross-entropy as the loss function, then use arithmetic coding of the true WDL values given the probability model obtained by evaluating the NN at each position. The impracticality comes from the fact that you would need to evaluate the NN on many positions to extract a single value. There might be some small tweak that could make this practical, but I can't think of anything right now.
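
The expected size of that arithmetic-coded stream can be estimated without writing the coder itself, since with an ideal coder it is just the cross-entropy of the true WDL labels under the network's model; probs() below is a hypothetical wrapper returning the network's (loss, draw, win) probabilities for a position, and true_wdl() the tablebase value.

Code: Select all

# Shannon bound on the size of the arithmetically coded WDL stream: the sum of
# -log2 of the model probability assigned to each true outcome. This excludes
# the size of the network itself, which the decoder also needs.
import math

def ideal_compressed_bits(positions, probs, true_wdl):
    total_bits = 0.0
    for board in positions:
        p = probs(board)[true_wdl(board)]        # model probability of the true class
        total_bits += -math.log2(max(p, 1e-12))  # clamp to avoid log(0)
    return total_bits

Decoding would still require evaluating the network on every position in the same fixed order, which is exactly the impracticality mentioned above.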