Google's AlphaGo team has been working on chess

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

hgm
Posts: 22337
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Google's AlphaGo team has been working on chess

Post by hgm » Tue Dec 12, 2017 10:54 am

Michel wrote:This is a misinterpretation of the meaning of "linear" in this context. Linear means that the evaluation function is linear in the bonus/penalty for such a feature. This is the case in almost all evaluation functions used by chess engines.
So what exactly is the misinterpretation? I in no way specified how the information about whether such features are present should be used, and in particular not that it should define an additive bonus/penalty. I proposed to feed it to a NN, which is as different from a conventional, linear Chess-engine evaluation as day and night. The NN would of course use recognized pin patterns to eliminate the pinned pieces from its SEE calculations, etc. Something no Chess engine seems to do.

Re: Google's AlphaGo team has been working on chess

Post by hgm » Tue Dec 12, 2017 10:59 am

Rein Halbersma wrote:The main question is whether it is realistic to assume that it is possible to hand-code (or even automate) a series of chess-knowledge intensive patterns and have a linear combination of those patterns cover 99% of the information in the NN.
I don't even see why this should be a question at all, as the answer so obviously seems to be a big fat "no". It also seems totally irrelevant. The whole idea of a linear combination makes it an immediate bust: there is nothing linear about Chess tactics. It is all Boolean logic.

Joost Buijs
Posts: 675
Joined: Thu Jul 16, 2009 8:47 am
Location: Almere, The Netherlands

Re: Google's AlphaGo team has been working on chess

Post by Joost Buijs » Tue Dec 12, 2017 1:31 pm

Rein Halbersma wrote: However, in draughts Fabien Letouzey made a very nice contribution by scanning the board (hence his program's name Scan) in 4 x 4 regions (and some larger patterns) and computed the indices of each pattern into a large lookup-table of weights. These weights were trained using logistic regression. So apart from the logistic mapping at the end, a linear combination of patterns and weights. The question is whether something like that works in chess.
Some time ago I started working on a new evaluation function for my chess engine; I decided to use something similar for pawn evaluation, with 3 x 6 patterns. It is still in its infancy, but the first results look promising.

Another advantage is that with BMI2 the pawn evaluator runs a lot faster than before. As it seems now, I can remove the pawn-evaluation cache, which will make the program cleaner as well.

I don't know when I will have conclusive results; it will take a few months. Right now I'm busy restructuring the whole engine, because it became a terrible mess after all the things I tried a couple of years ago.
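The pattern-indexed scheme Joost and Rein describe can be sketched roughly as follows. A fixed region of the pawn bitboard is compressed into a dense index (which is what BMI2's PEXT instruction does in hardware) and used to look up a trained weight. All names here are illustrative, and the weight table is a zero-filled placeholder standing in for trained values:

```python
def pext(bits: int, mask: int) -> int:
    """Software emulation of the BMI2 PEXT instruction:
    gather the bits of `bits` selected by `mask` into the low bits."""
    result, out_pos = 0, 0
    while mask:
        lowest = mask & -mask          # lowest set bit of the mask
        if bits & lowest:
            result |= 1 << out_pos
        out_pos += 1
        mask &= mask - 1               # clear that mask bit
    return result

def region_mask(file0: int, rank0: int) -> int:
    """Bitmask of a 3-file x 6-rank region on an 8x8 board (a1 = bit 0)."""
    mask = 0
    for r in range(rank0, rank0 + 6):
        for f in range(file0, file0 + 3):
            mask |= 1 << (8 * r + f)
    return mask

# A 3x6 region covers 18 squares, so there are 2^18 possible pawn
# patterns; in a real engine the table would hold trained weights.
MASK = region_mask(0, 1)               # files a-c, ranks 2-7
weights = [0] * (1 << 18)              # placeholder weight table

def pawn_region_score(pawn_bb: int) -> int:
    return weights[pext(pawn_bb, MASK)]
```

With the hardware `_pext_u64` intrinsic instead of the software loop, the whole region lookup is a couple of instructions, which is presumably why the BMI2 version runs so much faster.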

Rein Halbersma
Posts: 672
Joined: Tue May 22, 2007 9:13 am

Re: Google's AlphaGo team has been working on chess

Post by Rein Halbersma » Tue Dec 12, 2017 1:41 pm

hgm wrote:
Rein Halbersma wrote:The main question is whether it is realistic to assume that it is possible to hand-code (or even automate) a series of chess-knowledge intensive patterns and have a linear combination of those patterns cover 99% of the information in the NN.
I don't even see why this should be a question at all, as the answer so obviously seems to be a big fat "no". It also seems totally irrelevant. The whole idea of a linear combination makes it an immediate bust: there is nothing linear about Chess tactics. It is all Boolean logic.
OK, so do you think that a Boolean logic function (e.g. a decision tree) can approximate a neural network chess evaluation function without loss in accuracy?

abulmo2
Posts: 131
Joined: Fri Dec 16, 2016 10:04 am

Re: Google's AlphaGo team has been working on chess

Post by abulmo2 » Tue Dec 12, 2017 2:17 pm

hgm wrote:There is nothing linear about Chess tactics. It is all Boolean logic.
You can build your eval from Boolean logic like this:
eval = sum_i x_i * w_i, with x_i in {0,1} and w_i the weight of feature i.

I suppose that most current evaluation functions in Chess can be decomposed to fit the above formula, which is obviously linear.
Richard Delorme

Re: Google's AlphaGo team has been working on chess

Post by hgm » Tue Dec 12, 2017 2:28 pm

As multipliers, adders, memories etc. can all be built from logic gates, the answer is obviously "yes". The AlphaZero NN is a network of logic gates, because the TPUs are nothing but networks of logic gates.

It might still not be the right question, though. The question is how much simpler such a decision tree could be.

Re: Google's AlphaGo team has been working on chess

Post by Rein Halbersma » Tue Dec 12, 2017 2:48 pm

I am not an expert in either NNs or decision trees. But from what I've read, NNs are much better at discovering low-dimensional manifolds (say, winning chess positions) in high-dimensional data (the space of all chess positions) than other methods, without falling into the traps of over-fitting and sensitivity to input data that e.g. decision trees are prone to. It's not how well a model can fit the current data, but how well it will be able to fit future data (given that it fits the current data) that is important for a good eval.

Re: Google's AlphaGo team has been working on chess

Post by hgm » Tue Dec 12, 2017 4:58 pm

Well, "better" is a relative concept. AlphaZero does 1000 times fewer nps than Stockfish, on vastly more powerful hardware. Even if you forget about the hardware difference: suppose we played AlphaZero's NN against Stockfish limited to searching 1000 nodes/move. Who do you think would win that? My money would be on Stockfish. The paper already confirms that: at faster TC the AlphaZero rating is already below that of Stockfish, and drops far faster.

So it seems alpha-beta search with a hand-coded static evaluation is far better at identifying won Chess positions than the NN.

It is quite possible that in a simple and well-understood game like Chess, a hand-coded move picker, when properly tuned, would also outperform the trained NN. The point is that conventional engines do not contain anything like a tuned move picker, and although millions of games go into tuning Stockfish's evaluation, move picking receives zero attention. The main lesson to draw from this AlphaZero experiment is that a highly selective search can be competitive against brute force. So move selection deserves way more attention than we give it in engine development.

Note that AlphaZero's NN is trained to predict the visiting frequencies the nodes will get in an MCTS. That doesn't necessarily correspond to the evaluation score of the moves. Moves with complex tactics will need a lot of visits before they are resolved one way or the other, because the MCTS will need to construct a large QS tree. And it is very possible that the final outcome is that the move is bad, but you could not have seen that with just a few visits.
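The training target hgm describes can be sketched as follows: the visit counts N(a) from a search are normalized into a policy pi(a) proportional to N(a)^(1/tau), so with tau = 1 the NN is trained toward the raw visit frequencies. The moves and counts below are made up for illustration:

```python
def visit_policy(visits: dict, tau: float = 1.0) -> dict:
    """Turn MCTS visit counts into a normalized policy target:
    pi(a) proportional to N(a)^(1/tau)."""
    powered = {m: n ** (1.0 / tau) for m, n in visits.items()}
    total = sum(powered.values())
    return {m: p / total for m, p in powered.items()}

# Illustrative visit counts from one search; a tactically complex
# move may soak up many visits before being resolved either way.
visits = {"e2e4": 600, "d2d4": 300, "g2g4": 100}
policy = visit_policy(visits)
print(policy["e2e4"])  # 0.6
```

A lower tau sharpens the distribution toward the most-visited move, which is why visit counts and evaluation scores need not agree: a heavily visited move can still turn out bad once its tactics are resolved.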

Michel
Posts: 1961
Joined: Sun Sep 28, 2008 11:50 pm

Re: Google's AlphaGo team has been working on chess

Post by Michel » Tue Dec 12, 2017 9:18 pm

although millions of games go into tuning Stockfish' evaluation, move picking receives zero attention.
That's not literally true. Stockfish contains a lot of history mechanisms to help it with move selection. These have brought SF a large amount of Elo.

SF's "policy" is generated dynamically instead of statically. I am not saying this is better or worse.
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.

Re: Google's AlphaGo team has been working on chess

Post by hgm » Tue Dec 12, 2017 9:32 pm

Apparently it is worse. Stockfish needs a thousand times as many nodes in its tree, and then it still loses.
