Lately I've done some work with ANNs, and I wonder whether they can be (or actually are being) used in chess engines. I'm not referring here to an ANN that plays chess, but rather to using the pattern-recognition abilities of an ANN as part of a "normal" engine. For example:
Can an ANN be trained to:
 evaluate pawn structures? (passed, doubled, etc. pawns)
 decide we're either in the opening, middle game or endgame?
 detect weak/strong squares?
 evaluate the king safety?
 ...
Of course, these ideas (or any others) only make sense if these tasks can be performed both accurately and quickly.
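As a rough illustration of the first idea, here is a minimal sketch of a tiny multilayer perceptron that maps hand-extracted pawn-structure features (counts of passed, doubled, isolated pawns, and so on) to a centipawn-like score. The feature choice, network shape, and weights are all hypothetical; a real engine would train the weights offline on labeled positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden):
    """Random, untrained weights; a real engine would fit these offline."""
    return {
        "W1": rng.standard_normal((n_hidden, n_in)) * 0.1,
        "b1": np.zeros(n_hidden),
        "w2": rng.standard_normal(n_hidden) * 0.1,
    }

def evaluate(net, features):
    """One hidden tanh layer, linear output: a score from White's view."""
    hidden = np.tanh(net["W1"] @ features + net["b1"])
    return float(net["w2"] @ hidden)

# Hypothetical feature vector: White-minus-Black counts of
# [passed pawns, doubled pawns, isolated pawns].
net = make_mlp(n_in=3, n_hidden=8)
score = evaluate(net, np.array([1.0, -1.0, 0.0]))
```

The point of the sketch is only that the per-call cost is a couple of small matrix products, which is exactly the speed concern raised below.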
(To be clear, I don't mean to invent anything new here; I'm just curious about it, and this time Google hasn't given me much information on the topic.)
Regards
E Diaz
Is there place for neural networks in chess engines?
Re: Is there place for neural networks in chess engines?
You could probably train a neural network to output heuristic values for those things, but I'm 99.99% sure they would perform worse than the existing hand-coded heuristics already in use in chess engines. Anything complex done with an ANN needs lots of neurons and connections between them, which results in high overhead.
How did the saying go? Genetic algorithms are the second worst solution to every problem. Neural networks the worst.
Re: Is there place for neural networks in chess engines?
IIRC, Volker Annuss used an ANN for time management in Hermann.

Re: Is there place for neural networks in chess engines?
There are two old threads about neural networks here.
http://www.talkchess.com/forum/viewtopi ... _view=flat
http://www.talkchess.com/forum/viewtopi ... _view=flat
Neural networks are slow, so it is difficult to get them working competitively. You can only use them where speed does not matter, or where you can hash the result.
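The "hash the result" idea can be sketched as follows: run the slow evaluator only on cache misses, keyed by a position hash (for pawn-structure evaluation this would be the pawn hash key). The evaluator below is a deterministic stand-in for an expensive network forward pass, not a real network.

```python
cache = {}

def slow_nn_eval(pawn_key):
    # Placeholder for an expensive network forward pass; the formula
    # is an arbitrary deterministic stand-in producing a fake score.
    return (pawn_key * 2654435761) % 200 - 100

def cached_eval(pawn_key):
    # Only pay the network cost on a cache miss; pawn structures
    # repeat constantly in a search tree, so hit rates are high.
    if pawn_key not in cache:
        cache[pawn_key] = slow_nn_eval(pawn_key)
    return cache[pawn_key]
```

In a real engine the cache would be a fixed-size table with a replacement scheme rather than an unbounded dictionary.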
My engine Hermann uses neural networks in two places.
Material evaluation
It was very complicated to get this working.
As an extreme example of what can go wrong, imagine an endgame where White has a big material advantage and a passed pawn. The neural network has learned that the material in this position gives a high winning probability. It has also learned that the winning probability is not much higher when the pawn is promoted.
There was a test version of Hermann that sometimes did not promote, because promoting would have meant giving up the positional bonus for the passer.
Timing
This works much better. During iterative deepening, Hermann uses properties of the results of the last few iterations, and the differences between them, to estimate the probability that the best move will change when going one ply deeper. Based on this it decides whether to continue or stop the search.
Look into the linked threads for more details.
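The time-management idea above can be sketched as a simple probabilistic stopping rule. This is not Hermann's actual model; the features (score swing between iterations, how many iterations the best move has been stable, current depth) and the weights are invented for illustration, with a logistic function squashing the combination into a probability.

```python
import math

def change_probability(score_swing_cp, plies_stable, depth,
                       w=(0.02, -0.5, -0.05), bias=0.0):
    """Estimate P(best move changes one ply deeper). Weights are
    illustrative; a real engine would fit them to search statistics."""
    z = bias + w[0] * abs(score_swing_cp) + w[1] * plies_stable + w[2] * depth
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to (0, 1)

def should_continue(score_swing_cp, plies_stable, depth, threshold=0.2):
    """Stop searching once a change of best move looks unlikely."""
    return change_probability(score_swing_cp, plies_stable, depth) > threshold
```

An unstable search (large score swing, best move just changed) keeps running, while a stable one is cut short, which is exactly the trade-off the post describes.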
stegemma (Stefano Gemma)
Re: Is there place for neural networks in chess engines?
rbarreira wrote: How did the saying go? Genetic algorithms are the second worst solution to every problem. Neural networks the worst.
Wow! So I'm doing the second worst thing I can do with my new engine Satana... and I was even thinking about neural networks, too (but I know genetic algorithms better, so I chose those).
Re: Is there place for neural networks in chess engines?
There is a place for neural networks. In fact, from a mathematical point of view, each piece-square table is equivalent to a perceptron with a linear activation function.
The set of PSTs is equivalent to a two-level neural network with a single output signal.
The only difference is the way you have programmed them, because the connections between the layers are currently implemented as procedures.
The rest of the positional weights can be mathematically modeled as another perceptron included in your two-level neural network, maybe with three levels.
So you can certainly implement another procedure to train your "emulated" neural network with backpropagation or other methods to fit what you want.
The only problem is: how do you determine the error for backpropagation?
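The claimed equivalence is easy to demonstrate: a PST lookup summed over the pieces on the board is exactly a dot product between a one-hot encoding of the position and a flattened weight vector, i.e. one linear neuron. The table values below are toy numbers, and the 12-piece-type encoding is just one conventional choice.

```python
import numpy as np

N_PIECE_TYPES, N_SQUARES = 12, 64  # 6 piece types x 2 colors

rng = np.random.default_rng(1)
pst = rng.integers(-50, 51, size=(N_PIECE_TYPES, N_SQUARES)).astype(float)

def eval_by_lookup(pieces):
    """pieces: list of (piece_index, square) pairs - the usual PST sum."""
    return sum(pst[p, sq] for p, sq in pieces)

def eval_by_perceptron(pieces):
    """Same score as a single linear neuron over a one-hot input vector."""
    x = np.zeros(N_PIECE_TYPES * N_SQUARES)
    for p, sq in pieces:
        x[p * N_SQUARES + sq] = 1.0
    return float(pst.ravel() @ x)  # linear activation: w . x
```

Since the two computations are identical, any gradient-based method that can train the linear neuron can, in principle, tune the tables; the open question raised above (what error signal to backpropagate) is the hard part.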