Hi Gerd,
For what it's worth, I have talked with Gian-Carlo Pascutto in Amsterdam, and he was telling everyone about his successful machine-learning experiments. If I remember correctly, he used no hidden layer. The evaluation function was a weighted sum of features, and the learning algorithm trained the weights. Hey, but you were there too! He probably told you already.
If you wish to study neural networks, the very best book is that of Chris Bishop:
http://research.microsoft.com/~cmbishop/nnpr.htm
Regarding your question about network architecture, I would say it might not be such a good idea to really use a network. Neural-network algorithms are in fact, more generally, methods to tune the parameters of a function approximator. So, if you write an evaluation function that computes a value from the position plus parameters, you can use those machine-learning algorithms to tune the parameters directly, without really having a network that looks like a multi-layer perceptron. The only property you need is that you can compute the derivative of the evaluation with respect to the parameters, and maybe second-order derivatives too.
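For example, with a linear evaluation this is just gradient descent on the weights. Here is a minimal sketch; the feature count, the names, and the squared-error target are all illustrative, not Gian-Carlo's actual method:
Code:

/* Minimal sketch: a linear evaluation (weighted sum of features)
 * tuned by gradient descent on a squared error.  Everything here
 * (feature count, names, the target) is illustrative. */
#define NUM_FEATURES 8

double weights[NUM_FEATURES];

/* eval = sum_i weights[i] * features[i],
 * so d(eval)/d(weights[i]) = features[i]. */
double evaluate(const double features[NUM_FEATURES])
{
    double v = 0.0;
    for (int i = 0; i < NUM_FEATURES; i++)
        v += weights[i] * features[i];
    return v;
}

/* One training step toward a target value (e.g. the game result or
 * a deeper search score).  With E = (eval - target)^2 / 2 we get
 * dE/d(weights[i]) = (eval - target) * features[i]. */
void train_step(const double features[NUM_FEATURES],
                double target, double learning_rate)
{
    double error = evaluate(features) - target;
    for (int i = 0; i < NUM_FEATURES; i++)
        weights[i] -= learning_rate * error * features[i];
}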
Good luck with your experiments.
Rémi
designing neural networks
Re: designing neural networks
Rémi Coulom wrote:Hi Gerd,
For what it's worth, I have talked with Gian-Carlo Pascutto in Amsterdam, and he was telling everyone about his successful machine-learning experiments. If I remember correctly, he used no hidden layer. The evaluation function was a weighted sum of features, and the learning algorithm trained the weights. Hey, but you were there too! He probably told you already.
If you wish to study neural networks, the very best book is that of Chris Bishop:
http://research.microsoft.com/~cmbishop/nnpr.htm
Regarding your question about network architecture, I would say it might not be such a good idea to really use a network. Neural-network algorithms are in fact, more generally, methods to tune the parameters of a function approximator. So, if you write an evaluation function that computes a value from the position plus parameters, you can use those machine-learning algorithms to tune the parameters directly, without really having a network that looks like a multi-layer perceptron. The only property you need is that you can compute the derivative of the evaluation with respect to the parameters, and maybe second-order derivatives too.
Good luck with your experiments.
Rémi
Hi Rémi,
yes, I know Gian-Carlo is quite convinced by his learning approach; that is one further reason I am interested. I am not aware of the implementation details, though. I thought a hidden layer was somehow required for some learning algorithms. What do you think about using an explicit sigmoid function, e.g. to scale endgame-middlegame terms on the sum of material?
Thanks for your wishes, and the same to you.
Gerd
Re: designing neural networks
Gerd Isenberg wrote:Hi Rémi,
yes, I know Gian-Carlo is quite convinced by his learning approach; that is one further reason I am interested. I am not aware of the implementation details, though. I thought a hidden layer was somehow required for some learning algorithms. What do you think about using an explicit sigmoid function, e.g. to scale endgame-middlegame terms on the sum of material?
For the endgame term, you would write something like:
Code:
V += EndGameTerm * sigmoid((Phase - Threshold) * Steepness)
This should have very little cost in terms of computation time. If you wish to avoid the cost of floating-point operations once the parameters have been tuned, maybe you can create a lookup table indexed by Phase. But you probably know the optimization tricks much better than I do.
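Something along these lines, perhaps (just a sketch; MAX_PHASE, the fixed-point scale of 256, and the parameter names are arbitrary choices):
Code:

#include <math.h>

#define MAX_PHASE 24   /* illustrative; depends on your phase measure */
#define SCALE     256  /* fixed point: 256 represents 1.0 */

int sigmoid_table[MAX_PHASE + 1];

/* Precompute sigmoid((Phase - Threshold) * Steepness) for every
 * phase value, once the parameters have been tuned. */
void init_sigmoid_table(double threshold, double steepness)
{
    for (int phase = 0; phase <= MAX_PHASE; phase++) {
        double x = ((double)phase - threshold) * steepness;
        sigmoid_table[phase] = (int)(SCALE / (1.0 + exp(-x)) + 0.5);
    }
}

/* During search, integer arithmetic only:
 *   V += EndGameTerm * sigmoid_table[Phase] / SCALE;
 */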

Rémi