Milos wrote:
> If chess were a game where time is irrelevant, it would probably be dominated by ANN implementations. However, since time is relevant in chess, having an engine with a real-time ANN implementation in some of its parts would make it ridiculously slow and, at the same time, useless.

This is not correct.
You don't need to train your network at game-playing time. If you are going to use it as a function approximator to evaluate the chess board, then you should first train your network and persist the adapted neuron weights to some sort of database. At game-playing time you only load the adapted weights - perhaps in your engine's startup routine - and later use them to activate the network in your evaluation function. It's only a matter of computing some floating-point matrix operations. With SSE it could be even faster.
IMO the main problem with ANNs and their application to classical chess algorithms is that NOBODY has had a good idea about how to do it - so far.
Tuning evaluation terms in chess is only one approach for using ANNs. There are many other promising areas for applying ANNs besides that one.
In addition, KnightCap [http://citeseer.ist.psu.edu/old/400808.html] used Temporal Difference learning (a reinforcement-learning technique, not itself an ANN) to tune its evaluation terms. The author (Jonathan Baxter) claimed +400 Elo after tuning KnightCap's evaluation terms with TD learning.