AlphaZero

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: AlphaZero

Post by maksimKorzh »

thomasahle wrote: Tue May 05, 2020 10:51 am Check out Fastchess if you're interested: https://github.com/thomasahle/fastchess . It's a Python implementation of the MCTS approach in the AlphaZero papers, and it uses the simplest "neural" network architecture possible: a linear function from the current boolean board to the next move. (A 1895 x 4095 matrix.)

If all you want is 2000 ELO this should be more than enough. Fastchess is 1700-1800 ELO and it is written in Python.

You need some data to train on. The best, easily accessible data is the cclr-v3 data from http://data.lczero.org/files/
Hi Thomas, I was searching for the simplest "neural" network architecture possible and came across your fastchess repo on GitHub. It claims: "Predicts the best chess move with 27.5% accuracy by a single matrix multiplication" - does a single matrix multiplication mean that the NN is essentially a single-layer perceptron without any hidden layers at all? Another question: how would one build the model without involving the FastText library? Did you use it just to avoid converting boards to input vectors?
thomasahle
Posts: 94
Joined: Thu Feb 27, 2014 8:19 pm

Re: AlphaZero

Post by thomasahle »

Hi Maksim,

Your assertion is correct: Logistic regression is just like a neural network/perceptron with no hidden layers.
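To make that concrete, here is a minimal sketch (not the actual fastchess code) of what "one matrix multiplication" means: a boolean board vector times a weight matrix gives one score per move, and a softmax over those scores is exactly multinomial logistic regression. The dimensions below are taken from the quote above; the set bits are hypothetical.

```python
import numpy as np

# Hypothetical dimensions matching the quoted matrix size:
# 1895 boolean board features in, 4095 possible moves out.
N_FEATURES, N_MOVES = 1895, 4095

rng = np.random.default_rng(0)
W = rng.standard_normal((N_FEATURES, N_MOVES)) * 0.01  # the single weight matrix

def predict_move(board_vec):
    """One matrix multiplication, then a softmax over moves.

    With no hidden layers this is just multinomial logistic regression.
    """
    logits = board_vec @ W                 # the single matrix multiplication
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    probs = exp / exp.sum()
    return int(np.argmax(probs)), probs

board = np.zeros(N_FEATURES)
board[[0, 7, 63]] = 1.0                    # hypothetical set bits for pieces
move_idx, probs = predict_move(board)
```

A trained model only differs in where W comes from; the forward pass stays this one line.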

I used FastText because it provides a very fast logistic regression implementation for sparse inputs. You could use something else if you'd like.
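For example, any sparse multinomial logistic regression would do in place of FastText. Below is a toy sketch using scikit-learn on synthetic data; the feature and class sizes are made up, and real training pairs would come from parsed games such as the lczero dumps linked above.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

# Tiny synthetic stand-in for (board, move) training pairs.
n_samples, n_features, n_moves = 200, 64, 5
rng = np.random.default_rng(1)
X_dense = (rng.random((n_samples, n_features)) < 0.1).astype(np.float64)
X = csr_matrix(X_dense)                        # sparse boolean board vectors
y = rng.integers(0, n_moves, size=n_samples)   # move-class labels

# Multinomial logistic regression = the no-hidden-layer "network" above.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
preds = clf.predict(X[:10])
```

The learned `clf.coef_` is the single weight matrix (one row per move class), so inference again reduces to a matrix multiplication plus softmax.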