
AlphaZero

Posted: Sun Apr 26, 2020 7:14 pm
by Fafkorn
Hello, I'm trying to develop my own AlphaZero engine. In the original paper the authors use a board representation that includes information about previous moves. My goal is an engine of about 2000 Elo; I don't have very high expectations. Would the standard 773-bit board representation (12x8x8 piece bits + castling rights + side to move) work? Do I need to pretrain that model? Do I have to use convolutional layers? Thanks in advance.
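For concreteness, the encoding I have in mind is roughly the sketch below (using python-chess and numpy; the ordering of the piece planes and the extra bits is just my own choice, not something from the paper):

```python
# Rough sketch of a 773-bit input vector:
# 768 piece bits (12 planes x 64 squares) + 4 castling rights + 1 side to move.
import numpy as np
import chess

def encode_773(board: chess.Board) -> np.ndarray:
    x = np.zeros(773, dtype=np.float32)
    for square, piece in board.piece_map().items():
        # planes 0..5 = white pawn..king, planes 6..11 = black pawn..king
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        x[plane * 64 + square] = 1.0
    x[768] = float(board.has_kingside_castling_rights(chess.WHITE))
    x[769] = float(board.has_queenside_castling_rights(chess.WHITE))
    x[770] = float(board.has_kingside_castling_rights(chess.BLACK))
    x[771] = float(board.has_queenside_castling_rights(chess.BLACK))
    x[772] = float(board.turn == chess.WHITE)
    return x
```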

Re: AlphaZero

Posted: Sun Apr 26, 2020 10:27 pm
by brianr
Suggest reviewing this first:

https://github.com/Zeta36/chess-alpha-zero

or here for a simpler game like Connect4:

https://github.com/suragnair/alpha-zero-general

Re: AlphaZero

Posted: Mon Apr 27, 2020 7:26 pm
by Fafkorn
I'm at the stage where I have implemented MCTS and I have some random NN with a few layers just to test that things work.
Zeta36 is using an input of size (18, 8, 8):
18 = 12 piece types, 4 castling rights, 1 for the fifty-move rule, 1 for en passant

Isn't information about the side to move necessary?
The author is using many, many layers; doesn't that hurt performance?
So many bits are used for castling, en passant and the fifty-move rule only because of normalization, right?
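For reference, the kind of plane encoding I mean looks roughly like the sketch below (python-chess and numpy; the plane order and the scaling of the fifty-move counter are my own guesses, not necessarily exactly what Zeta36 does):

```python
# Sketch of an (18, 8, 8) input tensor:
# 12 piece planes + 4 castling planes + 1 fifty-move plane + 1 en-passant plane.
import numpy as np
import chess

def encode_planes(board: chess.Board) -> np.ndarray:
    planes = np.zeros((18, 8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        planes[plane, chess.square_rank(square), chess.square_file(square)] = 1.0
    # Castling rights fill a whole plane with 0 or 1, which is where the "many bits" go.
    planes[12, :, :] = float(board.has_kingside_castling_rights(chess.WHITE))
    planes[13, :, :] = float(board.has_queenside_castling_rights(chess.WHITE))
    planes[14, :, :] = float(board.has_kingside_castling_rights(chess.BLACK))
    planes[15, :, :] = float(board.has_queenside_castling_rights(chess.BLACK))
    # Fifty-move counter scaled to roughly [0, 1].
    planes[16, :, :] = board.halfmove_clock / 100.0
    # En-passant target square, if any.
    if board.ep_square is not None:
        planes[17, chess.square_rank(board.ep_square), chess.square_file(board.ep_square)] = 1.0
    return planes
```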

Re: AlphaZero

Posted: Mon Apr 27, 2020 9:07 pm
by brianr
Sometimes the answer is simply because that's what Alpha Zero did...

Experiment and see what works for you after you have established a working baseline.

Have fun.

Re: AlphaZero

Posted: Mon Apr 27, 2020 9:57 pm
by Fafkorn
The pretraining that the author did is based on grandmaster games. Was the policy output a one-hot vector for the move chosen by the GM?
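If so, I imagine each training example would look roughly like the sketch below (assuming a 4096-wide from-square x to-square policy head, which is how I index my own policy; underpromotions are ignored here just to keep the idea simple):

```python
# Sketch of a one-hot policy target for supervised pretraining on GM games,
# assuming the policy head is indexed as from_square * 64 + to_square.
import numpy as np
import chess

def policy_target(gm_move: chess.Move) -> np.ndarray:
    target = np.zeros(4096, dtype=np.float32)
    target[gm_move.from_square * 64 + gm_move.to_square] = 1.0
    return target

# Training would then minimize cross-entropy between the network's policy output
# and this one-hot target (plus the usual value-head loss on the game result).
```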

Re: AlphaZero

Posted: Mon Apr 27, 2020 10:51 pm
by supersharp77
Fafkorn wrote: Mon Apr 27, 2020 9:57 pm The pretraining that the author did is based on grandmaster games. Was the policy output a one-hot vector for the move chosen by the GM?
NN Policy Output....



https://towardsdatascience.com/policy-n ... 2776056ad2

http://web.mst.edu/~gosavia/neural_networks_RL.pdf

https://www.researchgate.net/publicatio ... y_Gradient

https://people.eecs.berkeley.edu/~svlev ... mfcgps.pdf

https://flyyufelix.github.io/2017/10/12/dqn-vs-pg.html

All the top GM (and master) games should be integrated into the learning process..and if successful, engine strength should be a minimum of 2500+ (see the Giraffe chess engine)....research models are a bit complex....good luck! AR :D :wink:

Re: AlphaZero

Posted: Tue Apr 28, 2020 2:56 am
by smcracraft
Skip it.

Stand up lczero.org and enjoy.

No need to reinvent the wheel.

Leela later will be more Tal-like once they fix it.

Stuart

Re: AlphaZero

Posted: Tue Apr 28, 2020 4:45 am
by phhnguyen
smcracraft wrote: Tue Apr 28, 2020 2:56 am Skip it.

Stand up lczero.org and enjoy.

No need to reinvent the wheel.
Do you want to divide the computer chess world into only two groups: lc0 clones and traditional alpha-beta? :wink: :D

Re: AlphaZero

Posted: Tue Apr 28, 2020 11:42 am
by Fafkorn
smcracraft wrote: Tue Apr 28, 2020 2:56 am Skip it.

Stand up lczero.org and enjoy.

No need to reinvent the wheel.

Leela later will be more Tal-like once they fix it.

Stuart
I'm not trying to compete with LeelaChessZero or AlphaZero. I'm trying to develop my own AlphaZero for academic purposes (my thesis). I just want to dispel some doubts.

Re: AlphaZero

Posted: Tue Apr 28, 2020 6:42 pm
by Fafkorn
I have another question related to this topic. When I get the policy from my neural network (in my case 4096 numbers), let's say only 3 moves are legal, with policy values 0.1, 0.1, 0.1. That throws off the balance between the prior and the visit counts in my U(s, a) term. Do I have to normalize the policy values, and how can I do this?
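What I'm considering is just masking out the illegal entries and renormalizing so the priors over the legal moves sum to 1, roughly like the sketch below (the from_square * 64 + to_square indexing is simply how I lay out my 4096 outputs):

```python
# Sketch: restrict the raw policy to legal moves and renormalize it,
# so the priors P(s, a) fed into the PUCT term form a proper distribution.
import numpy as np
import chess

def legal_priors(board: chess.Board, raw_policy: np.ndarray) -> dict:
    # raw_policy: the 4096 numbers produced by the network
    priors = {}
    for move in board.legal_moves:
        priors[move] = float(raw_policy[move.from_square * 64 + move.to_square])
    total = sum(priors.values())
    if total > 0:
        for move in priors:
            priors[move] /= total          # legal priors now sum to 1
    else:
        uniform = 1.0 / len(priors)
        for move in priors:
            priors[move] = uniform         # fall back to uniform if everything is 0
    return priors
```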