AlphaZero
Moderators: hgm, Rebel, chrisw
-
- Posts: 16
- Joined: Tue Apr 14, 2020 1:15 pm
- Full name: Pawel Wojcik
AlphaZero
Hello, I'm trying to develop my own AlphaZero engine. In the original paper the authors use a board representation that includes information about previous moves. My goal is an engine of about 2000 Elo; I don't have high expectations. Would the standard 773-bit board representation (12x8x8 piece planes + castling rights + side to move) work? Do I need to pretrain that model? Do I have to use convolutional layers? Thanks in advance.
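For reference, the 773-bit vector mentioned above can be built like this. This is only a minimal sketch: the board format (a dict mapping square indices 0..63 to piece characters) and the plane ordering are illustrative assumptions, not anything from the paper.

```python
# 773 bits = 12 piece planes of 8x8 (768) + 4 castling rights + 1 side-to-move.
PIECES = "PNBRQKpnbrqk"  # 12 piece types: white pieces first, then black

def encode_773(board, castling, white_to_move):
    """board: {square_index: piece_char}, castling: (K, Q, k, q) booleans."""
    vec = [0.0] * 773
    for square, piece in board.items():
        plane = PIECES.index(piece)
        vec[plane * 64 + square] = 1.0   # one bit per (piece type, square)
    for i, right in enumerate(castling):  # bits 768..771
        vec[768 + i] = 1.0 if right else 0.0
    vec[772] = 1.0 if white_to_move else 0.0  # bit 772: side to move
    return vec

# Kings only, all castling rights, white to move (e1 = square 4, e8 = square 60):
vec = encode_773({4: "K", 60: "k"}, (True, True, True, True), True)
```

A flat vector like this would feed a fully connected net; for convolutional layers you would reshape the first 768 bits back into (12, 8, 8) planes instead.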
-
- Posts: 536
- Joined: Thu Mar 09, 2006 3:01 pm
Re: AlphaZero
Suggest reviewing this first:
https://github.com/Zeta36/chess-alpha-zero
or here for a simpler game like Connect4:
https://github.com/suragnair/alpha-zero-general
-
- Posts: 16
- Joined: Tue Apr 14, 2020 1:15 pm
- Full name: Pawel Wojcik
Re: AlphaZero
I'm at the stage where I've implemented MCTS, and I have a random NN with a few layers just to test that things work.
Zeta36 uses an input of size (18, 8, 8):
18 = 12 piece-type planes, 4 castling-rights planes, 1 plane for the fifty-move rule, 1 for en passant.
Is information about the side to move not necessary?
The author uses many, many layers; doesn't that hurt performance?
So the many bits used for castling, en passant, and the fifty-move rule are there only because of normalization, right?
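On the "why whole planes for scalar facts" question: in plane-based inputs, scalar features are usually broadcast over a full 8x8 plane so a convolution sees them at every square. A rough sketch of the 6 non-piece planes in an (18, 8, 8) layout; the exact ordering and the en-passant-as-file encoding here are assumptions, not necessarily what Zeta36 does:

```python
import numpy as np

def scalar_planes(castling, fifty_move_count, ep_file):
    """Broadcast scalar game state over 8x8 planes for a conv net input."""
    planes = np.zeros((6, 8, 8), dtype=np.float32)
    for i, right in enumerate(castling):        # 4 constant castling planes
        planes[i, :, :] = 1.0 if right else 0.0
    planes[4, :, :] = fifty_move_count / 100.0  # counter scaled to roughly [0, 1]
    if ep_file is not None:                     # en-passant file, broadcast over ranks
        planes[5, :, ep_file] = 1.0
    return planes

# White can castle both sides, 30 half-moves since capture/pawn move, e.p. on file e:
extra = scalar_planes((True, True, False, False), 30, 4)
```

Dividing the fifty-move counter by 100 is exactly the normalization the question is about: it keeps that input in a similar range to the 0/1 piece planes.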
-
- Posts: 536
- Joined: Thu Mar 09, 2006 3:01 pm
Re: AlphaZero
Sometimes the answer is simply because that's what Alpha Zero did...
Experiment and see what works for you after you have established a working baseline.
Have fun.
-
- Posts: 16
- Joined: Tue Apr 14, 2020 1:15 pm
- Full name: Pawel Wojcik
Re: AlphaZero
The pretraining that the author performed is based on grandmaster games. Was the policy target a one-hot vector for the move chosen by the GM?
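That is the common supervised setup, and with a one-hot target the cross-entropy loss reduces to minus the log-probability the network assigns to the GM's move. A small sketch, using the 4096 (from-square x to-square) move space from earlier in this thread rather than the exact AlphaZero policy head:

```python
import math

def cross_entropy(policy_logits, gm_move_index):
    """Cross-entropy between softmax(logits) and a one-hot GM-move target.

    With a one-hot target this is simply -log softmax(logits)[gm_move_index].
    """
    m = max(policy_logits)                        # shift for numerical stability
    exps = [math.exp(x - m) for x in policy_logits]
    total = sum(exps)
    return -math.log(exps[gm_move_index] / total)

# Untrained (uniform) logits over 4096 moves: loss is log(4096) ~ 8.32.
loss = cross_entropy([0.0] * 4096, 100)
```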
-
- Posts: 1242
- Joined: Sat Jul 05, 2014 7:54 am
- Location: Southwest USA
Re: AlphaZero
NN Policy Output....
https://towardsdatascience.com/policy-n ... 2776056ad2
http://web.mst.edu/~gosavia/neural_networks_RL.pdf
https://www.researchgate.net/publicatio ... y_Gradient
https://people.eecs.berkeley.edu/~svlev ... mfcgps.pdf
https://flyyufelix.github.io/2017/10/12/dqn-vs-pg.html
All the top GM (and master) games should be integrated into the learning process, and if successful the engine strength should be at minimum 2500+ (see the Giraffe chess engine). Research models are a bit complex... good luck! AR
-
- Posts: 737
- Joined: Wed Mar 08, 2006 8:08 pm
- Location: Orange County California
- Full name: Stuart Cracraft
Re: AlphaZero
Skip it.
Stand up lczero.org and enjoy.
No need to reinvent the wheel.
Leela later will be more Tal-like once they fix it.
Stuart
-
- Posts: 1437
- Joined: Wed Apr 21, 2010 4:58 am
- Location: Australia
- Full name: Nguyen Hong Pham
Re: AlphaZero
smcracraft wrote: ↑Tue Apr 28, 2020 2:56 am
Skip it.
Stand up lczero.org and enjoy.
No need to reinvent the wheel.
Do you want to divide the computer chess world into only two groups: lc0 clones and traditional alpha-beta?
https://banksiagui.com
The most feature-rich chess GUI, based on the open-source Banksia chess tournament manager
-
- Posts: 16
- Joined: Tue Apr 14, 2020 1:15 pm
- Full name: Pawel Wojcik
Re: AlphaZero
smcracraft wrote: ↑Tue Apr 28, 2020 2:56 am
Skip it.
Stand up lczero.org and enjoy.
No need to reinvent the wheel.
Leela later will be more Tal-like once they fix it.
Stuart
I'm not trying to compete with LeelaChessZero or AlphaZero. I'm trying to develop my own AlphaZero for academic purposes (my thesis). I just want to dispel some doubts.
-
- Posts: 16
- Joined: Tue Apr 14, 2020 1:15 pm
- Full name: Pawel Wojcik
Re: AlphaZero
I have another question on this topic. When I get the policy from my neural network (in my case 4096 numbers), suppose only 3 moves are legal, with policy values 0.1, 0.1, 0.1. This skews the proportion between the prior term and the visit-count term in my U(s, a). Do I have to normalize the policy values over the legal moves, and how can I do this?
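The usual fix, seen in several AlphaZero reimplementations, is to mask the 4096-entry policy down to the legal moves and renormalize so the legal priors sum to 1 before using them as P(s, a) in PUCT. A sketch under that assumption, keeping the poster's 4096 move-index layout:

```python
def renormalize_policy(policy, legal_move_indices):
    """Keep only the legal entries of the policy vector and rescale to sum to 1."""
    legal = {m: policy[m] for m in legal_move_indices}
    total = sum(legal.values())
    if total <= 0:
        # Degenerate case (all legal priors zero): fall back to uniform.
        return {m: 1.0 / len(legal_move_indices) for m in legal_move_indices}
    return {m: p / total for m, p in legal.items()}

# Three legal moves with raw priors 0.1 each -> 1/3 each after renormalizing.
priors = renormalize_policy([0.0] * 100 + [0.1, 0.1, 0.1], [100, 101, 102])
```

After this, the priors always sum to 1 regardless of how many moves are legal, so the balance between the prior term and the visit-count term in U(s, a) stays consistent across positions.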