
Re: Alphazero news

Posted: Sat Dec 08, 2018 4:17 am
by matthewlai
glennsamuel32 wrote: Sat Dec 08, 2018 3:21 am Hello Matthew, nice to see you back after so long !!

Does this mean Giraffe will get some updates in the future ? :D
Thanks!

Afraid not! AlphaZero is taking up all my time these days, and it's a very exciting project with lots of uncharted territory ahead :). AlphaZero is basically everything I ever wanted Giraffe to become... and then a lot more. I have never been this excited about computer chess in my whole life.

Re: Alphazero news

Posted: Sat Dec 08, 2018 4:59 am
by Albert Silver
matthewlai wrote: Sat Dec 08, 2018 4:17 am
glennsamuel32 wrote: Sat Dec 08, 2018 3:21 am Hello Matthew, nice to see you back after so long !!

Does this mean Giraffe will get some updates in the future ? :D
Thanks!

Afraid not! AlphaZero is taking up all my time these days, and it's a very exciting project with lots of uncharted territory ahead :). AlphaZero is basically everything I ever wanted Giraffe to become... and then a lot more. I have never been this excited about computer chess in my whole life.
Warmest congrats. I find that heartwarming to the extreme. :D :D :D

Re: Alphazero news

Posted: Sat Dec 08, 2018 5:42 am
by yanquis1972
Laskos wrote: Fri Dec 07, 2018 3:47 pm
OneTrickPony wrote: Fri Dec 07, 2018 12:13 pm
I am not convinced the newest SF would win against it. The Elo is calculated against a pool of similar engines. It's not clear that 50 or 100 Elo more against this pool equals 50-100 Elo more against an opponent of a different type.
While that's true, Lc0 with the best nets on my powerful GPU and average CPU ("Leela Ratio" of say 2.5) heavily beats SF8, but loses slightly to SF10, from regular openings. Against SF8, the result is similar to what happens in this paper. My guess is that this particular "old" A0 under those TCEC conditions is somewhat weaker than SF10.
Lc0 needs a "Leela Ratio" of 2.5 to have similar results to A0 ("Leela Ratio" 1 by definition), so Lc0 (with the best nets) is still lagging pretty significantly behind A0.
In some games it becomes apparent that they are fairly similar in playing style, strengths and weaknesses.
But have you tried the DeepMind openings? It seems clear that the Zeros lose Elo playing prescribed openings. Test30 already has excellent results against SF10 in my tests so far (https://lichess.org/study/FHjaPySh). I would expect the strongest test10 net to score better, but I believe it's gotten very close. (For readers unfamiliar with the "Leela Ratio" mentioned above, see the sketch below.)
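
The "Leela Ratio" Laskos mentions normalizes for hardware: it compares the Lc0-to-Stockfish speed ratio on your own machine against the A0-to-Stockfish ratio reported by DeepMind, so a ratio of 1 means the same relative speeds as in the DeepMind match. A minimal sketch, assuming the commonly used constants from the 2017 preprint (roughly 80 knps for A0 vs 70 Mnps for Stockfish); the example numbers are purely illustrative:

```python
# "Leela Ratio": hardware-normalized speed comparison. A ratio of 1 means the
# same nps relationship as in the DeepMind match; constants are assumptions
# taken from the 2017 AlphaZero preprint.
A0_NPS = 80_000        # AlphaZero's reported ~80k nodes/sec
SF_NPS = 70_000_000    # Stockfish's reported ~70M nodes/sec

def leela_ratio(lc0_nps: float, sf_nps: float) -> float:
    """Ratio of the local Lc0/SF speed ratio to the A0/SF ratio in the paper."""
    return (lc0_nps / sf_nps) / (A0_NPS / SF_NPS)

# Illustrative numbers: a GPU doing 28 knps for Lc0 against a CPU doing
# 9.8 Mnps for Stockfish gives the ratio of ~2.5 that Laskos describes.
print(leela_ratio(28_000, 9_800_000))  # -> 2.5
```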

Re: Alphazero news

Posted: Sat Dec 08, 2018 6:27 am
by glennsamuel32
Matthew, could you divulge the size of the network file that A0 used ?

Re: Alphazero news

Posted: Sat Dec 08, 2018 8:56 am
by cdani
matthewlai wrote: Sat Dec 08, 2018 4:17 am
glennsamuel32 wrote: Sat Dec 08, 2018 3:21 am Hello Matthew, nice to see you back after so long !!

Does this mean Giraffe will get some updates in the future ? :D
Thanks!

Afraid not! AlphaZero is taking up all my time these days, and it's a very exciting project with lots of uncharted territory ahead :). AlphaZero is basically everything I ever wanted Giraffe to become... and then a lot more. I have never been this excited about computer chess in my whole life.
Congratulations!!! I don't understand anything about A0, but I will try to make Andscacs find ways to play better against it :-)

Re: Alphazero news

Posted: Sat Dec 08, 2018 12:29 pm
by Alexander Schmidt
Astatos wrote: Fri Dec 07, 2018 1:01 pm OK, what we know:
1) Stockfish is the best engine in the world
2) The LC0 guys did manage to reverse engineer A0 successfully
3) LC0 and A0 are roughly at the same strength
4) NNs are not less resource hungry than alpha-beta
5) Scalability is about the same with both methods
6) Google has unacceptable behaviour: hiding data, obfuscating opponents, and hyping results
I would rather say what we know is:
1) We have a genuinely new kind of chess engine
2) Computer chess enthusiasts should be happy and excited
3) We are at the very beginning of a new age of computer chess; NNs will dominate computer chess in the future
4) Google shares the knowledge (which they don't have to do)
5) Some people are giving us a gift by reproducing the work (which they don't have to do)
6) Instead of saying "thank you", some fanboys of the conventional engines show unacceptable behaviour.

The only thing I really dislike is that Google doesn't want to build AI just to solve chess, Go, or other games. They want to learn how to use AI for other purposes, and I think this will be dangerous for mankind. At the moment an AI decides which commercials we see and which news we read. That leads to radicalization and division of society. One day an AI will decide whether someone goes to jail or not. Maybe people will go to jail because an AI thinks they will someday commit a crime. One day autonomous robots will decide which person to kill, on a battlefield or to prevent a possible crime. Maybe one day an AI will press the red button.

Re: Alphazero news

Posted: Sat Dec 08, 2018 12:39 pm
by nabildanial
Alexander Schmidt wrote: Sat Dec 08, 2018 12:29 pm Maybe people will go to jail because an AI thinks they will someday commit a crime. One day autonomous robots will decide which person to kill, on a battlefield or to prevent a possible crime. Maybe one day an AI will press the red button.
We have the so-called "Ethics of artificial intelligence" to prevent those things from happening.

Re: Alphazero news

Posted: Sat Dec 08, 2018 12:45 pm
by matthewlai
glennsamuel32 wrote: Sat Dec 08, 2018 6:27 am Matthew, could you divulge the size of the network file that A0 used ?
The details are in the supplementary materials:
Architecture
Apart from the representation of positions and actions described above, AlphaZero uses the same network architecture as AlphaGo Zero (9), briefly recapitulated here.
The neural network consists of a “body” followed by both policy and value “heads”. The body consists of a rectified batch-normalized convolutional layer followed by 19 residual blocks (48). Each such block consists of two rectified batch-normalized convolutional layers with a skip connection. Each convolution applies 256 filters of kernel size 3 × 3 with stride 1. The policy head applies an additional rectified, batch-normalized convolutional layer, followed by a final convolution of 73 filters for chess or 139 filters for shogi, or a linear layer of size 362 for Go, representing the logits of the respective policies described above. The value head applies an additional rectified, batch-normalized convolution of 1 filter of kernel size 1 × 1 with stride 1, followed by a rectified linear layer of size 256 and a tanh-linear layer of size 1.
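
To make that concrete, here is a minimal PyTorch sketch of the chess network as the quoted passage describes it. This is not DeepMind's code: the 119 input planes come from the paper's input representation, and the kernel size of the final 73-filter policy convolution is an assumption on my part.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two rectified batch-normalized 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return torch.relu(x + y)  # skip connection

class AlphaZeroChessNet(nn.Module):
    """Body (input conv + 19 residual blocks) feeding policy and value heads."""
    def __init__(self, in_planes: int = 119, channels: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_planes, channels, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            *[ResidualBlock(channels) for _ in range(19)],
        )
        # Policy head: one more rectified BN conv, then 73 filters for chess
        # (8x8x73 = 4672 move logits). The final kernel size is an assumption.
        self.policy_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, 73, 3, stride=1, padding=1),
        )
        # Value head: 1x1 conv of 1 filter, rectified linear layer of size 256,
        # then a tanh scalar in [-1, 1].
        self.value_head = nn.Sequential(
            nn.Conv2d(channels, 1, 1, stride=1, bias=False),
            nn.BatchNorm2d(1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        h = self.body(x)
        return self.policy_head(h), self.value_head(h)

# Smoke test on a random batch of chess positions (8x8 board, 119 planes).
net = AlphaZeroChessNet()
p, v = net(torch.randn(2, 119, 8, 8))
print(p.shape, v.shape)  # torch.Size([2, 73, 8, 8]) torch.Size([2, 1])
```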

Re: Alphazero news

Posted: Sat Dec 08, 2018 1:16 pm
by Rein Halbersma
So why is it that A0's learning curve seems to flatten to almost no progress beyond its current level? If, e.g., the number of layers or channels were expanded, would you expect that a few hundred Elo more could be obtained? Or is A0 approaching perfection with its current network, and is an absolute upper bound on Elo in sight?

Re: Alphazero news

Posted: Sat Dec 08, 2018 1:38 pm
by nabildanial
matthewlai wrote: Sat Dec 08, 2018 12:45 pm
glennsamuel32 wrote: Sat Dec 08, 2018 6:27 am Matthew, could you divulge the size of the network file that A0 used ?
The details are in the supplementary materials:
Architecture
Apart from the representation of positions and actions described above, AlphaZero uses the same network architecture as AlphaGo Zero (9), briefly recapitulated here.
The neural network consists of a “body” followed by both policy and value “heads”. The body consists of a rectified batch-normalized convolutional layer followed by 19 residual blocks (48). Each such block consists of two rectified batch-normalized convolutional layers with a skip connection. Each convolution applies 256 filters of kernel size 3 × 3 with stride 1. The policy head applies an additional rectified, batch-normalized convolutional layer, followed by a final convolution of 73 filters for chess or 139 filters for shogi, or a linear layer of size 362 for Go, representing the logits of the respective policies described above. The value head applies an additional rectified, batch-normalized convolution of 1 filter of kernel size 1 × 1 with stride 1, followed by a rectified linear layer of size 256 and a tanh-linear layer of size 1.
I think what glenn meant by the question is how big the file size is, in MB.
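
If so, the paper doesn't give a number, but a back-of-envelope estimate follows from the quoted architecture. A rough count, assuming 119 input planes for chess, a 3x3 final policy convolution, 32-bit floats, and ignoring batch-norm and bias parameters (which are comparatively tiny):

```python
# Rough weight count for the 19-block, 256-filter chess network quoted above.
# Assumptions: 119 input planes, 3x3 final policy conv, fp32 storage,
# batch-norm and bias parameters ignored.
body   = 119 * 256 * 3 * 3                       # input convolution
tower  = 19 * 2 * (256 * 256 * 3 * 3)            # 19 blocks, 2 convs each
policy = 256 * 256 * 3 * 3 + 256 * 73 * 3 * 3    # extra conv + 73-filter conv
value  = 256 * 1 + 64 * 256 + 256 * 1            # 1x1 conv, linear 256, scalar
params = body + tower + policy + value
print(f"{params / 1e6:.1f}M parameters, ~{params * 4 / 1e6:.0f} MB at fp32")
# -> 23.5M parameters, ~94 MB at fp32
```

So on the order of 100 MB uncompressed, under those assumptions.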