Winter NN Training Script

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

jorose
Posts: 358
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Winter NN Training Script

Post by jorose »

I have finally found the time and motivation to publish my NN training script (actually training repository) for Winter!

Keep in mind that I mostly wrote this for my own personal use, so have mercy on my coding practices... That being said, I added some comments and it should be reasonably easy to understand what I am doing at each step.

It is my intention to keep this roughly up to date with Winter. That is to say when I make breaking changes to Winter I will push changes to the script at the same time.

At the moment there is only a training script, but in the future I might add further resources, such as an example dataset. It is also likely that there will be a new script once I make move ordering NN-based :wink:

https://github.com/rosenthj/WinterTrain ... ning.ipynb
-Jonathan
dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Winter NN Training Script

Post by dkappe »

Thanks for this. Will see if I can provide comments and something interesting.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Winter NN Training Script

Post by Daniel Shawul »

Where can I download your network? If it is just weights that you have, could you produce a frozen TensorFlow protobuf graph (.pb)? Or, since you are using Keras, the HDF5 format works too. I should be able to use it in Scorpio after that once I figure out your inputs and outputs.
I want to see how it does against my 2x32 net. I started out with a no-policy net too, but figured out later that most of the knowledge is packed in the policy network. The implicit policy was a qsearch() over all the moves.
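The "implicit policy via qsearch()" idea can be sketched as follows. This is an illustrative reconstruction, not Scorpio's actual code: the `evaluate_child` callback stands in for a quiescence search of the position after each move, and is assumed to return a score from the opponent's point of view.

```python
def implicit_policy(position, moves, evaluate_child):
    """Order moves without a policy network.

    Each move is scored by evaluating (e.g. via qsearch) the position
    after the move. evaluate_child is assumed to return the score from
    the *opponent's* point of view, so the move minimizing it is best
    for the side to move. `position` is an opaque engine-specific object.
    """
    return sorted(moves, key=lambda m: evaluate_child(position, m))
```

The ordering produced this way can then drive move selection in search, playing the same role move priors do in a policy-network engine.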

Cheers
jorose
Posts: 358
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: Winter NN Training Script

Post by jorose »

dkappe wrote: Sat Nov 09, 2019 7:11 pmThanks for this. Will see if I can provide comments and something interesting.
Thank you for your interest! I don't think I would have published this had not several people repeatedly asked me to. The interest in Winter has generally been very motivating and makes me very happy.
Daniel Shawul wrote: Sat Nov 09, 2019 7:15 pmWhere can i download your network?
https://github.com/rosenthj/Winter/blob ... _weights.h
Daniel Shawul wrote: Sat Nov 09, 2019 7:15 pmIf it just weights that you have, could you produce a frozen tensorflow protobuf graph (pb), or
since you are using keras the HDF5 graph works too.
Unfortunately I don't have the weights in a clean format. This is a side effect of rather rapid internal development. I don't really expect to need old nets, as they become obsolete when I change the input features. Furthermore, I have all the old nets only in a format that Winter understands, as the net is hardcoded...

Rather high up on my list of priorities is adding support for saving the net weights in an open format in my training script. If the nets get much larger, then I think it makes sense to have the nets loaded externally, especially as I would eventually like to support networks from other engines in the long run.
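One minimal way to get weights out of a training script in an open format is a NumPy `.npz` archive, which any framework can read back without the training code. This is just a sketch of the idea; the layer layout and function names here are hypothetical, not Winter's format.

```python
import numpy as np

def save_weights_npz(layers, path):
    """Dump a list of (weight matrix, bias vector) pairs to one .npz file.

    The keys (layer0_w, layer0_b, ...) are an illustrative convention;
    any consumer just needs to agree on the naming scheme.
    """
    arrays = {}
    for i, (w, b) in enumerate(layers):
        arrays[f"layer{i}_w"] = np.asarray(w)
        arrays[f"layer{i}_b"] = np.asarray(b)
    np.savez(path, **arrays)

def load_weights_npz(path):
    """Inverse of save_weights_npz: rebuild the list of (w, b) pairs."""
    data = np.load(path)
    n_layers = len(data.files) // 2  # two arrays (w, b) per layer
    return [(data[f"layer{i}_w"], data[f"layer{i}_b"]) for i in range(n_layers)]
```

An engine-side loader can then parse the same archive, or a converter can emit the hardcoded C++ header from it.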
Daniel Shawul wrote: Sat Nov 09, 2019 7:15 pmI should be able to use it in Scorpio after that once I figure out your input and outputs. I want to see how it does against my 2x32 net.
I am also quite curious; unfortunately it isn't that simple for now. Winter's NN is a set of handcrafted features fed into a regular fully connected NN, not a CNN. This is a design choice: I don't have access to a GPU, so I am trying to keep things very small. Winter's net is currently #inputs x 16 x 16 x 2. #inputs is roughly 500 for the purpose of the training dataset, but in practice it is much smaller, as most features are sparse. I think in practice it is more like 30x16x16x2.
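The "500 inputs but more like 30 in practice" point can be made concrete: with mostly-zero binary inputs, the first layer of a fully connected net reduces to summing one weight row per *active* feature, so cost scales with the active features rather than the nominal input count. A sketch (not Winter's actual code):

```python
import numpy as np

def dense_forward(x, weights, biases):
    """Plain dense forward pass: ReLU on hidden layers, linear output."""
    h = x
    for i, (w, b) in enumerate(zip(weights, biases)):
        h = h @ w + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

def sparse_first_layer(active_indices, w0, b0):
    """First-layer output using only the active (value 1) features.

    Equivalent to x @ w0 + b0 when x is binary with ones at
    active_indices, but touches ~30 rows instead of ~500.
    """
    return w0[active_indices].sum(axis=0) + b0
```

After the first layer the net is tiny (16 x 16 x 2) anyway, so the sparse first-layer trick is where essentially all the savings come from.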

I will likely add support for CNNs soon, as I have some ideas in that regard. Support for GPUs, on the other hand, is not really on the horizon, as I don't have access to a GPU and I am unsure if a GPU makes sense without a codebase that supports batching. Perhaps I will make a zombified experimental Winter, taking GPU-based code from Leela and seeing if I can run it on Google Colab or something of the sort, but I am skeptical about that being anything more than a very experimental branch until I have a GPU and time. So maybe 5 years from now? (*cries in PhD student)
Daniel Shawul wrote: Sat Nov 09, 2019 7:15 pmI started out with a no-policy net too but figured later most of the knowledge is packed in the policy network. The implicit policy was a qsearch() over all the moves.
I very much agree that policy is an extremely important strength of NN-based engines. I would go as far as to argue that NN engines' greatest strengths are policy and pawn structure evaluation. Unfortunately, policy is also something which is inherently heavyweight. I am hoping that by keeping my network very lightweight I can fully utilize classical heuristics such as the history heuristic. Since I am doing a pure AB search, I only need the ordering of moves and not the actual move priors. I want to try using a NN for move ordering, and at higher-depth nodes I want to try a policy network.

Something that I think could be very good is having heuristics such as move history be an input to a policy network. Since NN-based engines are mostly MCTS-based, I don't think anyone has tried this yet, but I believe such dynamic heuristics are very powerful. I think having a NN interpret them and put moves in context with one another is potentially an extremely powerful mechanic.
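The idea of feeding dynamic search heuristics into a policy net could look something like the following speculative sketch: each move gets a feature vector of static move features concatenated with a normalized history-heuristic score, a tiny linear head produces one logit per move, and a softmax over the legal moves yields the policy. All shapes and feature choices here are illustrative, not an existing design.

```python
import numpy as np

def move_policy(move_features, history_scores, w, b):
    """Per-move policy from static features plus a dynamic heuristic.

    move_features:  (n_moves, n_static) static per-move features
    history_scores: (n_moves,) history-heuristic values scaled to [0, 1]
    w, b:           (n_static + 1,) weights and scalar bias of a linear head
    Returns a probability distribution over the moves.
    """
    # Append the dynamic heuristic as one extra input column per move.
    x = np.concatenate([move_features, history_scores[:, None]], axis=1)
    logits = x @ w + b
    logits -= logits.max()  # numerical stability before exponentiating
    p = np.exp(logits)
    return p / p.sum()
```

For pure move ordering in an AB search, only the argsort of the logits would be needed; the softmax matters once actual priors are consumed, e.g. by an MCTS-style search.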
-Jonathan
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Winter NN Training Script

Post by Daniel Shawul »

jorose wrote: Rather high up on my list of priorities is to add saving the net weights in an open format in my training script. If the nets get much larger then I think it makes sense to have the nets be loaded externally. Especially as I would eventually like to support networks from other engines in the long run.
Keras makes life easier: a model.save() should do it, saving the weights together with the network architecture. One thing I didn't like about the Lc0 approach is that the weights and the network architecture are dealt with separately, so you have this unnecessary code to build the network on the fly and then put the weights into it. It makes things rather cumbersome for sharing networks and for using existing optimized inference libraries.
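To illustrate the point: Keras' `model.save()` writes architecture and weights into one self-describing file, so the recipient can `load_model()` it without rebuilding the graph by hand. The tiny architecture below is purely illustrative (not Winter's net).

```python
import numpy as np
import tensorflow as tf

def build_tiny_net(n_inputs=30):
    # Illustrative fully connected net in the spirit of "inputs x 16 x 16 x 2".
    inputs = tf.keras.Input(shape=(n_inputs,))
    h = tf.keras.layers.Dense(16, activation="relu")(inputs)
    h = tf.keras.layers.Dense(16, activation="relu")(h)
    outputs = tf.keras.layers.Dense(2)(h)
    return tf.keras.Model(inputs, outputs)

model = build_tiny_net()
model.save("tiny_net.h5")  # one HDF5 file: architecture + weights together
restored = tf.keras.models.load_model("tiny_net.h5")
```

Contrast this with shipping a bare weight blob, where every consumer must reimplement the architecture before it can load anything.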
jorose wrote: I am also quite curious, unfortunately it isn't that simple for now. Winter's NN is a set of handcrafted features fed into a regular fully connected NN and not a CNN. This is a design choice as I don't have access to a GPU so I am trying to keep things very small. Winter's net is currently #inputs x 16 x 16 x 2. #inputs is roughly 500 for the purpose of the training dataset, but in practice it is much smaller as most features are sparse. I think in practice it is more like 30x16x16x2.
I think one thing Giraffe missed compared to A0 was that it was using a small net (there is a limit to what you can learn with one) and that it had no GPU support. Sure, it did demonstrate that you can match or better a hand-written eval with a small net -- it even beat Stockfish's eval on a positional test suite. Also, the policy network was a big gain, especially in Go, so that was missing too. Giraffe may have done something similar to facilitate move selection with its probability-based search though...
jorose wrote: I very much agree that policy is an extremely important strength of NN based engines. I would go as far as to argue NN engines' greatest strengths are policy and pawn structure evaluation. Unfortunately policy is also something which is inherently heavyweight. I am hoping that by having my network be very lightweight I can fully utilize classical heuristics such as history heuristics. Since I am doing a pure AB search I only need the ordering of moves and not the actual move priors. I want to try using NN for move ordering and at higher depth nodes I want to try a policy network.

jorose wrote: Something that I think could be very good is having heuristics such as move history be an input for a policy network. Due to the fact that NN based engines are mostly MCTS based, I don't think anyone has tried this yet, but I believe such dynamic heuristics are very powerful. I think having a NN interpret them and be able to put moves in context to one another is potentially an extremely powerful mechanic.
Good luck with your experiments. There are a ton of ideas out there, but the heavy infrastructure needed for NNs is a huge impediment, I think.
Things are getting easier though compared to a few years ago...
giovanni
Posts: 142
Joined: Wed Jul 08, 2015 12:30 pm

Re: Winter NN Training Script

Post by giovanni »

Nice, thanks for sharing. I just wonder which PGN file you used. Did you generate it yourself? In order to replicate your data as closely as possible, would it be possible to get it?
Thanks again.