I am open sourcing my (so far) private UCI engine Winter. Before getting into any further details I would like to thank everyone on this forum and all the contributors to the Chess Programming Wiki, as you have greatly aided me in my little project. I would further like to thank the authors of open source chess projects, especially the authors of Stockfish (who are at fault for me loving templates), the authors of cutechess, and HGM, for his innumerable contributions to the chess world. I would also like to thank Jonas Kuratli and Jonathan Maurer, whom I will mention again in a paragraph about Winter's origins.
I develop and play with Winter exclusively on Linux. If you are on Linux you can clone the git repo and call "make" or "make no_bmi" to build the engine. Keep in mind that the program expects to find the params and search_params directories, which contain the evaluation and search parameters respectively.
If you are on Windows or OSX, I assume compilation shouldn't be too hard, as the source does not rely on any libraries aside from the STL, which is used extensively (including std::thread in order to poll for input). However, you may need to do a bit more work to get things running, and I really am not an expert.
General Features
Rating Guesstimate: 2500 CCRL (a bit less wouldn't surprise me)
Exclusively relies on STL
Nice print command =)
Perft command
UCI protocol
Single threaded search, so it is quite stable
Search Features
Alpha Beta search with PVS
Fail-hard framework (though not completely strict about it)
Move ordering is based on the linear part of a Logistic Regression classifier
The classifier is trained via temporal difference learning to predict whether a move will fail high (return beta)
The classifier considers TT moves, killers, move types, move sources and destinations, capture targets, SEE for interesting moves, the square of the last moved piece, checks, and changes between forcing and non-forcing moves (a capture is more likely after another capture)
Null Move Reductions
Static Null Move Pruning
Quiescence Search
Static Exchange Evaluation
Late Move Reductions
Futility Pruning
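To illustrate the fail-hard PVS scheme from the list above, here is a minimal sketch over a toy game tree. This is not Winter's actual code; `Node` and `Pvs` are names made up for illustration, and a real engine would of course generate moves instead of walking a prebuilt tree.

```cpp
#include <vector>

// Toy game tree: leaves carry a score from the side-to-move's perspective.
struct Node {
  int value;                   // used only at leaves
  std::vector<Node> children;  // empty => leaf
};

// Fail-hard negamax with principal variation search (PVS): the first child
// is searched with the full window, later children with a null window,
// re-searching only if the null-window probe lands inside (alpha, beta).
int Pvs(const Node& node, int alpha, int beta) {
  if (node.children.empty()) return node.value;
  bool first = true;
  for (const Node& child : node.children) {
    int score;
    if (first) {
      score = -Pvs(child, -beta, -alpha);
      first = false;
    } else {
      score = -Pvs(child, -alpha - 1, -alpha);  // null-window probe
      if (score > alpha && score < beta)
        score = -Pvs(child, -beta, -alpha);     // re-search, full window
    }
    if (score >= beta) return beta;  // fail-hard: a cutoff returns beta
    if (score > alpha) alpha = score;
  }
  return alpha;  // fail-hard: a fail-low returns alpha
}
```

With a full starting window this returns the exact minimax value; with a narrower window it returns the bound, which is the "fail hard" behavior mentioned above.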
Evaluation Features
Non standard approach
Relies on a mixture model
Assumes positions encountered in search come from some set of k Gaussians
Mixture model is trained via EM algorithm either on database games or positions sampled from search
For each Gaussian a separate evaluation function is trained. When the evaluation function is called the relative probability a position stems from each Gaussian is estimated, the evaluation functions are computed and the final score is returned as the weighted average.
Parameter weights are trained via a mixture of reinforcement (temporal difference) learning and supervised learning
Pure supervised learning leads to a significantly weaker engine (around 50 Elo IIRC... it's been a while since I last tested this)
Pure temporal difference learning wouldn't work the way I have it implemented at the moment, but I imagine it would be beneficial
We are minimizing the cross entropy loss of a Logistic Regression model for each of the k Gaussians. I also tried an Ordered Logit Model with equivalent strength.
Training converges very fast, as we have a linear model at its heart
Parameters focus more on piece play and activity and not so much on the endgame and pawn structures. This leads to Winter loving very active play, and speculative material sacrifices are extremely common.
Modular code structure makes it very easy to add new parameters
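The weighted-average idea above can be sketched as follows. This is a toy, not Winter's implementation: for illustration it assumes a single scalar feature ("phase") per Gaussian and one linear evaluation weight per component; all names (`Component`, `MixtureEval`, etc.) are made up.

```cpp
#include <array>
#include <cmath>

// Toy mixture-model evaluation: two Gaussian components over one scalar
// feature, each with its own (toy) linear evaluation weight.
struct Component {
  double mean, var, prior;  // Gaussian parameters and mixing weight
  double eval_weight;       // this component's linear eval weight
};

double GaussianPdf(double x, double mean, double var) {
  const double kPi = 3.14159265358979323846;
  double d = x - mean;
  return std::exp(-d * d / (2.0 * var)) / std::sqrt(2.0 * kPi * var);
}

// Estimate the relative probability (responsibility) that the position
// stems from each Gaussian, evaluate each component's linear function,
// and return the responsibility-weighted average.
double MixtureEval(double phase, double material,
                   const std::array<Component, 2>& comps) {
  std::array<double, 2> resp;
  double total = 0.0;
  for (int i = 0; i < 2; ++i) {
    resp[i] = comps[i].prior * GaussianPdf(phase, comps[i].mean, comps[i].var);
    total += resp[i];
  }
  double score = 0.0;
  for (int i = 0; i < 2; ++i)
    score += (resp[i] / total) * (comps[i].eval_weight * material);
  return score;
}
```

A position lying close to one component's mean gets scored almost entirely by that component's evaluation; positions in between get a smooth blend.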
Engine Origin: Winter started its life as a group project in a university course on parallel computing. Despite this, Winter is not a parallel engine! Jonas Kuratli, Jonathan Maurer and I set out to write a barebones chess engine that would always return a result equivalent to a minimax search to the same depth. Our goal was then to parallelize this engine and directly compare time-to-depth results. At the time we had a rather natural way of splitting the work, since I was significantly more interested in chess and had previous chess programming experience. I wrote the majority of the codebase, while my friends spent much more time on the (more difficult) parallel programming and testing portions of the codebase. While one of the first things I did after forking the engine was to remove the parallel portion of the code, I am sure small parts of their work can still be found in different parts of the code. Specifically, I can recall Jonas writing the original version of the SetBoardFen function and Jonathan working on the TT code.
Awesome! Congrats on your first release! Too bad there are no Windows binaries. If you ever release them, or if someone else compiles the engine, I'll gladly put it in my tournament for Season 2. With a 2500 rating it will fight comfortably for the high spots in the entry league.
Edit: I see that you haven't given a version number yet, is it still a development version?
The release is kind of an arbitrary point in time with respect to development. I feel that in order to call it a 1.0 release I would have to be able to send a copy to my chess playing friends, who are all reliant on Windows or OSX releases. Fortunately one friend was able to compile it without any issues on OSX, and the probability has grown that I will have a Windows binary very soon as well, so I am holding off on the Github release feature for now.
As a small update, a friend of mine (thanks again Julian Croci!) compiled Winter without issues on OSX. I added an OSX binary in the source directory for now, until I use the Github release feature. However, for OSX, if you have a compiler installed I recommend just cloning the repository and calling "make" or "make no_bmi" to generate a binary for your own system.
When I saw the output of "go infinite" on my friend's system it looked reasonable, however slightly different than when I run it on my own. I am still investigating the cause, and it would be cool if someone with a Mac could confirm that it is still playing at a level which isn't completely unexpected (e.g. it should be handily crushing any sub-2000 human or engine in most games).
Finally, I might have a Windows compile soon as well in which case I would tag this release in Github as v1.0 Beta =)
While it did originate as a university project, it had long ceased to be one by the time I added the mixture model and EM algorithms. That being said, here are some links you might find useful.
While implementing the EM algorithm I myself relied on these notes from the University of California Irvine.
If you are unfamiliar with the K-means algorithm, I recommend reading up on it first before getting into EM. This link from Stanford explains the K-means algorithm. I am not content with how Wikipedia describes GMMs, as I feel it lacks the visual representation that would make them much easier to grasp. Instead I recommend reading some of the slides from this set of slides from an MIT lecture. Note that I only briefly skimmed both of these links just now, as I learned about these algorithms well before working on Winter, and I don't think the slides presented in those lectures are actually publicly available.
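As a quick illustration of the assignment/update loop those K-means references describe, here is a minimal one-dimensional sketch. It is not taken from Winter; `KMeans1D` is a name made up for this example.

```cpp
#include <cmath>
#include <vector>

// One-dimensional K-means: alternate an assignment step (each point goes
// to its nearest center) and an update step (each center moves to the
// mean of its assigned points). EM for a GMM generalizes this by making
// the assignments soft (probabilistic) and also updating variances.
std::vector<double> KMeans1D(const std::vector<double>& data,
                             std::vector<double> centers, int iterations) {
  for (int it = 0; it < iterations; ++it) {
    std::vector<double> sum(centers.size(), 0.0);
    std::vector<int> count(centers.size(), 0);
    for (double x : data) {
      std::size_t best = 0;  // assignment step: nearest center wins
      for (std::size_t c = 1; c < centers.size(); ++c)
        if (std::fabs(x - centers[c]) < std::fabs(x - centers[best])) best = c;
      sum[best] += x;
      ++count[best];
    }
    for (std::size_t c = 0; c < centers.size(); ++c)  // update step
      if (count[c] > 0) centers[c] = sum[c] / count[c];
  }
  return centers;
}
```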
I added a Windows version today. It would be cool if some of you could confirm it is indeed working.
There are some caveats under Windows, unfortunately:
In my brief testing I was getting at least 50% more NPS on Linux compared to Windows. A bit of a performance penalty was expected, but losing that much feels extreme. Perhaps someone else has a suggestion?
The Windows version is hilariously huge, just comparing the zip file for OSX (138 KB) with the Windows one (4.8 MB) it's fairly extreme. I imagine this is due to compiling with -static, but I am not sure how to circumvent this. I suppose this point and the previous one are linked.
It seems unicode characters are still too state of the art for out-of-the-box Windows support, so unfortunately my print commands are quite useless. I suspect Windows users won't spend much time in a console however, so I don't intend to invest any energy on this in the near future.
Finally a point I only just realized was that my perft_test command doesn't work on my Windows machine. I am not certain why this doesn't work, but as long as everything else works I am fine with that, as it is mostly just a feature for people compiling the engine themselves.
jorose wrote:I added a Windows version today. It would be cool if some of you could confirm it is indeed working.
Well, some people still use cmd on Win sometimes ;-)
Here is the raw output from initial position with default settings for 'go depth 15' on my old quadcore with Win7-64 Ultimate:
(the result in this run equals ~ 378.76 kn/s)