Ozymandias wrote: ↑Mon May 14, 2018 11:10 am I guess they plan to offer free upgrades, at least for this new engine? Otherwise they'll be selling an early prototype that will soon become outdated.
lkaufman wrote: ↑Mon May 14, 2018 5:54 pm We plan to give ChessBase at least one free upgrade for Komodo MCTS. I expect they would offer it to their customers free, but it is their decision, not ours.
That clarifies things.
Komodo 12 and MCTS
Moderator: Ras
-
- Posts: 1537
- Joined: Sun Oct 25, 2009 2:30 am
Re: Komodo 12 and MCTS
-
- Posts: 18
- Joined: Mon Jun 19, 2017 4:37 pm
Re: Komodo 12 and MCTS
There is no point buying Komodo from ChessBase. I won't be fooled again.
Not once or twice... just shortly after the support period from Komodo expires, a brand new version pops up... and guess what? Pay for it again... 60 bucks.
Business, right?
-
- Posts: 1260
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Komodo 12 and MCTS
There are some papers about Randomized Best-First Minimax search in chess. If you understand how MCTS/UCT works, you'll see the similarities.
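For readers who haven't seen the selection rule, here is a minimal UCT sketch in Python. The node layout and the exploration constant `c` are illustrative only, not taken from any of the engines discussed in this thread:

```python
import math

class Node:
    """Bare-bones tree node; only the fields UCT needs."""
    def __init__(self):
        self.children = []
        self.visits = 0       # N: times this node was visited
        self.value_sum = 0.0  # W: sum of backed-up results in [0, 1]

def uct_select(parent, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus.
    The first term exploits known-good moves; the second explores
    rarely-visited ones and shrinks as visits accumulate."""
    def score(child):
        if child.visits == 0:
            return math.inf  # always expand unvisited children first
        exploit = child.value_sum / child.visits
        explore = c * math.sqrt(math.log(parent.visits) / child.visits)
        return exploit + explore
    return max(parent.children, key=score)
```

The best-first flavor is visible here: the tree repeatedly descends through the highest-scoring child, which is exactly the family resemblance to Randomized Best-First Minimax.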
-
- Posts: 1260
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Komodo 12 and MCTS
So the search loses about 330 Elo? I am not sure how terrible that is; it's hard to compare given the relative development effort put into each approach.
But winning back 330 Elo is not going to be easy after the initial quick gains.
-
- Posts: 1260
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Komodo 12 and MCTS
You don't average mates or draws-by-rule, I presume

CMCanavessi wrote: ↑Mon May 14, 2018 3:13 pm ...they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
Well, yes and no. There are reasons why people tried MCTS with playouts, and later neural networks, over alpha-beta. You can't entirely decouple those concepts. If it were so easy, they would not lose 330 Elo.
But that doesn't mean mixing them up won't work. A neural network evaluation in an alpha-beta searcher certainly works fine. As for the other way around, that's up to the Komodo guys to prove, right?
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Komodo 12 and MCTS
lkaufman wrote: ↑Mon May 14, 2018 5:51 pm The eval for Komodo MCTS is different than the eval for normal Komodo, but they are related. No NN planned as of now, but that doesn't mean we won't try it. As I wrote in my article for New In Chess (about AlphaZero), I was not convinced that their success was due to NN, more likely due to MCTS and hardware. I'm doubtful that NN can do as well as five centuries of accumulated human knowledge about chess.
You are absolutely wrong about the NN being unable to capture five centuries of human chess knowledge. Though I sympathize with the fact that your job on Komodo might be taken over by computers now, I have no doubt the NN eval can be better than any hand-written evaluation, and in fact LCZero has already proved it IMO -- Kai estimates LCZero is 3300 Elo positional but only 2000 Elo tactical.
lkaufman wrote: GPU is not useful for MCTS, only for NN. If we find a way to make good use of GPU, we will do so.
The only problem is that NN evaluation is so slow that it needs massive hardware acceleration. I have an NN Scorpio running with TensorFlow now. Even using a single-neuron NN (all inputs are weighted and summed, just like in a standard evaluation), the nps goes down by a factor of 50x. This is because of the massive overhead of 20 microseconds per session evaluation call in TF, which brought the nps down from 1.2 Mnps to just 30 knps. Then when I used a 1-block x 64-filter resnet it went down to 5 knps, etc. In terms of nps, LCZero is doing pretty well with its big 15x192 network.
The NN evaluation is going to be so slow that the only feasible search becomes the highly selective MCTS, not full-width alpha-beta. The A0 guys had no choice in that regard. So when you say that MCTS is the key to their success, it actually is not; there are better algorithms for chess, but most are not feasible with such a slow evaluation, even after acceleration with a GPU.
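The nps collapse described above follows directly from a fixed per-call overhead; here is a back-of-the-envelope sketch (the helper function is mine, the numbers are from the post):

```python
def effective_nps(base_nps, overhead_us_per_eval):
    """Nodes per second once a fixed per-evaluation overhead is added:
    time per node = search time per node + overhead per eval call."""
    base_time = 1.0 / base_nps              # seconds per node, search alone
    overhead = overhead_us_per_eval * 1e-6  # microseconds -> seconds
    return 1.0 / (base_time + overhead)

# A ~1.2 Mnps engine paying a ~20 us TensorFlow session call per eval:
print(round(effective_nps(1_200_000, 20)))  # 48000
```

The model predicts roughly 48 knps; the 30 knps actually observed suggests somewhat more overhead per call in practice, but either way the search speed is dictated almost entirely by the evaluation call, not by the engine.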
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Komodo 12 and MCTS
mjlef wrote: ↑Mon May 14, 2018 5:13 pm I and Larry have read about MCTS for years, following the progress of the Go programs (Don Dailey had a Go program using Monte Carlo). But I am not claiming anything like "years of research". Although the basic scheme we are using has been discussed between us for several years, we have only actively worked on it for the last month or two. We tried several variants and tuned the initial method, which was only giving us Elos in the mid 2000s. But we found ways to improve it, with some changes giving us 100+ Elo gains. The Exploit/Explore ratio is particularly important.
This is all very basic MCTS stuff. I have a dynamic exploration coefficient that decreases as time runs out. Using a lower exploration coefficient makes your search very selective and often helps tactical strength.
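A dynamic exploration coefficient of the kind described above might look something like this; the linear schedule is an illustrative guess, not the actual one used in Scorpio or Komodo:

```python
def exploration_coeff(c_start, c_end, time_used, time_budget):
    """Shrink the UCT exploration coefficient linearly as the time
    budget is consumed, so the search grows more exploitative (and
    hence more selective) toward the end of its thinking time."""
    frac = min(time_used / time_budget, 1.0)
    return c_start + (c_end - c_start) * frac

print(exploration_coeff(2.0, 0.5, 0.0, 10.0))   # 2.0 -- explore widely early
print(exploration_coeff(2.0, 0.5, 10.0, 10.0))  # 0.5 -- commit to the best line late
```

The coefficient feeds straight into the exploration term of the UCT formula, so lowering it narrows the tree onto the current best move.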
mjlef wrote: As for search, you can still use aspiration on a search even if you do not have especially useful bounds.
No, you can't. Plain MCTS converges to a minimax tree, not an alpha-beta tree, because it has no concept of bounds at all. Your MCTS search is very basic, the only modification you mentioned being that the formula is slightly different. That raises the question of how you mixed it with alpha-beta. You need some sort of best-first alpha-beta searcher (I use alpha-beta rollouts MCTS) to give you bounds, period.
mjlef wrote: We also found tuning search parameters to be a big help. As for elo, although we have followed your recent postings, what we are doing is similar, but with a lot of differences from what you have posted, so it is not surprising we get different results. We found sticking with our initial idea to be pretty good. But we have more to learn.
Sure, use a qsearch() like I do. That is better than hoping for an NN to solve tactics. But for shallow tactics like 4-ply or 8-ply, not having proper bounds for those shallow searches becomes a problem. As I mentioned, the Stockfish MCTS used an 8-ply alpha-beta search at the leaves but didn't do that well, because one has no idea of the bounds at the leaves other than (-Mate, Mate).
All MCTS schemes seem to expand the tree a lot more slowly than the nps a typical chess engine gets. The neural network engines use an evaluation capable of predicting things like piece swap-offs. But there are other ways of getting that.
mjlef wrote: As for "commercial stunt", that is simply untrue. Before passing judgement, how about taking a look at how the program behaves? Releasing an MCTS mode that is hundreds of Elo weaker than what a program gets with standard search is not exactly a headline grabber. But we find its moves/search interesting and useful.
Time will tell; I won't be surprised if it actually lost 500-600 Elo with the kind of vanilla MCTS search you described.
-
- Posts: 1339
- Joined: Fri Nov 02, 2012 9:43 am
- Location: New Delhi, India
Re: Komodo 12 and MCTS
CMCanavessi wrote: ↑Mon May 14, 2018 3:13 pm No, you're completely wrong. The only reason why Leela and A0 benefit from GPU (or TPU) is because they use a neural network, which Komodo doesn't. MCTS has nothing to do with it.
MCTS or Neural Network, I couldn't care less; if Komodo CAN'T use the GPU (or TPUs), it's simply not up to snuff, and that's all there is to it.
i7 5960X @ 4.1 Ghz, 64 GB G.Skill RipJaws RAM, Twin Asus ROG Strix OC 11 GB Geforce 2080 Tis
-
- Posts: 1494
- Joined: Thu Mar 30, 2006 2:08 pm
Re: Komodo 12 and MCTS
Gian-Carlo Pascutto wrote: ↑Mon May 14, 2018 7:02 pm Yes, MCTS does average draws and mates. Draws have a 0.5 chance of winning, giving mate a 1.0 chance, and being mated a 0.0 chance.
A neural network is used in AlphaZero and Leela to predict winning chances. This is used instead of playouts. To be effective, the network has to implicitly estimate how the rest of the game would turn out, and the evaluations and searches in regular chess programs are trying to do the same thing. We think that could be quite powerful if the right mix of tuned search and eval is used. Time will tell, but the results are encouraging.
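The averaging Pascutto describes is just an arithmetic mean over backed-up results, with terminal positions contributing ordinary values; a minimal sketch (the node layout is assumed, not Leela's actual code):

```python
class Node:
    def __init__(self):
        self.visits = 0
        self.value_sum = 0.0  # running sum; mean = value_sum / visits

    def mean(self):
        return self.value_sum / self.visits

def backup(path, leaf_value):
    """Propagate a result in [0, 1] from the deepest node back to the
    root, flipping perspective each ply. A mate scores 1.0 (0.0 from
    the loser's side) and a draw-by-rule 0.5 -- they are averaged into
    the node statistics exactly like any other evaluation."""
    value = leaf_value
    for node in reversed(path):   # deepest node first, root last
        node.visits += 1
        node.value_sum += value
        value = 1.0 - value       # opponent's point of view one ply up

root, child = Node(), Node()
backup([root, child], 1.0)  # a mate found below `child`
backup([root, child], 0.5)  # a draw found below `child`
# child now averages (1.0 + 0.5) / 2 = 0.75; root sees 1 - that per visit
```

So there is no special case for mates or rule draws at backup time; their finality only matters in that such nodes are never expanded further.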
-
- Posts: 6213
- Joined: Sun Jan 10, 2010 6:15 am
- Location: Maryland USA
- Full name: Larry Kaufman
Re: Komodo 12 and MCTS
shrapnel wrote: ↑Mon May 14, 2018 7:51 pm MCTS or Neural Network, I couldn't care less; if Komodo CAN'T use the GPU (or TPUs), it's simply not up to snuff, and that's all there is to it.
If it plays better on your hardware without using GPU than any other MCTS engine does with GPU, what is the problem? Some algorithms like CPUs better, some like GPUs better.
Komodo rules!