Komodo 12.1

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

Uri Blass
Posts: 8729
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: Komodo 12.1

Post by Uri Blass » Tue Jun 05, 2018 5:15 am

Branko Radovanovic wrote:
Mon Jun 04, 2018 10:09 am
I just have to ask: are there perhaps plans to enter Komodo MCTS into TCEC 13? My understanding is that TCEC rules would actually permit entering both classic and MCTS versions of Komodo, as the two are fundamentally different, i.e. the MCTS version can not be considered a "derivative" of classic Komodo. It is already very competitive, which is remarkable, so I would really like to see it in action - and I'm certainly not alone.
My understanding is that Komodo MCTS is a derivative of classic Komodo.

I do not understand how people can think differently when probably more than 90% of the chess code is the same.
Note that I have no problem if tournaments allow derivatives to compete.

Werewolf
Posts: 1214
Joined: Thu Sep 18, 2008 8:24 pm

Re: Komodo 12.1

Post by Werewolf » Tue Jun 05, 2018 8:31 am

Ron Langeveld wrote:
Mon Jun 04, 2018 10:51 pm
Werewolf wrote:
Mon Jun 04, 2018 9:38 pm
Oh. I thought one of the great hopes of this approach was its potential on massive hardware. If it doesn’t scale better than alpha beta >12 cores...the old way will always win as 12 cores will be increasingly common over the next few years.
Just a few posts back Larry did write "I expect we'll find a way to use more than 12 threads before too long."

I suggest you don't develop an accidental blind spot to free info from developers, who in general tend to be reluctant with info on future versions. You may have misread it, but from where I am sitting I applaud the Komodo team for listening to customers and providing a significant enhancement within a couple of weeks. If you are one of the few Komodo customers whose core count far exceeds the 12-core boundary for now, then posts like this won't get any pity from me ;)

Ron
You're missing the point entirely - this has nothing to do with my hardware, but with the future of the project and TCEC.

Jesse Gersenson
Posts: 575
Joined: Sat Aug 20, 2011 7:43 am
Contact:

Re: Komodo 12.1

Post by Jesse Gersenson » Tue Jun 05, 2018 10:18 am

A month ago, MCTS was a single-core program. Now it runs well on 12 cores. They are making progress at great speed.
If it doesn’t scale better than alpha beta >12 cores...the old way will always win as 12 cores will be increasingly common over the next few years.
That's post-truth logic. In the old way, 12 is not greater than 12.

Werewolf
Posts: 1214
Joined: Thu Sep 18, 2008 8:24 pm

Re: Komodo 12.1

Post by Werewolf » Tue Jun 05, 2018 11:10 am

The issue is whether scaling of MCTS has inherent properties - like alpha beta did for ages before Lazy - where we can predict with confidence what the efficiency will be on N threads and how hard it gets as N rises. If it turns out that at 12 it begins to drop - and that is inherent with this method (rather than a by-product of a new idea the Komodo team are trying) - it makes a huge difference.

I'm interested in knowing if, long-term, we can do better than N^0.8 etc.
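Werewolf's N^0.8 figure can be made concrete with a toy model. The sketch below assumes, purely for illustration, that effective search speed grows as threads ** 0.8 - the exponent is the one from the post, not a measured property of any engine - and shows how quickly such a power law falls behind ideal linear scaling:

```python
# Toy model only: compares ideal linear speedup with the N^0.8
# power law mentioned in the post. The exponent 0.8 is taken from
# the discussion, not measured from any engine.

def effective_speedup(threads: int, exponent: float = 0.8) -> float:
    """Model effective search speedup as threads ** exponent."""
    return threads ** exponent

for n in (1, 4, 12, 64):
    print(f"{n:3d} threads: ideal {n:5.1f}x, modelled {effective_speedup(n):5.2f}x")
```

At 12 threads the model already gives only about 7x, which is why the exponent matters so much for large core counts.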

Branko Radovanovic
Posts: 64
Joined: Sat Sep 13, 2014 2:12 pm

Re: Komodo 12.1

Post by Branko Radovanovic » Tue Jun 05, 2018 1:32 pm

Uri Blass wrote:
Tue Jun 05, 2018 5:15 am
My understanding is that Komodo MCTS is a derivative of classic Komodo.

I do not understand how people can think differently when probably more than 90% of the chess code is the same.
Note that I have no problem if tournaments allow derivatives to compete.
It may be a "derivative" in a very broad sense of the word (it shares some of the code). However, when you consider the search, it's not just that no code is shared - there is probably zero conceptual commonality between the two, whereas all alpha-beta engines search in at least a conceptually similar way, even if the actual code is totally different. In that respect, Komodo MCTS is a ground-up rewrite in the most literal sense, and therefore not something one would typically describe as a "derivative". That alone makes it very interesting.

(My guess is that, in order to work well for chess, MCTS UCT will need to be - and, in Komodo's case, almost certainly is - enhanced with a number of tricks, possibly transplanted from alpha-beta engines, but that's another matter...)
Last edited by Branko Radovanovic on Tue Jun 05, 2018 1:46 pm, edited 1 time in total.
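Branko's "MCTS UCT" refers to a standard, published formula. Komodo's actual selection rule is not public; as a hedged illustration only, here is the textbook UCB1 child-selection score that plain UCT uses, with hypothetical visit counts:

```python
import math

# A minimal sketch of UCT child selection (UCB1), the textbook core of
# MCTS. Komodo's real selection rule is not published; this only shows
# the standard formula the post refers to. All numbers are hypothetical.

def ucb1(child_value_sum: float, child_visits: int,
         parent_visits: int, c: float = 1.414) -> float:
    """UCB1 score: average result plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    exploitation = child_value_sum / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

# Pick the child maximizing UCB1 for three hypothetical root moves:
children = [(6.0, 10), (3.0, 4), (0.0, 0)]  # (value_sum, visits)
parent_visits = 14
best = max(range(len(children)),
           key=lambda i: ucb1(children[i][0], children[i][1], parent_visits))
print(best)
```

Real engines differ in the constant c and the statistics used, but this structure - exploitation plus a bonus that shrinks with visits - is the part that alpha-beta "tricks" would be grafted onto.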

Eelco de Groot
Posts: 4210
Joined: Sun Mar 12, 2006 1:40 am
Location: Groningen

Re: Komodo 12.1

Post by Eelco de Groot » Tue Jun 05, 2018 1:41 pm

by Werewolf » Tue Jun 05, 2018 11:10 am
The issue is whether scaling of MCTS has inherent properties - like alpha beta did for ages before Lazy - where we can predict with confidence what the efficiency will be on N threads and how hard it gets as N rises. If it turns out that at 12 it begins to drop - and that is inherent with this method (rather than a by-product of a new idea the Komodo team are trying) - it makes a huge difference.

I'm interested in knowing if, long-term, we can do better than N^0.8 etc.
Without any theory, I'd say it is some random walk. Is Monte Carlo with 12 threads the same as one thread searching twelve times longer? It probably scales well to at least 64 threads, because I believe that is what AlphaZero was using. Duplicate games with almost random moves will be rare in chess. But Stockfish with Lazy Eval, without a book, was still very predictable in the opening even on 64 threads. I am still suspicious of positions like

[Event "?"]
[Site "?"]
[Date "2018.06.05"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
[SetUp "1"]
[FEN "rn3r1k/pn1p1ppq/bpp4p/7P/4N1Q1/6P1/PP3PB1/R1B1R1K1 w - -"]

1. Bg5 f5 2. Qf4 hxg5 3. Nxg5 Qxh5 {4. g4!}*


rn3r1k/pn1p2p1/bpp5/5pNq/5Q2/6P1/PP3PB1/R3R1K1 w - -

(Kaissa still needs about depth 27 in the diagram position to find 4. g4 or 4. Bf3. And that is with Multi PV = 15 :?)

Engine: Kaissa III (512 MB)
by T. Romstad, M. Costalba, J. Kiiski, G. Linscott

33 201:24 +1.80 4.g4 Qg6 (28.957.594.395) 2396

33 201:24 +1.70 4.Bf3 Qg6 5.Kg2 Kg8 6.Rad1 Nc5 7.Rd6 (28.957.594.395) 2396

33 201:24 +0.13 4.Rad1 Kg8 5.Bf3 Qh6 6.b4 Bb5 7.Kg2 Nd8
8.Rd6 Ne6 9.Rdxe6 dxe6 10.Rxe6 g6
11.Qd6 Kg7 (28.957.594.395) 2396

33 201:24 0.00 4.Re7 Nc5 5.Bf3 Qg6 6.Kg2 Kg8 7.Rh1 Ne6
8.Qh4 Qh6 9.Qxh6 gxh6 10.Nxe6 dxe6
11.Rxh6 Bc4 12.b3 Bd5 13.Bxd5 exd5
14.Rg6+ Kh8 (28.957.594.395) 2396

33 201:24 -1.38 4.Rac1 Kg8 5.Bf3 Qh6 6.Kg2 Nd8 7.Re5 d6
8.Rxf5 Nd7 9.Rxf8+ Nxf8 10.Bxc6 Nfe6
11.Nxe6 Qxf4 12.gxf4 Nxc6 13.Nc7 Bb7
14.Nxa8 Bxa8 15.Kh3 Kf7 16.Kg3 Bb7
17.Kg4 (28.957.594.395) 2396

33 201:24 -1.52 4.Re5 Kg8 5.Bf3 Qg6 6.Kg2 Nd8 7.Rh1 Ne6 (28.957.594.395) 2396

33 201:24 -1.59 4.b4 Nd8 5.Bf3 Qg6 6.Re7 Ne6 7.Nxe6 dxe6
8.Re1 Bc8 9.Qh4+ Kg8 10.b5 a5
11.bxc6 Na6 12.Rd1 Qh6 13.Qxh6 gxh6
14.a4 Nc5 15.c7 (28.957.594.395) 2396

33 201:24 -2.52 4.a4 Kg8 (28.957.594.395) 2396

33 201:24 -2.75 4.b3 Kg8 5.Bf3 Qh6 6.Kg2 Nc5 7.Rh1 Ne6
8.Nxe6 Qxe6 9.Qh4 Qh6 10.Qxh6 gxh6
11.Rxh6 Bd3 12.Rah1 Be4 13.Bxe4 fxe4
14.R1h4 (28.957.594.395) 2396

33 201:24 -2.75 4.Bh1 Kg8 5.Bf3 Qh6 6.Kg2 Nc5 7.Rh1 Ne6
8.Nxe6 Qxe6 9.Qh4 Qh6 10.Qxh6 gxh6
11.Rxh6 (28.957.594.395) 2396

33 201:24 -2.86 4.Rab1 Nc5 5.Bf3 Qg6 6.Kg2 Kg8 7.Rh1 Ne6
8.Nxe6 Qxe6 9.Qh4 Qh6 10.Qb4 Qf6
11.Rh5 d5 12.Rbh1 Kf7 13.Qf4 Ke8
14.Rh7 (28.957.594.395) 2396

33 201:24 -2.89 4.a3 Kg8 5.Bf3 Qh6 6.Rad1 (28.957.594.395) 2396

33 201:24 -3.00 4.Be4 Kg8 (28.957.594.395) 2396

32 201:24 -3.05 4.Red1 Qg4 5.Qe3 (28.957.594.395) 2396

32 201:24 -3.99 4.Re3 Qg4 5.Qxg4 fxg4 (28.957.594.395) 2396

best move: g3-g4 time: 201:24.344 min n/s: 2.396.292 nodes: 28.957.594.395

Was AlphaZero trained on this? Probably not, but you really have to search in a very Monte Carlo-like way to find this... Of course we have been spoilt with all the Late Move Reductions since Tord and Fabien, just to name two famous programmers, popularized them somewhere before Rybka 1.0. Fritz 10 probably is much better than Stockfish.
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
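Eelco's question - whether Monte Carlo on 12 threads just equals one thread searching twelve times longer - is essentially what "root parallelization" gives you: independent searches from the same root, merged afterwards. The toy sketch below uses a stand-in "search" that credits random moves; real engines typically share one tree (with tricks like virtual loss), and Komodo's actual scheme is not public:

```python
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of root-parallel MCTS: each worker runs an independent
# "search" from the same root position, and visit counts are merged at
# the end. The worker below is a stand-in that credits random moves;
# it only illustrates the merge-at-the-root idea, not a real engine.

def one_worker(seed: int, playouts: int, moves: list[str]) -> Counter:
    """Pretend search: each playout randomly credits one root move."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(playouts):
        visits[rng.choice(moves)] += 1
    return visits

def root_parallel_search(moves, threads=4, playouts=1000):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = pool.map(one_worker, range(threads),
                           [playouts] * threads, [moves] * threads)
    total = Counter()
    for r in results:
        total.update(r)
    return total

tally = root_parallel_search(["g4", "Bf3", "Re7"], threads=4, playouts=1000)
print(tally.most_common(1)[0])
```

This scheme really is close to "one thread searching N times longer"; shared-tree schemes can do better or worse, which is why the scaling question above is open.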

mjlef
Posts: 1432
Joined: Thu Mar 30, 2006 12:08 pm
Contact:

Re: Komodo 12.1

Post by mjlef » Tue Jun 05, 2018 2:09 pm

Eelco de Groot wrote:
Tue Jun 05, 2018 1:41 pm
by Werewolf » Tue Jun 05, 2018 11:10 am
The issue is whether scaling of MCTS has inherent properties - like alpha beta did for ages before Lazy - where we can predict with confidence what the efficiency will be on N threads and how hard it gets as N rises. If it turns out that at 12 it begins to drop - and that is inherent with this method (rather than a by-product of a new idea the Komodo team are trying) - it makes a huge difference.

I'm interested in knowing if, long-term, we can do better than N^0.8 etc.
Without any theory, I'd say it is some random walk. Is Monte Carlo with 12 threads the same as one thread searching twelve times longer? It probably scales well to at least 64 threads, because I believe that is what AlphaZero was using. Duplicate games with almost random moves will be rare in chess. But Stockfish with Lazy Eval, without a book, was still very predictable in the opening even on 64 threads. I am still suspicious of positions like

[Event "?"]
[Site "?"]
[Date "2018.06.05"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
[SetUp "1"]
[FEN "rn3r1k/pn1p1ppq/bpp4p/7P/4N1Q1/6P1/PP3PB1/R1B1R1K1 w - -"]

1. Bg5 f5 2. Qf4 hxg5 3. Nxg5 Qxh5 {4. g4!}*


[Quoted diagram and engine output snipped - see the post above.]
Just for info, Komodo 12.1.1 on 2 threads finds g4 in under 3 seconds.

Eelco de Groot
Posts: 4210
Joined: Sun Mar 12, 2006 1:40 am
Location: Groningen

Re: Komodo 12.1

Post by Eelco de Groot » Tue Jun 05, 2018 2:11 pm

That is good to hear, Mark!
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan

mjlef
Posts: 1432
Joined: Thu Mar 30, 2006 12:08 pm
Contact:

Re: Komodo 12.1

Post by mjlef » Tue Jun 05, 2018 2:14 pm

Uri Blass wrote:
Tue Jun 05, 2018 5:15 am
Branko Radovanovic wrote:
Mon Jun 04, 2018 10:09 am
I just have to ask: are there perhaps plans to enter Komodo MCTS into TCEC 13? My understanding is that TCEC rules would actually permit entering both classic and MCTS versions of Komodo, as the two are fundamentally different, i.e. the MCTS version can not be considered a "derivative" of classic Komodo. It is already very competitive, which is remarkable, so I would really like to see it in action - and I'm certainly not alone.
My understanding is that Komodo MCTS is a derivative of classic Komodo.

I do not understand how people can think differently when probably more than 90% of the chess code is the same.
Note that I have no problem if tournaments allow derivatives to compete.
I completely agree. Over time we might find we need to change the eval and search a lot, and the two might begin to differ more. But they will certainly remain derivatives short of a massive total program rewrite. In the Leela case, I believe they use Stockfish code for board representation and move generation, but the "evaluation" is from the neural network. That is more different than Komodo is from Komodo MCTS, but much of the code is the same, so it is also a derivative. It would be nice to establish rules to determine when two programs are too similar to be in the same tournament. Is a vastly different evaluation enough?

Albert Silver
Posts: 2890
Joined: Wed Mar 08, 2006 8:57 pm
Location: Rio de Janeiro, Brazil

Re: Komodo 12.1

Post by Albert Silver » Tue Jun 05, 2018 2:45 pm

mjlef wrote:
Tue Jun 05, 2018 2:14 pm
Uri Blass wrote:
Tue Jun 05, 2018 5:15 am
Branko Radovanovic wrote:
Mon Jun 04, 2018 10:09 am
I just have to ask: are there perhaps plans to enter Komodo MCTS into TCEC 13? My understanding is that TCEC rules would actually permit entering both classic and MCTS versions of Komodo, as the two are fundamentally different, i.e. the MCTS version can not be considered a "derivative" of classic Komodo. It is already very competitive, which is remarkable, so I would really like to see it in action - and I'm certainly not alone.
My understanding is that Komodo MCTS is a derivative of classic Komodo.

I do not understand how people can think differently when probably more than 90% of the chess code is the same.
Note that I have no problem if tournaments allow derivatives to compete.
I completely agree. Over time we might find we need to change the eval and search a lot, and the two might begin to differ more. But they will certainly remain derivatives short of a massive total program rewrite. In the Leela case, I believe they use Stockfish code for board representation and move generation, but the "evaluation" is from the neural network. That is more different than Komodo is from Komodo MCTS, but much of the code is the same, so it is also a derivative. It would be nice to establish rules to determine when two programs are too similar to be in the same tournament. Is a vastly different evaluation enough?
You are saying that sharing the move generator and board representation makes it a clone? Even if the search and eval are so different as to be incomparable?
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
