Komodo MCTS

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

cma6
Posts: 219
Joined: Thu May 29, 2014 5:58 pm

Re: Komodo MCTS

Post by cma6 »

Thanks, Mark, for the tips.
"set Syzygy Probe Limit to whatever size Syzygy files you have." But that is confusing. I have the 5-man and 6-man Syzygy files, so would that mean Probe Limit = 6?
Master Om
Posts: 449
Joined: Wed Nov 24, 2010 10:57 am
Location: INDIA

Re: Komodo MCTS

Post by Master Om »

Laskos wrote: Fri Jun 07, 2019 5:43 pm
Master Om wrote: Fri Jun 07, 2019 5:22 pm So far I have found it good for nothing. I have yet to find positions where Komodo is better than SF in any aspect. MCTS looks good on paper but isn't working in Komodo, at least. It hasn't helped me in my CC game analysis so far. Still, I am using it in case I can find any.
Did you try Leela in CC games analysis? How do you find it?
Yes. It's impressive. I only have a GTX 1050 Ti, so it's a little bit slow. That said, it suggests moves at depth 15 that SF sees at depth 40.
Always Expect the Unexpected
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Komodo MCTS

Post by mjlef »

cma6 wrote: Sun Jun 09, 2019 5:26 pm Thanks, Mark, for the tips.
"set Syzygy Probe Limit to whatever size Syzygy files you have." But that is confusing. I have the 5-man and 6-man Syzygy files, so would that mean Probe Limit = 6?
You should set Syzygy Probe Limit to 6, since you have 6-man Syzygy files.
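For anyone scripting this, here is a minimal sketch using python-chess; the engine path and the exact option spellings ("SyzygyPath", "Syzygy Probe Limit") are assumptions and may differ between engines and versions, so the script lists the tablebase options the engine actually exposes first.

```python
import chess.engine

# Hypothetical engine path; point this at your Komodo binary.
engine = chess.engine.SimpleEngine.popen_uci("/path/to/komodo")

# Show the Syzygy-related options this build actually exposes.
for name in engine.options:
    if "syzygy" in name.lower():
        print(name, engine.options[name])

# Assumed option names; adjust to match the list printed above.
engine.configure({
    "SyzygyPath": "/path/to/syzygy",   # folder holding the .rtbw/.rtbz files
    "Syzygy Probe Limit": 6,           # probe positions with up to 6 men
})
engine.quit()
```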
peter
Posts: 3185
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: Komodo MCTS

Post by peter »

Hi Mark!
mjlef wrote: Sun Jun 09, 2019 10:29 pm
Any plans to implement position learning in Komodo (MCTS)?

It works fine for me in StockLearn, which derives from an idea of Kelly Kinyama and, as far as I know, has something to do with a kind of MCTS too.

Marco Zerbinati (SugaR_MCTS) and Andrea Manzo (ShashChess and StockLearn) released the GPL code of their own SF branches, easily found on GitHub.
Peter.
carldaman
Posts: 2283
Joined: Sat Jun 02, 2012 2:13 am

Re: Komodo MCTS

Post by carldaman »

peter wrote: Tue Jun 11, 2019 7:48 pm Hi Mark!
mjlef wrote: Sun Jun 09, 2019 10:29 pm
Any plans to implement position learning in Komodo (MCTS)?

It works fine for me in StockLearn, which derives from an idea of Kelly Kinyama and, as far as I know, has something to do with a kind of MCTS too.

Marco Zerbinati (SugaR_MCTS) and Andrea Manzo (ShashChess and StockLearn) released the GPL code of their own SF branches, easily found on GitHub.
+1
I second the motion - it would make Komodo a more attractive choice in this era of SF/LC0 and a better and more useful engine overall. :)
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Komodo MCTS

Post by mjlef »

peter wrote: Tue Jun 11, 2019 7:48 pm Hi Mark!
mjlef wrote: Sun Jun 09, 2019 10:29 pm
Any plans to implement position learning in Komodo (MCTS)?

It works fine for me in StockLearn, which derives from an idea of Kelly Kinyama and, as far as I know, has something to do with a kind of MCTS too.

Marco Zerbinati (SugaR_MCTS) and Andrea Manzo (ShashChess and StockLearn) released the GPL code of their own SF branches, easily found on GitHub.
Past learning schemes (like the one I had in my program NOW) were basically schemes to save important search results, which would then be reloaded into the hash table so that in future games it would "learn" to avoid certain lines, or encourage other better-performing lines. It was a bit helpful in a tournament, since it helped prevent an opponent from booking up and using lines it had found against the program. The NN programs learn during training and not during regular play. I have not looked at the programs you mention, but will take a look when I have a break.
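A minimal sketch of what such a scheme can look like (illustrative only, not NOW's or Komodo's actual code): deep search results are dumped to a file after a game and preloaded into the transposition table before the next one. The file name, entry layout and depth threshold are all assumptions.

```python
import pickle
from pathlib import Path

LEARN_FILE = Path("learn.bin")  # hypothetical file name

def save_entries(tt: dict, min_depth: int = 12) -> None:
    """Persist only deep entries: zobrist_key -> (depth, score_cp, best_move)."""
    deep = {key: entry for key, entry in tt.items() if entry[0] >= min_depth}
    LEARN_FILE.write_bytes(pickle.dumps(deep))

def preload_entries(tt: dict) -> None:
    """Seed the in-memory transposition table from the learning file."""
    if LEARN_FILE.exists():
        tt.update(pickle.loads(LEARN_FILE.read_bytes()))

# Usage: tt = {}; preload_entries(tt); ...search fills tt...; save_entries(tt)
```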

Mark
peter
Posts: 3185
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: Komodo MCTS

Post by peter »

mjlef wrote: Wed Jun 12, 2019 5:30 am Past learning schemes (like the one I had in my program NOW) were basically schemes to save important search results, which would then be reloaded into the hash table so that in future games it would "learn" to avoid certain lines, or encourage other better-performing lines. It was a bit helpful in a tournament, since it helped prevent an opponent from booking up and using lines it had found against the program.
As for reloading the hash, as is already possible with Komodo: you cannot reload it in automatic tournaments, because the engine being restarted by the GUI deletes the reloaded hash.
That works with Houdini, but it doesn't with Komodo.
If you mean book learning, that is simply a GUI feature and a matter of the book used.

Storing the whole hash of a previous analysis takes a lot of disk space and time and gives little input for the next game, and its usefulness depends mainly on the moment and position at which the hash is saved and reloaded. Simply storing the hash at the end of a game and reloading it at the start of the next one would give no useful information for the new starting position, since what was stored concerns the ending of the previous game.

Learning files like the ones Shredder, Hiarcs, Houdini (up to version 4), SF_PA and now StockLearn had or have seem much smarter to me.

And then, as said, MCTS seems to offer new ways of engine learning.
Peter.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Komodo MCTS

Post by mjlef »

peter wrote: Tue Jun 11, 2019 7:48 pm Hi Mark!
mjlef wrote: Sun Jun 09, 2019 10:29 pm
Any plans to implement position learning in Komodo (MCTS)?

It works fine for me in StockLearn, which derives from an idea of Kelly Kinyama and, as far as I know, has something to do with a kind of MCTS too.

Marco Zerbinati (SugaR_MCTS) and Andrea Manzo (ShashChess and StockLearn) released the GPL code of their own SF branches, easily found on GitHub.
ShashChess seems to just save each final search result as a hash entry that goes into a disk file. Well, at least two disk files (one is used just for K&P endings). The "learning" I see is just saving previous search results, so it would only help should those positions be found in future searches. Basically it appears to be like my old NOW scheme, although split into two files based on material. I only spent a few minutes looking, so there may be other things there that I did not notice. These schemes can influence future move choices, but are not general learning like NN MCTS programs can do.
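Expressed as a sketch (this is only the idea described above, not ShashChess's actual file format; the file names and entry layout are made up): each finished search result is appended to one of two files, with king-and-pawn endings routed to their own file.

```python
import chess

def is_pawn_ending(board: chess.Board) -> bool:
    """True if only kings and pawns remain on the board."""
    return all(
        piece.piece_type in (chess.PAWN, chess.KING)
        for piece in board.piece_map().values()
    )

def record_result(board: chess.Board, depth: int, score_cp: int, best_move: str) -> None:
    """Append a finished search result to the matching experience file."""
    path = "experience_kp.txt" if is_pawn_ending(board) else "experience.txt"  # hypothetical names
    with open(path, "a") as f:
        f.write(f"{board.fen()};{depth};{score_cp};{best_move}\n")
```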
peter
Posts: 3185
Joined: Sat Feb 16, 2008 7:38 am
Full name: Peter Martan

Re: Komodo MCTS

Post by peter »

mjlef wrote: Wed Jun 12, 2019 5:01 pm
peter wrote: Tue Jun 11, 2019 7:48 pm Hi Mark!
mjlef wrote: Sun Jun 09, 2019 10:29 pm
Any plans to implement position learning in Komodo (MCTS)?

It works fine for me in StockLearn, which derives from an idea of Kelly Kinyama and, as far as I know, has something to do with a kind of MCTS too.

Marco Zerbinati (SugaR_MCTS) and Andrea Manzo (ShashChess and StockLearn) released the GPL code of their own SF branches, easily found on GitHub.
ShashChess seems to just save each final search result as a hash entry that goes into a disk file. Well, at least two disk files (one is used just for K&P endings). The "learning" I see is just saving previous search results, so it would only help should those positions be found in future searches. Basically it appears to be like my old NOW scheme, although split into two files based on material. I only spent a few minutes looking, so there may be other things there that I did not notice. These schemes can influence future move choices, but are not general learning like NN MCTS programs can do.
Thanks for looking into the code. Did you notice that the engine has to have the full move history of the positions to be "learned"? If the opening moves leading to a position are to be read by the engine, there should be 8 .bin files written to disk for every opening shorter than about 7 moves.
Experience.bin itself contains useful info only if the other .bin files appear on disk as well. As far as I understand the idea, connections between positions along the lines from opening to endgame are evaluated and stored. Kelly Kinyama's ideas behind the underlying method can be found on GitHub and fishcooking too.
BTW, of course I know that engine learning like this has nothing in common with learning based on neural networks, yet it seems a lot less hardware- and time-consuming than proper supervised learning.
Peter.
marsell
Posts: 106
Joined: Tue Feb 07, 2012 11:14 am

Re: Komodo MCTS

Post by marsell »

The engine BrainLearn from Kelly Kinyama produces significantly more than 8 files, nearly 900.
What am I doing wrong?