a crying shame (re: self-learning engines)

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: a crying shame (re: self-learning engines)

Post by dkappe »

Ovyron wrote: Tue Jan 21, 2020 7:09 am Yeah, I don't think dkappe understands the kind of learning we're talking about.

...Then the user does some "reverse analysis", where previous nodes are visited. Leela would use what it has learned from the future position to give a more accurate score for the line, and would either keep showing this learned score for the earlier positions (without "searching" them, so it's done very fast), or would switch to a better move for one of the sides (because it now scores better), so the user would need to go forward along that one and repeat the process...So this learning is about getting much faster results by interactively analyzing positions and letting the engine learn line refutations (by, say, showing it lines where Stockfish knows the best moves) in a specific line — a very different process from NN learning.
I understand, and because I understand, I don’t see this sort of learning as being anything but fiendishly difficult with MCTS engines.

The search is very different (ignore the NN for the moment), so the kind of hash tricks available to AB engines aren’t available to MCTS engines. MCTS engines don’t explore a tree one very deep line at a time via depth-first search; they build and keep in memory a partial tree with backup calculations. That’s why a hash isn’t so easy: if we get a “better” eval for a position, we would have to find where in the tree that position occurs and redo the backup calculations (possibly ad infinitum, if that improves some other position). There are a few academic papers that try to address this issue, but it’s still very much an open question.
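To make the backup problem concrete, here is a minimal, illustrative sketch of MCTS-style backpropagation (the node fields and sign-flipping convention are typical of engines like Leela, but this is a toy model, not Leela's actual code). Notice that a node's quality estimate is an average over backed-up values — so a later, "better" evaluation of some position can't simply be written into one node; every ancestor's statistics along every path to that position already bake in the old value.

```python
class Node:
    """A node in a toy MCTS tree: stores visit count and summed value."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}    # move string -> Node
        self.visits = 0
        self.value_sum = 0.0  # sum of all evaluations backed up through here

    def q(self):
        """Average backed-up value (the score shown for this node)."""
        return self.value_sum / self.visits if self.visits else 0.0

def backup(leaf, value):
    """Propagate one leaf evaluation to the root, flipping sign each ply
    (a good position for one side is bad for the other)."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.value_sum += value
        value = -value
        node = node.parent

# Build a tiny two-ply tree and back up a single evaluation.
root = Node()
child = Node(parent=root)
root.children["e2e4"] = child
backup(child, 0.3)
# child now averages +0.3; root averages -0.3 (opponent's view).
# If we later "learn" that child is really worth +0.9, overwriting
# child.value_sum fixes nothing at root: root's statistics still
# contain the stale -0.3, and every ancestor on every path to a
# transposition of this position would need its backups redone.
```

This is the AB-hash trick failing in miniature: in an AB engine a hash probe replaces a subtree search outright, while here the old value is smeared across all the ancestors' averages.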

I’m not saying that some real-time learning isn’t possible in the MCTS search (again, leaving aside the NN), but you can’t just run the decades-old AB game plan.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: a crying shame (re: self-learning engines)

Post by corres »

Ovyron wrote: Mon Feb 10, 2020 9:14 am Yeah, but when you're analyzing a game, you want an engine that knows what's going on in the current position. The position where you'd want to make a move, or find the best possible continuation. And it's here that continuous learning eventually catches up to better search or evaluation.
Who cares about the engine's Elo on the rest of chess? The only thing that matters is the position you're analyzing and its main line (the best moves both sides can play from here), if you can find it. Elo is an average of what you're expected to get in the current position, and indeed Stockfish is best in most, but if continuous learning gets you there faster, sometimes much faster, and sometimes it's the only way to get there, then you'd rather use an engine with this feature than one with a higher Elo that lacks it.
If you analyze the same position more than once, position learning will help you.
But if you do not know which position you will need to analyze next, you would have to store a huge number of analyzed positions for a chance of it helping.
A weak engine with a low Elo number cannot give you important help for learning and/or analyzing.
So the Elo of the engine used is important.
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: a crying shame (re: self-learning engines)

Post by Ovyron »

corres wrote: Mon Feb 10, 2020 12:54 pm A weak engine with a low Elo number cannot give you important help for learning and/or analyzing.
So the Elo of the engine used is important.
Stockfish TCEC6 PA GTB, which has learning, plays some 700 Elo weaker than Stockfish 11, but holds its own in interactive analysis when you match them on a single position (if they played games against each other, the learning would have no effect; but if you're analyzing a position, you know in advance which positions need to be learned, as they're the ones that arise from this one.)

It seems you haven't actually tried it and are just guessing about what would happen, assuming there's no way 700 Elo can be overcome with learning.

You can download Learning Stockfish from here:

https://open-chess.org/viewtopic.php?t=2663

And check how powerful learning is, because it doesn't matter how much better search and eval have gotten in Stockfish 11: after you show it the moves, it will have learned them, and it will keep them even if you unload the engine, and then you don't need a big transposition table at all.
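The persistence Ovyron describes — learned scores surviving engine unload, unlike a transposition table — can be sketched as a simple position store keyed by FEN. To be clear, the file name and JSON format below are illustrative assumptions, not the actual learning-file format used by Stockfish TCEC6 PA GTB; the point is only the mechanism: keep the deepest evaluation seen per position and reload it across sessions.

```python
import json
import os

LEARN_FILE = "learned_positions.json"  # hypothetical file name

def load_learning():
    """Load previously learned position scores, if any were saved."""
    if os.path.exists(LEARN_FILE):
        with open(LEARN_FILE) as f:
            return json.load(f)
    return {}

def store_result(learned, fen, score_cp, depth):
    """Record an evaluation, keeping only the deepest result per position."""
    prev = learned.get(fen)
    if prev is None or depth > prev["depth"]:
        learned[fen] = {"score_cp": score_cp, "depth": depth}

def save_learning(learned):
    """Persist the store so it survives unloading the engine."""
    with open(LEARN_FILE, "w") as f:
        json.dump(learned, f)

# Usage: record a deep result, then a shallower one that is ignored.
learned = load_learning()
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq -"
store_result(learned, fen, 20, 30)   # depth-30 result is stored
store_result(learned, fen, 50, 10)   # depth-10 result does not overwrite it
save_learning(learned)
```

Unlike a hash table, this store is bounded only by positions you actually analyzed, which is why it stays small and effective when you revisit the same line — exactly the interactive-analysis scenario in the thread.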
corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: a crying shame (re: self-learning engines)

Post by corres »

Ovyron wrote: Mon Feb 10, 2020 2:13 pm
corres wrote: Mon Feb 10, 2020 12:54 pm A weak engine with a low Elo number cannot give you important help for learning and/or analyzing.
So the Elo of the engine used is important.
Stockfish TCEC6 PA GTB, which has learning, plays some 700 Elo weaker than Stockfish 11, but holds its own in interactive analysis when you match them on a single position (if they played games against each other, the learning would have no effect; but if you're analyzing a position, you know in advance which positions need to be learned, as they're the ones that arise from this one.)
It seems you haven't actually tried it and are just guessing about what would happen, assuming there's no way 700 Elo can be overcome with learning.
You can download Learning Stockfish from here:
https://open-chess.org/viewtopic.php?t=2663
And check how powerful learning is, because it doesn't matter how much better search and eval have gotten in Stockfish 11: after you show it the moves, it will have learned them, and it will keep them even if you unload the engine, and then you don't need a big transposition table at all.
I know that Stockfish.
In some special cases you may be right.
But in general the evaluation of a weak engine is also weak, and you should check its analysis with a stronger one.
carldaman
Posts: 2283
Joined: Sat Jun 02, 2012 2:13 am

Re: a crying shame (re: self-learning engines)

Post by carldaman »

Yep, Uly's right, the most important positions are those that interest the user.

The fact that Leela's or FF's score does not propagate backwards is a serious drawback and makes the engine that much less useful for analysis. One more reason for it to have a learning file, certainly — but does it use normal hash tables? Could the score not propagating be a bug (or a feature?), or is it due to what dkappe mentioned, namely the MCTS search and how it stores its tree?

I bought FatFritz+Ginkgo because I was curious, and was interested mainly in the playing style. Right now I like Ginkgo/Fritz17 better than FF, which is a let-down for me. I feel I deserved more for the money spent. Next time I may vote with my wallet, but it won't matter. The typical buyer is not that educated. Good analysis features matter only to a select few sophisticated users, unfortunately.