Chess Training: Human Brain v NN

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

towforce
Posts: 11588
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Chess Training: Human Brain v NN

Post by towforce »

DrCliche wrote: Mon Aug 30, 2021 1:23 am Look up ML papers on contrastive learning.

You want to use a network to encode chess positions into a (much smaller) latent space, where the loss function has a term that penalizes the distance between encodings of similar positions. (It is up to you to determine how you want to define "similar", though of course there are unsupervised methods.)

A second network can then evaluate a given chess position using its latent vector as input, rather than (or in addition to) the chess position itself. Here you would have a normal loss function that simply encourages evaluation accuracy, plus whatever regularization terms you find aid generalization.

You can either use strong engine evaluations as a training target (this is what Stockfish does), or use search to allow the evaluation function to essentially bootstrap itself, comparing one-node evaluations against aggregated 800-node evaluations or whatever (this is what Leela does).

If you train both the encoder and the evaluator in tandem, you will likely end up with a system that "understands" chess positions in a more "humanlike" way. (And though not necessary for the functioning of a chess engine, you may find the encoder trains more stably if you also simultaneously train a decoder, whose job is to take a position's latent vector as input and output the actual chess position, or possible moves, or whatever.)
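For concreteness, here is a minimal sketch of the kind of setup described above - an encoder with a contrastive term plus a separate evaluation head - assuming PyTorch. The layer sizes, the flat 12x64 input encoding, and how the "similar" partner positions are obtained are all illustrative assumptions, not anything specified in the thread; negative pairs and the decoder mentioned above are omitted for brevity.

```python
# Minimal sketch (assumptions: PyTorch, positions as flat 12x64 piece planes,
# "similar" partner positions already supplied by some labelling scheme).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a position to a small, unit-length latent vector."""
    def __init__(self, in_dim=768, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class Evaluator(nn.Module):
    """Predicts an evaluation from the latent vector alone."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z).squeeze(-1)

def loss_fn(encoder, evaluator, pos, similar_pos, target_eval, alpha=0.1):
    z, z_sim = encoder(pos), encoder(similar_pos)
    eval_loss = F.mse_loss(evaluator(z), target_eval)      # accuracy term
    contrastive = (1.0 - (z * z_sim).sum(dim=-1)).mean()   # pull similar pairs together
    return eval_loss + alpha * contrastive

# Toy batch: 32 positions, perturbed copies as "similar" partners, engine evals as targets.
pos = torch.randn(32, 768)
similar_pos = pos + 0.01 * torch.randn(32, 768)
target_eval = torch.randn(32)
enc, ev = Encoder(), Evaluator()
loss = loss_fn(enc, ev, pos, similar_pos, target_eval)
loss.backward()
```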

That's a good answer: thank you for taking the time to write it!

In order to focus on just the exact thing that we're talking about (learning methods), I'm going to temporarily talk about images instead of chess positions: a dog and a cat look roughly the same. Humans and ANNs can tell them apart. ANNs can do it because they've been trained on a large dataset. Humans can tell things apart without having been trained on a large dataset. I've managed, on my first attempt, to find a case where image search fails:

People who dress well - link.

People who don't dress well - link

Many of the same images appear on both searches, so that's a fail!

When an image classifier is trained to differentiate cats from dogs by training on a large number of images, I am guessing that this doesn't confer any skill in differentiating horses from cows. Relating that to what you said: knowing how to differentiate "good" positions from similar "bad" positions in one type of position won't necessarily confer any skill in differentiating "good" positions from "bad" positions in a different type of position - and I think you'll agree that the number of position types in chess is absolutely massive.

Given that the threshold for becoming a GM is "only" good knowledge of fifty thousand different chess patterns, I don't think your answer - good as it is, as I said before - is going to deliver what we would want: a GM-level evaluation of most positions at ply 1.

My preference would be to learn how to evaluate a chess position. I'm thinking that one approach to achieve this might be as follows (first attempt - obviously there's going to be room for improvement!):

* create a set of evaluation components (add to this as necessary)

* get a set of chess positions with "reasonably accurate" evaluations

* the ML's job is to pick a subset of evaluation components that gives the correct score (or "close enough" to it)

* optimising two things at once tends to be more awkward than optimising one - so here's the twist: optimise on minimising the number of evaluation components used (maximising the simplicity of the resulting EF), using the correct evaluation as a CONSTRAINT on the optimisation - not the TARGET of it (a rough sketch of this appears below)

* a bit more nebulous, but then include in the optimisation reward for building similar EFs for similar types of position

The end result would be an NN that could do a good job of selecting EF components (and hence build a good EF) for various types of position. This probably isn't quite what humans do, but trying to do what humans do isn't usually the best way to get a machine to display intelligent behaviour.
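A rough sketch of that selection idea follows, assuming each component can be scored numerically per position. The greedy strategy, the tolerance and the component count are illustrative choices, not part of the proposal above - the point is only that accuracy acts as a stopping constraint while the objective is to use as few components as possible.

```python
# Sketch: pick as few evaluation components as possible, treating "matches the
# reference evaluation closely enough" as a constraint rather than the target.
# (Greedy forward selection; tolerance and sizes are illustrative assumptions.)
import numpy as np

def select_components(X, target, tol=0.25, max_components=8):
    """X: (positions, components) matrix of component scores for one position type.
    target: reference evaluations. Returns indices of a small component subset
    whose least-squares fit stays within `tol` mean absolute error."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_components:
        best, best_err = None, None
        for j in remaining:
            cols = chosen + [j]
            w, *_ = np.linalg.lstsq(X[:, cols], target, rcond=None)
            err = np.mean(np.abs(X[:, cols] @ w - target))
            if best_err is None or err < best_err:
                best, best_err = j, err
        chosen.append(best)
        remaining.remove(best)
        if best_err <= tol:   # constraint satisfied: stop adding components
            break
    return chosen

# Toy usage: 100 positions, 20 candidate components, "true" eval uses only two of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
target = X[:, [2, 7]] @ np.array([1.0, -0.5])
print(select_components(X, target))   # should find components 2 and 7
```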
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
DrCliche
Posts: 65
Joined: Sun Aug 19, 2018 10:57 pm
Full name: Nickolas Reynolds

Re: Chess Training: Human Brain v NN

Post by DrCliche »

Modern machine learning methods more or less already do that. The "reasoning" process that a network follows is just very difficult for humans to unravel. But learning to pay attention to the best subset of features is basically what neural network training is.

Anyway, trying to make smaller and less feature-rich networks, or to somehow narrow down which features can or will be considered, is the wrong approach if you want few-shot learning and strong generalization.

The best few-shot learners are massive models that learn to recognize features or computational principles that apply generally across a large number of domains. Large language models are particularly good at this. In effect, learning something as complicated and rich as human language seems to encapsulate the idea of learning how to learn, to some extent.

So could you turn GPT-3 (175b) into a pretty good chess evaluation function simply by fine-tuning it on 50,000 "canonical grandmaster patterns" or whatever? I bet you could! Inference on a model that large would be really slow, though, no doubt much too slow to challenge top computers.
towforce
Posts: 11588
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Chess Training: Human Brain v NN

Post by towforce »

I am guessing you are talking about tuning it by hand? Given that each time you changed the EF, you'd have to go back and check that it still evaluated the previously tuned positions correctly, the number of tuning operations you'd have to do is the sum of the integers from one to fifty thousand, which is 1,250,025,000.
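(A quick check of that figure, using the n(n+1)/2 formula for the sum of the first n integers:)

```python
# Sum of the integers 1..50,000 via n*(n+1)/2.
n = 50_000
print(n * (n + 1) // 2)  # 1250025000
```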

Those fifty thousand positions are going to be nowhere near enough for an NN doing a single ply evaluation. It's not enough to have seen a similar position and its evaluation: a deep understanding of the position is needed. The reason for this is that a small change to a position can make a difference to the outcome of the game. My suggestion to overcome this issue is for the NN to determine how a position should be evaluated rather than to directly evaluate it. My suggestion for how it could do this is by selecting which components of an evaluation function to use.
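A minimal sketch of that suggestion, again assuming PyTorch: a small network looks at position features and gates a fixed set of hand-written evaluation components, so the net decides how the position should be evaluated rather than producing the score directly. The feature size, component count and gating scheme are illustrative assumptions.

```python
# Sketch (assumptions: PyTorch, 768 position features, 32 hand-written EF components).
import torch
import torch.nn as nn

class ComponentSelector(nn.Module):
    def __init__(self, feat_dim=768, n_components=32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_components), nn.Sigmoid())

    def forward(self, features, component_scores):
        # features: (batch, feat_dim); component_scores: (batch, n_components)
        g = self.gate(features)                     # which EF components matter here
        return (g * component_scores).sum(dim=-1)   # evaluation built from selected components

selector = ComponentSelector()
features = torch.randn(8, 768)
component_scores = torch.randn(8, 32)   # e.g. material, king safety, passed pawns, ...
print(selector(features, component_scores).shape)  # torch.Size([8])
```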
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
towforce
Posts: 11588
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Chess Training: Human Brain v NN

Post by towforce »

I just thought about driving: humans, including stupid ones (you may put me into that category if your personality flaws oblige you to do so :) ), are able to drive well in a wide variety of conditions. Computers are not. A lot more money has been spent trying to teach computers to drive than has been spent teaching computers to play chess (counting amateur development as "free").

This tells me that humans have underlying skills that are difficult to replicate in machines. I'm happy to speculate as to what these might be if anyone is interested. Some of these underlying skills probably apply to chess as well - though it does take a long time to train a human to the point where they can beat Lc0 at ply 1.

To be fair, computers have better accident rates than humans, but IMO it's worth asking: why is a human able to drive well in a wide range of conditions after a few months' training, whereas Google have been trying to teach computers to do this for many years?
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
towforce
Posts: 11588
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Chess Training: Human Brain v NN

Post by towforce »

I have just read an enlightening article on the subject of driving: when a child is around 7 months old, they know about object permanence, so if you hide a toy under a blanket, they will know it's still there, and be able to reach under the blanket to get it.

A self-driving car apparently doesn't have this skill: for it, if a bicycle is momentarily hidden by a passing van, the car's understanding is that the bicycle no longer exists.

I think it is very likely that there are likewise human skills that apply to chess that are so obvious to us that we don't even realise that we have the skill, and hence are completely unaware that a net trained to play chess doesn't have that skill.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
Uri Blass
Posts: 10299
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Chess Training: Human Brain v NN

Post by Uri Blass »

towforce wrote: Sun Aug 29, 2021 11:40 am I believe that chess NNs are building a large number of simple (shallow) patterns, whereas top humans are generating a small number of deep (complicated) patterns. I know, and accept, that some people disagree with this view, and that I personally cannot prove it.

Anyway, nobody will dispute that what humans are doing to learn good chess is very different from what NNs are doing. Here are some key facts:

* NNs are being trained using a number of positions that is many orders of magnitude larger than any human has ever seen

* top humans are still better than NNs at static (ply 1) evaluations

* it is sometimes said that the biggest difference between strong and weak players is in the endgame - and here NNs are still relatively weak at static evaluation

So... what are top humans doing differently from NNs to enable them to still be better at static evaluations?

Here's what I think: having a really good, long look at a small number of positions is enabling their brains to uncover important patterns that the NNs cannot. Billions of positions looks like a lot of information, but in comparison with the number of possible chess positions, it's actually quite tiny.

It also reminds me of something important about chess: small differences in the position can make a big difference to its correct evaluation. An NN might have seen a similar position in training - but a small difference could make that evaluation wrong.

As well as more complex patterns, another skill that the human habit of studying a small number of positions in depth might yield is the ability to know which "small differences" are likely to make the difference between the current position being won or drawn/lost. If so, then it's not immediately obvious to me how we're going to train that skill into NNs.

I think that the following is your opinion and not a clear fact:
"top humans are better than NNs at static (ply 1) evaluations"

I do not know how to measure 1 ply evaluations of top humans.
Considering only blitz or even only bullet games does not help.

1) Top humans search even in bullet games and do not use only static evaluation.
2) Top humans may avoid a good move not because of a bad evaluation, but simply because they do not think about the move and do not evaluate the position after it.
towforce
Posts: 11588
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Chess Training: Human Brain v NN

Post by towforce »

Uri Blass wrote: Tue Sep 07, 2021 5:29 am I think that the following is your opinion and not a clear fact:
"top humans are better than NNs at static (ply 1) evaluations"

I do not know how to measure 1 ply evaluations of top humans.
Considering only blitz or even only bullet games does not help.

1) Top humans search even in bullet games and do not use only static evaluation.
2) Top humans may avoid a good move not because of a bad evaluation, but simply because they do not think about the move and do not evaluate the position after it.

I agree: it's not a "clear fact", it's a "simplification".

It is true that strong players can look at a position and very quickly make a very good assessment, and it's also true that in games they look ahead. However:

1. Out of the trillions of options, they quickly make a very good assessment about where to look ahead

2. They don't search like computers, which build a very big and relatively accurate game tree from the current position - so, in comparison, the human lookahead is tiny.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
PK
Posts: 893
Joined: Mon Jan 15, 2007 11:23 am
Location: Warsza

Re: Chess Training: Human Brain v NN

Post by PK »

Human patterns are not deeper, they just come with some blurry lookahead. The latest example I learned is: both sides castled short, pawn cover intact, plenty of pieces on the board, an unobstructed black bishop on e7 and some vague attacking chances. 1.h4 Bxh4? 2.g3 Be7 3.Kg2 with Rh1 coming.
MikeB
Posts: 4889
Joined: Thu Mar 09, 2006 6:34 am
Location: Pen Argyl, Pennsylvania

Re: Chess Training: Human Brain v NN

Post by MikeB »

Good to see a good chess thread here once in a while.
+1