LMR - lack of Alpha-Eval check?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

LMR - lack of Alpha-Eval check?

Post by Rebel »

In my LMR code I have:

Code: Select all

if (eval_score > alpha) reduce less
When I remove the code I get a speed-up of 20-25% in depth but in selfplay it loses 5-6 elo.

Now I have changed the code into:

Code: Select all

if (eval_score - SEE > alpha) reduce less  // SEE is the value of the highest hanging piece, which in 90-95% of cases will be lost.
It gives a speed-up of 10% in depth and in selfplay it gains about 10 elo.

I went through 4 source codes of top engines and none of them has such a LMR condition.
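In pseudocode the idea fits naturally into a standard LMR scheme. The sketch below is illustrative only: the function name, move-count thresholds, and reduction amounts are invented for the example, not taken from any engine.

```c
/* Illustrative LMR reduction (all thresholds invented for the example).
   'see_hanging' is the value of the side-to-move's highest hanging piece,
   or 0 if nothing hangs. */
int lmr_reduction(int move_number, int eval_score, int see_hanging, int alpha)
{
    int r = 0;
    if (move_number > 3)  r = 1;   /* reduce late moves */
    if (move_number > 12) r = 2;   /* reduce very late moves more */

    /* the condition under discussion: if, even after writing off the
       biggest hanging piece, the static eval is still above alpha,
       reduce one ply less */
    if (r > 0 && eval_score - see_hanging > alpha)
        r -= 1;

    return r;
}
```

The point of subtracting SEE first is that a position whose eval is above alpha only because a piece is en prise should not earn the milder reduction.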
90% of coding is debugging, the other 10% is writing bugs.
hgm
Posts: 27795
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: LMR - lack of Alpha-Eval check?

Post by hgm »

How can it be that the highest hanging piece is lost so often? Standing pat in QS is entirely based on the idea that the side having the move will be able to solve any problems. And this also seems intuitively clear; there are so many ways to rescue a hanging piece: move it away, capture the attacker, interpose something, protect it, make a fat capture yourself to stir things up, make a counter-threat... It seems very strange that none of that would work 90% of the time. Normally you only lose material when two of your pieces are hanging. But then you would save the most valuable, and lose the least valuable.
Dann Corbit
Posts: 12540
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: LMR - lack of Alpha-Eval check?

Post by Dann Corbit »

Maybe the real benefit is that the algorithm forces you to defend your pieces, so that you have fewer hanging pieces on the board.
I can imagine this algorithm forcing you to defend the biggest hanging piece, one at a time, until all your hanging pieces are guarded.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
chrisw
Posts: 4317
Joined: Tue Apr 03, 2012 4:28 pm

Re: LMR - lack of Alpha-Eval check?

Post by chrisw »

Rebel wrote: Sun Aug 23, 2020 4:01 pm In my LMR code I have:

Code: Select all

if (eval_score > alpha) reduce less
When I remove the code I get a speed-up of 20-25% in depth but in selfplay it loses 5-6 elo.

Now I have changed the code into:

Code: Select all

if (eval_score - SEE > alpha) reduce less  // SEE is the highest hanging piece that in 90-95% of the cases will be lost.
It gives a speed-up of 10% in depth and in selfplay it gains about 10 elo.

I went through 4 source codes of top engines and none of them has such a LMR condition.
When I last looked, neither SF nor the top engines with SF-like evaluations do an eval test for pieces left hanging, so that SEE information is not readily available without a little computation first.

As a very quick test in mine, I just substituted value_pawn for your SEE; the result was about -20 Elo.

If time allows, I will try again with SEE = value(largest non-pawn piece attacked). It ought to work, but it is a break from the SF paradigm, which mostly uses historical statistics for reduction/extension rather than comparing static_eval to alpha.
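A cheap way to get at the "value of the largest piece attacked" without a full SEE sweep might look like the sketch below. The per-square tables are assumed to be precomputed elsewhere (e.g. during evaluation); all names here are invented for the example.

```c
/* Hypothetical helper: value of the side-to-move's largest hanging piece.
   'value', 'attackers' and 'defenders' are assumed to be precomputed
   per-square tables; they are invented names for this sketch.
   Here "hanging" means attacked and completely undefended. */
int largest_hanging(const int *value, const int *attackers,
                    const int *defenders, int nsquares)
{
    int best = 0;
    for (int sq = 0; sq < nsquares; sq++)
        if (attackers[sq] > 0 && defenders[sq] == 0 && value[sq] > best)
            best = value[sq];
    return best;   /* 0 when nothing hangs */
}
```

This deliberately ignores the defended-but-under-attacked case (attacked by a cheaper piece than the defender can recoup), which a real SEE would catch.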
hgm
Posts: 27795
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: LMR - lack of Alpha-Eval check?

Post by hgm »

OK, I am a bit slow. You do this per move. And if a piece is hanging, most moves will do nothing about that.

But when a move does something about it, you can often know that. E.g. for moves with the hanging piece you no longer expect to lose that piece. Interposition should also be easy to recognize, and capturing the attacker would not be subject to this reduction in the first place, because it is a capture. Only protection is hard to recognize. (But it doesn't always help to improve the SEE.)

This is still one of the things I would like to try one of these days: improve NEG to recognize, with a static algorithm, all defenses against simple tactics rather than only moving the threatened piece away. And then use that as a poor-man's version of the policy head of an NN in an alpha-beta search, to decide which moves should be reduced and which not.
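A static "does this move answer the threat?" classifier along those lines could start as crudely as this. Everything here is a stand-in for real board and attack data, and protection, the hard case, is deliberately left out.

```c
/* Crude static classification of whether a move from 'from' to 'to'
   plausibly answers a threat against 'threatened_sq' by 'attacker_sq'.
   'on_ray' flags whether 'to' lies between attacker and victim on a
   sliding-piece ray. All inputs are hypothetical stand-ins. */
enum defense { NO_DEFENSE, MOVES_AWAY, CAPTURES_ATTACKER, INTERPOSES };

enum defense classify_defense(int from, int to, int threatened_sq,
                              int attacker_sq, int on_ray)
{
    if (from == threatened_sq) return MOVES_AWAY;        /* piece flees */
    if (to == attacker_sq)     return CAPTURES_ATTACKER; /* not reduced anyway: a capture */
    if (on_ray)                return INTERPOSES;        /* blocks a sliding attack */
    return NO_DEFENSE;                                   /* candidate for full reduction */
}
```

Moves classified NO_DEFENSE would then get the normal reduction, the rest a milder one, roughly in the spirit of a policy head.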
Rebel
Posts: 6991
Joined: Thu Aug 18, 2011 12:04 pm

Re: LMR - lack of Alpha-Eval check?

Post by Rebel »

chrisw wrote: Mon Aug 24, 2020 2:14 pm
Rebel wrote: Sun Aug 23, 2020 4:01 pm In my LMR code I have:

Code: Select all

if (eval_score > alpha) reduce less
When I remove the code I get a speed-up of 20-25% in depth but in selfplay it loses 5-6 elo.

Now I have changed the code into:

Code: Select all

if (eval_score - SEE > alpha) reduce less  // SEE is the highest hanging piece that in 90-95% of the cases will be lost.
It gives a speed-up of 10% in depth and in selfplay it gains about 10 elo.

I went through 4 source codes of top engines and none of them has such a LMR condition.
When I last looked, neither SF nor the top engines with SF-like evaluations do an eval test for pieces left hanging, so that SEE information is not readily available without a little computation first.

As a very quick test in mine, I just substituted value_pawn for your SEE; the result was about -20 Elo.

If time allows, I will try again with SEE = value(largest non-pawn piece attacked). It ought to work, but it is a break from the SF paradigm, which mostly uses historical statistics for reduction/extension rather than comparing static_eval to alpha.
After testing at a longer time control the Elo gain vanished, so I was a bit early to post.

Anyway, what still stands is that without the "if (eval_score > alpha) reduce less" code I get a regression.

I am currently looking for other ways to deal with this check, because it is so costly: a loss of 20-25% in depth.
hgm
Posts: 27795
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: LMR - lack of Alpha-Eval check?

Post by hgm »

Loss of depth can also be good. Searching one line too deep goes at the expense of all the others, and could make you miss important tactics.
chrisw
Posts: 4317
Joined: Tue Apr 03, 2012 4:28 pm

Re: LMR - lack of Alpha-Eval check?

Post by chrisw »

Rebel wrote: Tue Aug 25, 2020 9:06 am
chrisw wrote: Mon Aug 24, 2020 2:14 pm
Rebel wrote: Sun Aug 23, 2020 4:01 pm In my LMR code I have:

Code: Select all

if (eval_score > alpha) reduce less
When I remove the code I get a speed-up of 20-25% in depth but in selfplay it loses 5-6 elo.

Now I have changed the code into:

Code: Select all

if (eval_score - SEE > alpha) reduce less  // SEE is the highest hanging piece that in 90-95% of the cases will be lost.
It gives a speed-up of 10% in depth and in selfplay it gains about 10 elo.

I went through 4 source codes of top engines and none of them has such a LMR condition.
When I last looked, neither SF nor the top engines with SF-like evaluations do an eval test for pieces left hanging, so that SEE information is not readily available without a little computation first.

As a very quick test in mine, I just substituted value_pawn for your SEE; the result was about -20 Elo.

If time allows, I will try again with SEE = value(largest non-pawn piece attacked). It ought to work, but it is a break from the SF paradigm, which mostly uses historical statistics for reduction/extension rather than comparing static_eval to alpha.
After testing at longer time control the elo gain vanished, so I was a bit early to post.

Anyway, what still stands is that without the "if (eval_score > alpha) reduce less" code I get a regression.

I am currently looking for other ways to deal with this instruction because it is so costly, a loss of 20-25% in depth.
That would figure, because at longer time controls the statistical base will have built up more, and the (one-ply) static-eval-based pruning decisions at lower plies will be relatively inferior to the statistics-based ones. The idea of using static eval as a pruning parameter gets progressively less efficient with depth. Maybe try to limit the SEE/alpha test not just to depth > 6 but also to depth < N, where N = maybe 10 or so?
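The suggested depth window could be sketched as a small guard around the original test; 6 and N = 10 are the numbers floated above, untuned.

```c
/* Apply the eval/SEE vs. alpha test only inside a depth window, as
   suggested: deep enough to matter, shallow enough that static eval
   still competes with statistics-based heuristics. Returns nonzero
   when the move should be reduced less. Bounds are untuned examples. */
int reduce_less(int depth, int eval_score, int see, int alpha)
{
    const int min_depth = 6, max_depth = 10;  /* the numbers from the post */
    if (depth <= min_depth || depth >= max_depth)
        return 0;
    return eval_score - see > alpha;
}
```

Outside the window the condition simply never fires, so the engine falls back to its normal reduction schedule there.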