Adjustable search pruning depending on time control

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

Pio wrote:Hi Larry!

I think there is some logic in having the reductions depend on depth, and that is due to alpha-beta. There is less reason to look at bad-looking moves close to the leaves, since they will then have only a few moves left to improve their scores relative to their siblings. If the moves cannot improve the score, there is no point evaluating them.

I think reducing moves more, the closer they are to the leaves, will work as a complement to lazy evaluation.

I might have understood everything completely wrong.
So you are saying there is more reason to look at "bad moves" at deep levels, which means less LMR at deep levels. This is the opposite of what Stockfish does, and I don't know of any program that does it this way. Of course everyone could be wrong, but we had no evidence that this sort of thing helps in Komodo. We are still left with the mystery of why all the Ippo cousins reduce more at higher depths when in check, but not in the far more usual case of not being in check.
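For concreteness, the kind of depth-and-movecount-dependent reduction rule being debated can be sketched as follows. This is purely illustrative; the constants are invented and are not Stockfish's or Komodo's actual values:

```python
import math

def lmr_reduction(depth, move_number):
    """Reduction in plies for a late move. Grows with both remaining depth
    and the move's index in the ordered move list. The 1.95 divisor is a
    made-up tuning constant."""
    if depth < 3 or move_number < 4:
        return 0  # search shallow nodes and early moves at full depth
    r = math.log(depth) * math.log(move_number) / 1.95
    return min(int(r), depth - 1)  # always leave at least a 1-ply search
```

Reducing more at higher depths simply means this function increases with `depth`; the opposite policy would leave `depth` out of the formula entirely.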
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

Eelco de Groot wrote:No, I have not tested this against other possible formulas. Other than that, if the new rule is clearly inferior, I think I will find out after a while. (And as I said, I think the exceptions to the reduction are at least as important as the rule itself. Also, making the rule more dependent on depth meant the difference in reductions based on movecount is not changed, which I would find a lot more of a risk.)

There are so many possible variations you could try, and if you have to test all of the hundreds of variables one by one, what time is left to try new ideas and different algorithms? It seems to me that expressing the reduction as a function of depth is just a way of getting a formulation that expresses reduction as a percentage (of depth). That seems perfectly natural. Others will use a table, but you can tune both to get exactly the same results. This one is easier to modify and perhaps easier to tune. But I'm no mathematician and haven't tuned it; Marco and Joona would have a better explanation, I'm sure.

Regards, Eelco
It does not seem natural at all to me to express reduction as a percentage of depth. If we assume a branching factor of 2, a reduction of four plies costs only 6% of a normal search (assuming no re-search is needed). There is not that much more to gain by reducing further, while the chance of missing a good move will go up significantly with each extra reduction. Still, I can see an argument for some mild increase in reduction with depth; I just wonder why no one except perhaps Stockfish has been able to demonstrate that this helps the Elo.
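The 6% figure is easy to verify: with a uniform branching factor b, a subtree searched R plies shallower contains roughly b^-R of the nodes of the full-depth subtree (a back-of-envelope model that ignores re-searches):

```python
def relative_cost(branching_factor, reduction):
    """Fraction of the full-depth subtree that remains after reducing
    the search by `reduction` plies, under a uniform-branching model."""
    return branching_factor ** -reduction

# b = 2, R = 4  ->  1/16, i.e. about 6% of a normal search
```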
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Adjustable search pruning depending on time control

Post by BubbaTough »

lkaufman wrote:I just wonder why no one except perhaps Stockfish has been able to demonstrate that this helps the Elo.
Plenty of people have (including us). I consider it a very natural thought (whether it is logical or not is a different story), which I and many others were experimenting with long before it made it into Stockfish. It has been popular not just for adjusting LMR reductions but also for null move depths (which I think gets at essentially the same thing).

I believe your original query was limited to "top engines", which presumably translates into a very limited number of engines, all of which seem to influence each other quite a bit. If you are really asking "of Komodo, Stockfish, Critter, and Houdini, why is feature X only in Stockfish?", it's a very different question than "of all engines, why does feature X only work in Stockfish?". The second question would be an interesting puzzle in my mind. As for the first, for any given feature, this one included, the generic answer is probably just that the others have not managed to copy and tune it successfully in their engines yet. It's not easy to cut and paste code between two engines and gain Elo unless the engines are VERY similar. I don't find it shocking that it doesn't work in this case either.

-Sam

p.s. Merry Christmas :)
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

BubbaTough wrote:
lkaufman wrote:I just wonder why no one except perhaps Stockfish has been able to demonstrate that this helps the Elo.
Plenty of people have (including us). I consider it a very natural thought (whether it is logical or not is a different story), which I and many others were experimenting with long before it made it into Stockfish. It has been popular not just for adjusting LMR reductions but also for null move depths (which I think gets at essentially the same thing).

I believe your original query was limited to "top engines", which presumably translates into a very limited number of engines, all of which seem to influence each other quite a bit. If you are really asking "of Komodo, Stockfish, Critter, and Houdini, why is feature X only in Stockfish?", it's a very different question than "of all engines, why does feature X only work in Stockfish?". The second question would be an interesting puzzle in my mind. As for the first, for any given feature, this one included, the generic answer is probably just that the others have not managed to copy and tune it successfully in their engines yet. It's not easy to cut and paste code between two engines and gain Elo unless the engines are VERY similar. I don't find it shocking that it doesn't work in this case either.

-Sam

p.s. Merry Christmas :)
The interesting thing is that all the top engines have null move reduction increasing with increasing depth, and in Komodo it was not hard to establish that this was helpful. Yet the same does not apply to LMR reduction, and in fact I rather doubt that it helps even in Stockfish. So they behave fundamentally differently. The question is: Why so?
By the way, I usually refer to top engines because I have pretty good knowledge about the details of all of them (at least some versions of all of them). But I would include the second tier of engines if I had information about them.

Happy holidays to all!
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Adjustable search pruning depending on time control

Post by BubbaTough »

lkaufman wrote:
The interesting thing is that all the top engines have null move reduction increasing with increasing depth, and in Komodo it was not hard to establish that this was helpful. Yet the same does not apply to LMR reduction, and in fact I rather doubt that it helps even in Stockfish. So they behave fundamentally differently. The question is: Why so?
By the way, I usually refer to top engines because I have pretty good knowledge about the details of all of them (at least some versions of all of them). But I would include the second tier of engines if I had information about them.

Happy holidays to all!
Ahh, OK. I will share the philosophy I came up with while doing LearningLemming. The idea is to design your reduction / pruning / null move stuff together, since you can only reduce so much in total (null + LMR) before the engine starts missing important things. The more you reduce in LMR, the less you can do in null move (and vice versa). If you optimize one (such as depth-based null move reduction) and then the other serially, you only end up in some local optimum. By varying both at the same time you can find an overall better solution. So my guess is that if you are finding depth-based reduction works in null move but not in LMR, you probably optimized null move first and are stuck in a local minimum.
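The local-optimum point can be illustrated with a toy tuning landscape. The Elo numbers below are entirely invented; the only property that matters is that LMR and null move share a total reduction budget, so optimizing one axis at a time can get stuck:

```python
import itertools

def elo(lmr, null):
    """Invented Elo landscape: a total reduction budget of 4 plies is
    ideal, and on that ridge more LMR (less null move) is better."""
    if lmr + null != 4:
        return -10 * abs(lmr + null - 4)   # off the budget ridge: bad
    return 10 + lmr                        # on the ridge: prefer high LMR

def serial_tuning(lmr, null, grid=range(5)):
    """Optimize one parameter at a time, holding the other fixed."""
    for _ in range(10):
        lmr = max(grid, key=lambda l: elo(l, null))
        null = max(grid, key=lambda n: elo(lmr, n))
    return lmr, null

def joint_tuning(grid=range(5)):
    """Search both parameters simultaneously."""
    return max(itertools.product(grid, grid), key=lambda p: elo(*p))
```

Serial tuning started from (1, 3) never leaves the ridge point it began on, while the joint search finds the better setting (4, 0).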

-Sam
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

BubbaTough wrote:
Ahh, OK. I will share the philosophy I came up with while doing LearningLemming. The idea is to design your reduction / pruning / null move stuff together, since you can only reduce so much in total (null + LMR) before the engine starts missing important things. The more you reduce in LMR, the less you can do in null move (and vice versa). If you optimize one (such as depth-based null move reduction) and then the other serially, you only end up in some local optimum. By varying both at the same time you can find an overall better solution. So my guess is that if you are finding depth-based reduction works in null move but not in LMR, you probably optimized null move first and are stuck in a local minimum.

-Sam
That is an interesting idea, but I wonder whether you have any actual evidence that optimal LMR reduction and optimal null move reduction are inversely correlated to a significant degree? I can understand your argument for it, but my own feeling is that the two are sufficiently different so that any such (inverse) correlation will be fairly minimal. I've never tested this idea though. I could try by limiting null reduction to a flat 3 plies and then trying more aggressive LMR. Maybe I'll do so.
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Adjustable search pruning depending on time control

Post by BubbaTough »

lkaufman wrote:
That is an interesting idea, but I wonder whether you have any actual evidence that optimal LMR reduction and optimal null move reduction are inversely correlated to a significant degree? I can understand your argument for it, but my own feeling is that the two are sufficiently different so that any such (inverse) correlation will be fairly minimal. I've never tested this idea though. I could try by limiting null reduction to a flat 3 plies and then trying more aggressive LMR. Maybe I'll do so.
My only evidence is my own anecdotal experience. I have not tried to study it systematically, but I have noticed that simultaneous changes have been easier for me than changing one at a time. The same is true for related eval terms, move order + LMR, and a number of other conceptually linked terms.

-Sam
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

BubbaTough wrote:
My only evidence is my own anecdotal experience. I have not tried to study it systematically, but I have noticed that simultaneous changes have been easier for me than changing one at a time. The same is true for related eval terms, move order + LMR, and a number of other conceptually linked terms.

-Sam
This is certainly true for related eval terms. It all comes down to how closely the terms in question correlate. Regarding move order + LMR, do you just mean that better move order permits more aggressive LMR (obviously true), or are you saying that different move orders of equal merit might require substantially different LMR?
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Adjustable search pruning depending on time control

Post by BubbaTough »

lkaufman wrote:
This is certainly true for related eval terms. It all comes down to how closely the terms in question correlate. Regarding move order + LMR, do you just mean that better move order permits more aggressive LMR (obviously true), or are you saying that different move orders of equal merit might require substantially different LMR?
All of the above. Better move order = more aggressive pruning is the most important, but there are other things to toy with, of course. For example, if you do things like put passed pawns or checks early, it affects how you can treat them in LMR. We are not currently doing that, but I have played with it in the past.
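One way to read the move order + LMR interaction: if checks and passed-pawn pushes are ordered to the front, a plain by-index reduction rule exempts them automatically, with no separate conditions in the LMR code. A hypothetical sketch (the field names are invented for illustration):

```python
def order_then_reduce(moves, full_depth_count=3):
    """Order tactical moves first, then reduce purely by index: the first
    `full_depth_count` moves get no reduction, the rest get one ply.
    Each move is a dict with made-up fields: name, check, passed_push."""
    ordered = sorted(moves,
                     key=lambda m: (not m["check"], not m["passed_push"]))
    return [(m["name"], 0 if i < full_depth_count else 1)
            for i, m in enumerate(ordered)]
```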

-Sam
lkaufman
Posts: 5960
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA

Re: Adjustable search pruning depending on time control

Post by lkaufman »

I am currently checking out your idea that LMR and Null reductions are substantially interdependent, and so far the results confirm your claim. If this holds up, it will open a whole new area for research and future improvements to Komodo. Thanks for the suggestion!