How to get futility pruning to work?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

micron
Posts: 155
Joined: Mon Feb 15, 2010 9:33 am
Location: New Zealand

Re: How to get futility pruning to work?

Post by micron »

diep wrote:Futility pruning is highly risky. If you test it well, for most programs it will of course not work at decent time controls, compared to other forward pruning methods which prune more and prune better.
Certainly I found it a difficult and unrewarding feature to implement.
First there's the speed hit (and consequent Elo loss) from deciding whether each move is prunable.
Next, there's a loss of Elo due to pruning moves that would have given a cutoff. Thanks to Don for pointing out this loss can be quantified by fixed-depth matches.
Those losses have to be more than compensated by the improvement in average time-to-depth.

Also, it's not easy to tune the futility margins, partly because of the shape of the curve, but chiefly because the peak is Not Very Many Elo High.

Code: Select all

Elo       + +
        +    +
0  - - + - - - + - - - - - - - -
      +         + + + + + + + + +
     +
    +
       FUTILITY_MARGIN
diep wrote:For futility pruning, basically, you can easily prove that one needs a lot of chess knowledge in the futility decision
Yes, indeed. Really intelligent pruning would allow the margins to be set lower, thus improving time-to-depth. There's a limit on what you can do; preventing, for instance, forks from being pruned would be nice but too complicated and time-consuming.

I have two bitboards (myAttacksTo and oppAttacksTo) with which some pruning decisions can be made a little more intelligent. Even so, futility currently gives me no more than 20 Elo.
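
As a rough illustration of the kind of per-move test this involves (names, margins, and conditions are purely illustrative, not my actual code):

Code: Select all

/* Illustrative sketch only: decide whether a quiet move near the horizon
   may be futility pruned. Helper names and margin values are made up.    */
typedef struct Position Position;          /* engine-specific board state */
typedef unsigned int Move;

int in_check(const Position *pos);         /* assumed engine primitives   */
int is_capture(const Position *pos, Move m);
int gives_check(const Position *pos, Move m);
int is_promotion(Move m);

static const int futility_margin[3] = { 0, 125, 300 };    /* centipawns */

int move_is_futile(const Position *pos, Move m, int depth,
                   int alpha, int static_eval)
{
    if (depth > 2)              return 0;  /* only close to the leaves    */
    if (in_check(pos))          return 0;  /* never prune while in check  */
    if (is_capture(pos, m))     return 0;  /* keep tactical moves         */
    if (gives_check(pos, m))    return 0;
    if (is_promotion(m))        return 0;

    /* Skip the move when even an optimistic margin cannot reach alpha. */
    return static_eval + futility_margin[depth] <= alpha;
}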
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: How to get futility pruning to work?

Post by diep »

micron wrote:
diep wrote:Futility pruning is highly risky. If you test it well, for most programs it will of course not work at decent time controls, compared to other forward pruning methods which prune more and prune better.
Certainly I found it a difficult and unrewarding feature to implement.
First there's the speed hit (and consequent Elo loss) from deciding whether each move is prunable.
Next, there's a loss of Elo due to pruning moves that would have given a cutoff. Thanks to Don for pointing out this loss can be quantified by fixed-depth matches.
Those losses have to be more than compensated by the improvement in average time-to-depth.

Also, it's not easy to tune the futility margins, partly because of the shape of the curve, but chiefly because the peak is Not Very Many Elo High.

Code: Select all

Elo       + +
        +    +
0  - - + - - - + - - - - - - - -
      +         + + + + + + + + +
     +
    +
       FUTILITY_MARGIN
diep wrote:For futility pruning, basically, you can easily prove that one needs a lot of chess knowledge in the futility decision
Yes, indeed. Really intelligent pruning would allow the margins to be set lower, thus improving time-to-depth. There's a limit on what you can do; preventing, for instance, forks from being pruned would be nice but too complicated and time-consuming.

I have two bitboards (myAttacksTo and oppAttacksTo) with which some pruning decisions can be made a little more intelligent. Even so, futility currently gives me no more than 20 Elo.
You should not focus on techniques like futility at first. For one thing, futility is ancient, dating from somewhere in the 1970s and 80s, just like razoring; only the name is newer.

An important factor is how well tuned the evaluation function is near the leaves.

If the tuning of the evaluation function is far superior to the chess knowledge it represents, then you can of course take more risks; on the other hand, if your evaluation function has more knowledge than the tuning supports, then the correction in the last few plies is far more important.

That said, I do believe that in the last few plies it should be possible, on paper, to reduce the search tree by a lot.

Please note that nullmove focuses on nodes >= beta, which rests on a far stronger assumption than skipping moves that are merely expected to land a few tenths of a pawn below alpha.
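
In condition form, the two assumptions look like this (illustrative code, not Diep's):

Code: Select all

/* Illustrative contrast, not Diep's code. Nullmove acts only when the
   node already stands at or above beta, and a reduced search still has
   to confirm the cutoff. Futility skips a move that is merely expected
   to stay a small margin under alpha, with no confirming search at all. */
int nullmove_candidate(int static_eval, int beta)
{
    return static_eval >= beta;
}

int futility_candidate(int static_eval, int margin, int alpha)
{
    return static_eval + margin <= alpha;
}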

With a huge evaluation function you simply cannot predict that prior to evaluating the position. In the Rybka (and derivatives like Houdini) and Stockfish type engines there simply won't be huge bonuses around the corner, as they have hardly any knowledge, so it's easier for them to get such techniques to work.

Yet realize that search is basically being used more as a 'mainline checking instrument' for them. That implies passive play, of course, as you need knowledge to know how to attack, and without the required knowledge you easily go wrong there; which is why Houdini is 'outdoing' its own more aggressively tuned predecessor Rybka.

All of this is tuned to fit together, not deliberately but through testing, so don't fall into the trap of assuming that what works for them will work for you. It differs from engine to engine, and all of it can be explained.

Please realize very carefully that searching deeper makes all engines play more accurately, but if you look closely it doesn't give the beancounter engines as much Elo per ply, because they don't have much knowledge inside.

An engine like Diep, once a few more bugs are out, scales much better in Elo per ply gained. Yet gaining those plies works in a different manner than what works for the engines above.

I'll get it to work when I put some effort into it :)

Yet futility isn't going to work, of course, and anything similar to it is not what I intend to experiment with. One of the things I do want to experiment with more is getting a cheaper cutoff when I'm far above beta and there are a few plies to go, say 5 plies or fewer.

The safe thing to do there is a nullmove, which in the case of depth <= 4 for Diep means calling qsearch for the other side.
Yet that costs Diep 7 nodes on average, whereas in 90%+ of the positions where we nullmove there, the evaluation is more than 3 pawns above beta.

Still a lot cheaper than a full search, yet the above engines solve it by hard razoring at 1 pawn above beta, the cheapest thing possible; they simply do not have the system time to do anything fancier there, and it works well for them. It doesn't work so well for Diep, so I have to do a bit more there, but then of course I will also pick up more tactics than the cheap razoring engines do; it does give a totally different search tree, though.
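
In sketch form, the cheap above-beta cutoff versus the nullmove-into-qsearch confirmation (illustrative names and values only, not Diep's actual code):

Code: Select all

#include <limits.h>

typedef struct Position Position;                 /* engine-specific      */
int  qsearch(Position *pos, int alpha, int beta); /* assumed to exist     */
void make_nullmove(Position *pos);
void unmake_nullmove(Position *pos);

#define PAWN 100                                  /* centipawns           */

/* Returns a score if a cheap cutoff is possible, INT_MIN otherwise. */
int cheap_above_beta_cutoff(Position *pos, int depth, int beta,
                            int static_eval)
{
    /* "Hard razoring" style: already a pawn above beta near the horizon,
       so return at once without searching anything at all.              */
    if (depth <= 3 && static_eval >= beta + PAWN)
        return static_eval;

    /* Safer: pass the move and let a null-window qsearch for the
       opponent confirm that nothing tactical drops us back under beta.  */
    if (depth <= 4 && static_eval >= beta) {
        make_nullmove(pos);
        int score = -qsearch(pos, -beta, -beta + 1);
        unmake_nullmove(pos);
        if (score >= beta)
            return score;
    }
    return INT_MIN;   /* no cheap cutoff; continue with the normal search */
}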

Right now everything I have tried there has failed so far (especially the experiments from 2005, where I again put in a lot of work to get this working, and failed).
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: How to get futility pruning to work?

Post by diep »

I read what I wrote and realize I should explain my reasoning when I say: "futility isn't going to work of course".

My argument is that more is possible when you're above beta. Nullmove of course already covers that quite well for the most part.

Say we're at 5 ply depthleft, at the top of the search routine.

Now futility is a pretty risky thing to do there, whereas when we're up 3 pawns or so, as in 90% of the cases, one can imagine it should be possible to get a cheap cutoff there somehow, much cheaper than what I get right now in Diep.

I can expand on why futility is that risky when your evaluation has many chess patterns which, based on those patterns, can easily give a bonus of 50 pawns or so.

Just look at the search error you introduce with it.

Let's say we carefully do futility only at the last ply and also use reductions in the search; note that I already used reductions in Diep in 1999, and I know DeepSjeng used them massively as well not much later.

Quoting GCP: "reductions are the most obvious thing to try".

Now I'm using R=3 of course, so if we're at 5 ply depthleft,
then 5 - 3 - 1 = 1 ply depthleft.

Yet in reality we were at 7 ply. Side 'side' is going to threaten something positional at 7 ply depthleft.

Of course our reductions miss that totally, and we get xside to move at depthleft 5.

So at depthleft 5 ply I would introduce a huge search error: giving a cutoff in a position based on the opponent not being allowed to positionally refute something here, which, based on a 1 ply refutation that never gets carried out, propagates straight back to 7 ply depthleft.

All that because of a failure to search, say, an additional 37 moves at a 1 ply depthleft position.

Right now that error is there at 4 ply depthleft, which is actually already quite a big risk, and many consider even *that* to be too much of a risk.

Realize that the question is whether you want to take the decision to add that search error into your search at depthleft 5 ply.

So fine-tuning that would require chess knowledge to be inserted into the futility routine. A simple

'evaluation + margin' simply isn't going to work for Diep; I already tried far more sophisticated things than that, and they failed as well.

So if you test a bunch of games and futility isn't working for you, there is probably more to be gained from doing other things better than from the huge effort required to get futility to work right now.

Note that for the scenario I gave above, I of course also tried to 'fix' it with a rule not to use futility at the last ply when we just nullmoved, in all sorts of variations and with additional 'fixes'; that didn't work for Diep either.
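
For completeness, the rule itself is trivial to write down; a sketch with made-up names, even though it didn't help Diep:

Code: Select all

/* Sketch of the attempted 'fix' (hypothetical names): allow last-ply
   futility only when the move leading to this node was NOT a nullmove,
   so the side to move still gets to show its one-ply refutation.       */
int may_futility_prune(int depth, int previous_move_was_null,
                       int static_eval, int margin, int alpha)
{
    if (depth != 1)                return 0;   /* last ply only           */
    if (previous_move_was_null)    return 0;   /* the rule in question    */
    return static_eval + margin <= alpha;
}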

Make sure, however, that you retest it once your engine has improved a lot; old tricks sometimes work some 10 years later :)
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: How to get futility pruning to work?

Post by diep »

micron wrote:
diep wrote:Futility pruning is highly risky. If you test it well, for most programs it will of course not work at decent time controls, compared to other forward pruning methods which prune more and prune better.
Certainly I found it a difficult and unrewarding feature to implement.
First there's the speed hit (and consequent Elo loss) from deciding whether each move is prunable.
Next, there's a loss of Elo due to pruning moves that would have given a cutoff. Thanks to Don for pointing out this loss can be quantified by fixed-depth matches.
Those losses have to be more than compensated by the improvement in average time-to-depth.

Also, it's not easy to tune the futility margins, partly because of the shape of the curve, but chiefly because the peak is Not Very Many Elo High.

Code: Select all

Elo       + +
        +    +
0  - - + - - - + - - - - - - - -
      +         + + + + + + + + +
     +
    +
       FUTILITY_MARGIN
diep wrote:For futility pruning, basically, you can easily prove that one needs a lot of chess knowledge in the futility decision
Yes, indeed. Really intelligent pruning would allow the margins to be set lower, thus improving time-to-depth. There's a limit on what you can do; preventing, for instance, forks from being pruned would be nice but too complicated and time-consuming.

I have two bitboards (myAttacksTo and oppAttacksTo) with which some pruning decisions can be made a little more intelligent. Even so, futility currently gives me no more than 20 Elo.
Note that +20 Elo is a lot. For Diep it's around -150 Elo if I use futility in the last 3 plies.

Vincent