verified null move

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: verified null move

Post by jwes »

bob wrote:In a private email discussion, someone asked if I had tested verified null-move on my cluster. I had not, because the idea made no sense to me and seemed to be nothing more than wasted search effort. But, I thought I would give it a whirl to see what happens.

The idea is that when a null-move search fails high, you do a normal search before you allow the null-move fail high to terminate the search. This normal search is done to a much-reduced depth. If the normal search fails low with the (beta-1, beta) bounds, then the null-move fail high is ignored.

This can be limited in two ways.

1. How much do you reduce the depth on the verification search? I tried 3, 4, 5 and 6 plies. The bigger the number, the better the result, as this tends to reduce the extra effort expended.

2. Where do you do the verification searches? I limited this based on remaining depth, so that it would not be done too near the tips, in order to control the effort expended. I tried this with remaining depth at least 3, then 4, ... and as before, the bigger the value, the better the result.

Unfortunately, bigger values reduce the impact of the verification search, and when the depth limit (in 2 above) is large enough, a verification search is never done. The upper bound on this mess was the rating of the original program without the verification search. In other words, I could not get this idea to produce anything other than an Elo _loss_. Verified in my usual cluster-testing approach.
Have you tried using verified null move only in the circumstances where you do not use null move at all currently?
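For reference, a minimal sketch of the verified null-move scheme described above. This is not Crafty's (or any engine's) actual code: Position, search(), ok_to_do_nullmove(), R, VERIFY_REDUCTION and MIN_VERIFY_DEPTH are all placeholder names, and the two constants correspond to the two knobs in points 1 and 2 of bob's post.

Code: Select all

  // Sketch only: the verified null-move idea as described, not any engine's code.
  struct Position;                                   // engine-specific
  int  search(Position& pos, int beta, int depth,
              bool allowNull);                       // zero-window (beta-1, beta) search
  bool ok_to_do_nullmove(const Position& pos);       // e.g. side to move has pieces
  void do_null_move(Position& pos);
  void undo_null_move(Position& pos);

  const int INF              = 32000;
  const int R                = 3;   // normal null-move reduction
  const int VERIFY_REDUCTION = 5;   // (1) how much the verification search is reduced
  const int MIN_VERIFY_DEPTH = 6;   // (2) only verify when this much depth remains

  // Returns beta if a (possibly verified) null-move cutoff is allowed,
  // or -INF if the caller should fall through to the normal move loop.
  int verified_null_probe(Position& pos, int beta, int depth) {
      if (!ok_to_do_nullmove(pos))
          return -INF;

      do_null_move(pos);
      int nullValue = -search(pos, 1 - beta, depth - R, false);
      undo_null_move(pos);

      if (nullValue < beta)
          return -INF;                               // no fail high, nothing to verify

      // The null move failed high. Before trusting the cutoff, re-check it
      // with a reduced-depth normal search on the same (beta-1, beta) window.
      if (depth >= MIN_VERIFY_DEPTH) {
          int verifyValue = search(pos, beta, depth - VERIFY_REDUCTION, false);
          if (verifyValue < beta)
              return -INF;                           // verification fails low: ignore the fail high
      }
      return beta;                                   // cutoff confirmed (or too shallow to verify)
  }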
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: verified null move

Post by bob »

jwes wrote:
bob wrote:In a private email discussion, someone asked if I had tested verified null-move on my cluster. I had not, because the idea made no sense to me and seemed to be nothing more than wasted search effort. But, I thought I would give it a whirl to see what happens.

The idea is that when a null-move search fails high, you do a normal search before you allow the null-move fail high to terminate the search. This normal search is done to a much-reduced depth. If the normal search fails low with the (beta-1, beta) bounds, then the null-move fail high is ignored.

This can be limited in two ways.

1. How much do you reduce the depth on the verification search? I tried 3, 4, 5 and 6 plies. The bigger the number, the better the result, as this tends to reduce the extra effort expended.

2. Where do you do the verification searches? I limited this based on remaining depth, so that it would not be done too near the tips, in order to control the effort expended. I tried this with remaining depth at least 3, then 4, ... and as before, the bigger the value, the better the result.

Unfortunately, bigger values reduce the impact of the verification search, and when the depth limit (in 2 above) is large enough, a verification search is never done. The upper bound on this mess was the rating of the original program without the verification search. In other words, I could not get this idea to produce anything other than an Elo _loss_. Verified in my usual cluster-testing approach.
Have you tried using verified null move only in the circumstances where you do not use null move at all currently?
No, because I am not aware of anyone who does null-move at all with zero pieces, and I use it everywhere else already.
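The guard bob refers to usually looks something like the sketch below; ok_to_do_nullmove is the name used in the Glaurung/Stockfish snippets quoted later in this thread, while non_pawn_material() and side_to_move() stand in for whatever accessors a given engine actually provides.

Code: Select all

  // Sketch only: skip null move entirely when the side to move has just
  // king and pawns, since those are the positions where zugzwang is likely.
  bool ok_to_do_nullmove(const Position& pos) {
      return pos.non_pawn_material(pos.side_to_move()) > Value(0);
  }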
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: verified null move

Post by bob »

After running the test as indicated, this is one of those changes that is, at best, worth nothing, or drops a couple of Elo.
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: verified null move

Post by BubbaTough »

This is something I spent a fair amount of time on a couple of years ago. I have an odd null move (its depth adjusts based on many factors) and an odd verification search, so my conclusions on this topic (as with most topics) may not be relevant. In the process of arriving at my null move / verification combo, I tried many, many combinations of different null move / verification schemes. My testing was not Bob-esque in its quality, but it was quite thorough by any other standard. Since I have spent so much time on this, perhaps more than on any other issue besides extensions/reductions/pruning, I should probably share my conclusions:

1. Most implementations of verification search hurt results slightly.
2. For most implementations of null move, it is possible with a lot of work to construct a set of circumstances for verification search where the net result is break-even, or in some cases a slight improvement.
3. The program plays more "aesthetically" with verification than without. That is, it avoids "obvious" errors in exchange for very slightly worse play at other times.
4. Verification is more important against humans than in computer vs. computer play, since it becomes relevant even with quite a few pieces on the board during "anti-computer" games that feature large locked pawn structures. Thus, almost all testing in this community underestimates the value of verification search as it applies to human vs. computer games.

Anyway, if you believe in this analysis, I think the action to take is based on which of the following camps you are in:

1. Keep the program as simple as possible. Only add complexity if it clearly strengthens the program.
2. Keep the playing style as aesthetic as possible. Add any feature that makes the games more attractive if it does not clearly weaken the program.

I am in camp 2, so I use it.

-Sam
Eelco de Groot
Posts: 4565
Joined: Sun Mar 12, 2006 2:40 am

Re: verified null move

Post by Eelco de Groot »

Thanks for running the tests, Bob; it is always good to get some hard data.

The result seems to be more or less what my impression was too: verification searches may not help in Elo, but they do not seem to hurt much either. The verification search was originally meant to prevent zugzwang problems, so it is more cosmetic in nature, I think?

Other variations are maybe possible, but the variant I have tried has not really been tested for Elo. For instance, you can do a very short verification search before the null move, and only if both it and the null move are higher than beta do you then do a longer verification search. That is both a form of IID and a test for zugzwang before doing the null move, I think. It is also maybe possible to vary R based on the short verification search.

If you could get more information out of fail highs in a standard null-window search than just whether the result is equal to or larger than beta, you could use the same short verification before the null move as a form of futility pruning.
I must admit I have not really checked whether my code for this ever gives good results in games. But, for instance, in the Glaurung clone Ancalagon the code that is active at the moment does, or at least is supposed to be able to do, some heavy futility pruning this way with the verification search on deep searches, even without testing what the null move would return; at least that was my theory. As I said, though, I have not verified whether it really helps, or even whether the code as it is now can actually work, other than in position tests.

Glaurung/Stockfish 1.3 code with some changes, probably not all of them sound, as explained later:

Code: Select all


  // search() is the search function for zero-width nodes.

  Value search(Position &pos, SearchStack ss[], Value beta, Depth depth,
               int ply, bool allowNullmove, int threadID) {

    assert(beta >= -VALUE_INFINITE && beta <= VALUE_INFINITE);
    assert(ply >= 0 && ply < PLY_MAX);
    assert(threadID >= 0 && threadID < ActiveThreads);

    if (depth < OnePly)
        return qsearch(pos, ss, beta-1, beta, Depth(0), ply, threadID);

    // Initialize, and make an early exit in case of an aborted search,
    // an instant draw, maximum ply reached, etc.
    init_node(ss, ply, threadID);

    // After init_node() that calls poll()
    if (AbortSearch || thread_should_stop(threadID))
        return Value(0);

    if (pos.is_draw())
        return VALUE_DRAW;

    EvalInfo ei;

    if (ply >= PLY_MAX - 1)
        return evaluate(pos, ei, threadID);

    // Mate distance pruning
    if (value_mated_in(ply) >= beta)
        return beta;

    if (value_mate_in(ply + 1) < beta)
        return beta - 1;

    Value approximateEval;

    // Transposition table lookup
    const TTEntry* tte = TT.retrieve(pos);
    Move ttMove = (tte ? tte->move() : MOVE_NONE);

    if (tte && ok_to_use_TT(tte, depth, beta, ply))
    {
        ss[ply].currentMove = ttMove; // can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }
    else
    {
        if (tte && tte->type() == VALUE_TYPE_EVAL)
            approximateEval = tte->value();
        else approximateEval = quick_evaluate(pos);
    }

    Value verificationsearchValue, nullValue;
    int R; // Null move reduction
    bool mateThreat = false;
    bool isCheck = pos.is_check();
    bool disableNull = !allowNullmove;
    bool perpetualExtension = false;

    // Null move search
    if (    allowNullmove
        &&  depth > OnePly
        && !isCheck
        && !value_is_mate(beta)
        &&  ok_to_do_nullmove(pos)
        &&  approximateEval >= beta - NullMoveMargin)
    {
        ss[ply].currentMove = MOVE_NULL;

        StateInfo st;

        // Do an approximate search

        verificationsearchValue = search(pos, ss, beta, depth-7*OnePly, ply, false, threadID);

        // If the approximate search was at high enough depth and fails high by five pawns, predict a beta cutoff

        if (depth >= 15 * OnePly)
        {
            if (verificationsearchValue >= beta + Value(0x500) && !value_is_mate(verificationsearchValue))
            {
                return beta;
            }
        }

        pos.do_null_move(st);

        if (depth >= 12 * OnePly)
        {
            if (verificationsearchValue >= beta + 0x100)
            {
                R = 5;
            }
            else if (verificationsearchValue <= beta - 0x100)
            {
                disableNull = true;
            }
            else R = 4;
        }
        else
        {
            R = (depth >= 5 * OnePly ? 4 : 3);
        } // Null move dynamic reduction

        if (!disableNull)
            nullValue = -search(pos, ss, -(beta-1), depth-R*OnePly, ply+1, true, threadID);
        else nullValue = verificationsearchValue;

        pos.undo_null_move();

        if (value_is_mate(nullValue))
        {
            /* Do not return unproven mates */
        }
        else if (nullValue >= beta && verificationsearchValue >= beta)
        {
            if (depth < 6 * OnePly)
                return beta;

            // Do zugzwang verification search
            verificationsearchValue = search(pos, ss, beta, depth-5*OnePly, ply, false, threadID);
            if (verificationsearchValue >= beta)
                return beta;
        } else {
            // The null move failed low, which means that we may be faced with
            // some kind of threat. If the previous move was reduced, check if
            // the move that refuted the null move was somehow connected to the
            // move which was reduced. If a connection is found, return a fail
            // low score (which will cause the reduced move to fail high in the
            // parent node, which will trigger a re-search with full depth).
The rest of the null-move code is, I believe, still largely the same as in Stockfish 1.3.
I must admit I suspect this is only half working, because the part with

Code: Select all

// Do an approximate search

verificationsearchValue = search(pos, ss, beta, depth-7*OnePly, ply, false, threadID);
will probably not generate results >= beta + Value(0x500).
I should change beta to beta + Value(0x500) for the deep pruning to work, but I also just want to know whether verificationsearchValue >= beta, because I am using that as well. Null-window searches with a beta that jumps around are probably not really what you want either. Because the deep pruning is only triggered if there is enough depth, this should not do anything in blitz or in a short analysis; it is just an experimental thing for deep searches, and any Elo testing is only feasible at longer time controls. That does not apply to using verificationsearchValue >= beta in parallel with the null move, of course; that should be testable with your or other methods. Only the deep pruning bit is hard to test (brute pruning is its name in the Blueberry Togas, where I believe I did use a higher beta to generate true beta + futility margin cutoffs).
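One possible sketch of that change (not Ancalagon's actual code; raisedBeta and BrutePruningMargin are illustrative names, the other identifiers are those of the snippet above): probe once against the raised bound so a fail high really proves a value of at least beta + 0x500, and only when that probe fails low re-probe at plain beta, at the cost of a second reduced-depth search.

Code: Select all

  const Value BrutePruningMargin = Value(0x500);
  Value raisedBeta = beta + BrutePruningMargin;

  // Probe against the raised bound, so that a fail high proves
  // verificationsearchValue >= beta + 0x500 rather than just >= beta.
  verificationsearchValue = search(pos, ss, raisedBeta, depth-7*OnePly, ply, false, threadID);

  if (depth >= 15 * OnePly
      && verificationsearchValue >= raisedBeta
      && !value_is_mate(verificationsearchValue))
      return beta;   // deep ("brute") pruning

  // A fail low against the raised bound says nothing about plain beta,
  // so re-probe at beta before relying on verificationsearchValue >= beta.
  if (verificationsearchValue < raisedBeta)
      verificationsearchValue = search(pos, ss, beta, depth-7*OnePly, ply, false, threadID);

Whether the second probe pays for itself is exactly the kind of question that would need the longer time controls mentioned above.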

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
Eelco de Groot
Posts: 4565
Joined: Sun Mar 12, 2006 2:40 am

Re: verified null move

Post by Eelco de Groot »

For easier comparison with the Ancalagon code, the Stockfish 1.3 code of search() starts as follows:

Code: Select all


  // search() is the search function for zero-width nodes.

  Value search(Position &pos, SearchStack ss[], Value beta, Depth depth,
               int ply, bool allowNullmove, int threadID) {

    assert(beta >= -VALUE_INFINITE && beta <= VALUE_INFINITE);
    assert(ply >= 0 && ply < PLY_MAX);
    assert(threadID >= 0 && threadID < ActiveThreads);

    if (depth < OnePly)
        return qsearch(pos, ss, beta-1, beta, Depth(0), ply, threadID);

    // Initialize, and make an early exit in case of an aborted search,
    // an instant draw, maximum ply reached, etc.
    init_node(ss, ply, threadID);

    // After init_node() that calls poll()
    if (AbortSearch || thread_should_stop(threadID))
        return Value(0);

    if (pos.is_draw())
        return VALUE_DRAW;

    EvalInfo ei;

    if (ply >= PLY_MAX - 1)
        return evaluate(pos, ei, threadID);

    // Mate distance pruning
    if (value_mated_in(ply) >= beta)
        return beta;

    if (value_mate_in(ply + 1) < beta)
        return beta - 1;

    // Transposition table lookup
    const TTEntry* tte = TT.retrieve(pos);
    Move ttMove = (tte ? tte->move() : MOVE_NONE);

    if (tte && ok_to_use_TT(tte, depth, beta, ply))
    {
        ss[ply].currentMove = ttMove; // can be MOVE_NONE
        return value_from_tt(tte->value(), ply);
    }

    Value approximateEval = quick_evaluate(pos);
    bool mateThreat = false;
    bool isCheck = pos.is_check();

    // Null move search
    if (    allowNullmove
        &&  depth > OnePly
        && !isCheck
        && !value_is_mate(beta)
        &&  ok_to_do_nullmove(pos)
        &&  approximateEval >= beta - NullMoveMargin)
    {
        ss[ply].currentMove = MOVE_NULL;

        StateInfo st;
        pos.do_null_move(st);
        int R = (depth >= 5 * OnePly ? 4 : 3); // Null move dynamic reduction

        Value nullValue = -search(pos, ss, -(beta-1), depth-R*OnePly, ply+1, false, threadID);

        pos.undo_null_move();

        if (value_is_mate(nullValue))
        {
            /* Do not return unproven mates */
        }
        else if (nullValue >= beta)
        {
            if (depth < 6 * OnePly)
                return beta;

            // Do zugzwang verification search
            Value v = search(pos, ss, beta, depth-5*OnePly, ply, false, threadID);
            if (v >= beta)
                return beta;
        } else {
            // The null move failed low, which means that we may be faced with
            // some kind of threat. If the previous move was reduced, check if
            // the move that refuted the null move was somehow connected to the
            // move which was reduced. If a connection is found, return a fail
            // low score (which will cause the reduced move to fail high in the
            // parent node, which will trigger a re-search with full depth).
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan