Starting with move ordering.

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Starting with move ordering.

Post by hgm »

To explain this in a more verbose way:

In chess you consider a move refuted when its verified result is worse than that of another move in the same position, or in any node on the path to it where the same side has the move. In other words, you can avoid a bad situation by deviating as early as you want, not just at the last moment.

So to know whether a move at ply 10 is refuted, you would have to compare its score to the score of the best move found so far at plies 10, 8, 6, 4 and 2. In the standard implementation of alpha-beta this is done by keeping this maximum-over-all-levels score in a variable alpha. The variable beta fulfills the same function for the nodes where the opponent has the move (i.e. plies 1, 3, 5, 7 and 9).
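In code, a minimal sketch of what that looks like (just an illustration, not taken from any particular engine; eval(), numMoves(), doMove() and undoMove() are placeholder helpers):

Code: Select all

int minNode(int depth, int alpha, int beta);

int maxNode(int depth, int alpha, int beta)
{
    if (depth == 0) return eval();
    for (int i = 0; i < numMoves(); i++) {
        doMove(i);
        int score = minNode(depth - 1, alpha, beta);
        undoMove(i);
        if (score >= beta) return score;   /* refuted: the opponent deviates higher up */
        if (score > alpha) alpha = score;  /* best score so far over all own plies on the path */
    }
    return alpha;
}

int minNode(int depth, int alpha, int beta)
{
    if (depth == 0) return eval();
    for (int i = 0; i < numMoves(); i++) {
        doMove(i);
        int score = maxNode(depth - 1, alpha, beta);
        undoMove(i);
        if (score <= alpha) return score;  /* refuted from the other side's point of view */
        if (score < beta) beta = score;    /* opponent's best so far over all his plies */
    }
    return beta;
}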
tpetzke
Posts: 686
Joined: Thu Mar 03, 2011 4:57 pm
Location: Germany

Re: Starting with move ordering.

Post by tpetzke »

In a chess engine most of the move lists to sort come from qSearch nodes (so only tactical moves) and happen to be short. So in iCE I implemented sorting networks for move lists of up to 12 moves, and for longer ones I use insertion sort.

This seems good enough and is simple and clean. Profiling shows that the engine does not spend much time sorting moves; at least not enough for me to make the code more complicated for a small speedup.
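For the longer lists, a plain insertion sort by score only takes a few lines. A generic sketch (the Move struct and its score field are made up for illustration, not iCE's actual types):

Code: Select all

typedef struct { int from, to, score; } Move;   /* illustrative only */

/* sort the move list in place, highest score first */
void sortMoves(Move *list, int count)
{
    for (int i = 1; i < count; i++) {
        Move m = list[i];
        int j = i - 1;
        while (j >= 0 && list[j].score < m.score) {
            list[j + 1] = list[j];   /* shift lower-scored moves down */
            j--;
        }
        list[j + 1] = m;
    }
}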
Thomas...

=======
http://macechess.blogspot.com - iCE Chess Engine
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with move ordering.

Post by Luis Babboni »

Thanks guys!

Now I have serious doubts — it seems I am doing alpha-beta in some other way.... :oops:
I must rethink it! 8-)
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with move ordering.

Post by Sven »

Luis Babboni wrote:Thanks guys!

Now I have serious doubts — it seems I am doing alpha-beta in some other way.... :oops:
I must rethink it! 8-)
You are certainly doing cutoffs. But as explained above, most probably you are doing only a part of all possible cutoffs, i.e. the "shallow cutoffs". You do not see a difference between "shallow" and "deep" until you start searching at least 4 plies deep, and even for a 4-ply search the difference will only be marginal. But it will become significant for higher depths.

I have run a quick simulation using a recursive tree search with a fixed number of 32 children per node and a pseudo-random evaluation returning integers in a range between -255 and +255. The tree search has two variants, the "shallow" and the "deep" variant (see pseudo code below). I obtained the following results (EBF = effective branching factor, quite high here due to the random evaluation which resembles a "random move ordering"):

Code: Select all

alphaBetaDemo_Shallow
maxDepth= 1: nodes=        33, EBF=33.0
maxDepth= 2: nodes=       427, EBF=20.7
maxDepth= 3: nodes=      7253, EBF=19.4
maxDepth= 4: nodes=     83620, EBF=17.0
maxDepth= 5: nodes=   1613959, EBF=17.4
maxDepth= 6: nodes=  17854503, EBF=16.2
maxDepth= 7: nodes= 115072951, EBF=14.2
maxDepth= 8: nodes=1737425225, EBF=14.3

alphaBetaDemo_Deep
maxDepth= 1: nodes=        33, EBF=33.0
maxDepth= 2: nodes=       399, EBF=20.0
maxDepth= 3: nodes=      7571, EBF=19.6
maxDepth= 4: nodes=     77264, EBF=16.7
maxDepth= 5: nodes=    991682, EBF=15.8
maxDepth= 6: nodes=   7780339, EBF=14.1
maxDepth= 7: nodes=  58833881, EBF=12.9
maxDepth= 8: nodes= 628258266, EBF=12.6
As you can see easily, the "deep" variant outperforms the "shallow" one with increasing search depth.
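(The EBF values above appear to be computed as nodes^(1/maxDepth); for example, for the "deep" variant at maxDepth 8, 628258266^(1/8) ≈ 12.6.)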

For those who are interested, here is the pseudo code for the two search variants, where the "shallow" variant should probably not be called "alphaBeta..." at all. The difference is only in the recursive call of the search function:

Code: Select all

int alphaBeta_Shallow(int depth, int beta)
{
    ++nodes;
    if (depth == 0) return eval();
    int bestScore = -INFINITE;
    for (int i = 0; i < NumChildren; i++) {
        /* only this node's best score so far is passed down as a bound */
        int score = -alphaBeta_Shallow(depth - 1, -bestScore);
        if (score >= beta) return score;
        if (score > bestScore) bestScore = score;
    }
    return bestScore;
}

int alphaBeta_Deep(int depth, int alpha, int beta)
{
    ++nodes;
    if (depth == 0) return eval();
    int bestScore = -INFINITE;
    for (int i = 0; i < NumChildren; i++) {
        /* the full window, including bounds inherited from higher up, is passed down */
        int score = -alphaBeta_Deep(depth - 1, -beta, -max(alpha, bestScore));
        if (score >= beta) return score;
        if (score > bestScore) bestScore = score;
    }
    return bestScore;
}
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with move ordering.

Post by Luis Babboni »

Sven Schüle wrote: ...
You do not see a difference between "shallow" and "deep" until you start searching at least 4 plies deep
...
I am starting to check whether my method needs to examine more end nodes than the alpha-beta method.

But does what you said, about needing at least 4 plies of depth to see it, mean this video is useless for my purpose?:
https://www.youtube.com/watch?v=xBXHtz4Gbdo
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with move ordering.

Post by Sven »

Luis Babboni wrote:
Sven Schüle wrote:...
You do not see a difference between "shallow" and "deep" until you start searching at least 4 plies deep
...
I am starting to check whether my method needs to examine more end nodes than the alpha-beta method.
Everyone would be surprised if it wouldn't, given a sufficient search depth.
Luis Babboni wrote:But does what you said, about needing at least 4 plies of depth to see it, mean this video is useless for my purpose?:
https://www.youtube.com/watch?v=xBXHtz4Gbdo
It is not "useless", I think it explains alpha-beta pruning (based on minimax) quite well in general (and there are certainly other good videos for that topic as well). Most of these typical alpha-beta examples, however, work with a tree that is at most 3 plies deep, for simplicity of presentation. But you won't see any "deep cutoffs" if the longest path from the root to any leaf node is smaller than 4. Maybe someone can compile (or link to) an example that shows a deep cutoff in a 4-ply tree.

The CPW page about beta cutoffs says:
A deep cutoff occurs if the window was narrowed closer to the root with an odd ply-distance of at least three to the current node.
That is exactly the reason why you need a minimum depth of 4, since a cutoff decision at the current node is only possible if there is at least one more ply to be searched, and the current node itself is at least 3 plies away from the root node.
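To sketch such a 4-ply example (with made-up scores): suppose the first root move has already been searched and returned a score of 5, so alpha = 5 along the whole path below the second root move. That second move leads to a MIN node at ply 1, whose first move leads to a MAX node at ply 2, whose first move leads to a MIN node at ply 3. If the first leaf below that ply-3 node evaluates to 3, the minimizer there already has a score <= alpha = 5, and the remaining leaves under that ply-3 node can be skipped: whatever its exact value, that line can no longer beat the 5 the maximizer has already secured at the root. The bound that triggers the cutoff was established 3 plies above the current node, which is exactly a deep cutoff; an algorithm that only remembers the best score of the immediate parent would miss it. Note that the skipped nodes are at ply 4, which is why you need a tree of at least 4 plies to see one.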
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with move ordering.

Post by Luis Babboni »

Well!!! :D

Yes, I was doing just shallow cutoffs!! :oops:
I still use my old system, but instead of comparing the last node only with the previous node, I now compare it with all previous nodes, stepping back by 2.
That is, if I am at depth 4, I compare the current node value at depth 4 with the node values at depths 3 and 1, and if I am at depth 5, I compare the current node value at depth 5 with the node values at depths 4 and 2 in previous branches, to decide whether or not to continue with the next leaf node in the branch I am in.

Could you tell from this rather poor explanation whether Soberango is doing deep cutoffs now?
Here is an image of the difference between what I call (correctly or not, I don't know) "shallow cutoff" and "deep cutoff":

https://dl.dropboxusercontent.com/s/o85 ... w.jpg?dl=0

White numbers on the right are the values of all nodes at depth 4, with just 2 "moves" at each depth.
The rightmost green numbers are the evaluated nodes; their total number appears at the bottom.
The leftmost green numbers show how I selected results to obtain the final evaluation of the position when searching to depth 4.
The second-to-last number at the bottom is that final evaluation.

An anxious (or lazy) question.... is deep cutoff the same as alpha-beta? It seems this shows I still do not understand the alpha-beta idea. :oops:

Running the program that made the picture above (which only works for depth 4 and 2 values at each depth), I think the difference in the number of nodes evaluated between "shallow cutoff" and "deep cutoff" is around:
50%: none
40%: 1 node
10%: 2 nodes.

I am right now running a match between Soberango with "shallow cutoff" and with "deep cutoff", and the current partial result is:
6-5-2 for "shallow cutoff" :? (if there is any difference, it seems to be small).
At the beginning the "deep cutoff" version used to reach one ply deeper in the search, while in endgames, with few pieces, "shallow cutoff" reaches one ply deeper! :shock: Maybe my way of implementing "deep cutoff" is just bad and slow. :oops:

Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with move ordering.

Post by Sven »

Luis Babboni wrote:Could you tell from this rather poor explanation whether Soberango is doing deep cutoffs now?
It might be correct, but I think I can't tell exactly from that explanation; I can only explain again in my own words how it should work: You are doing "deep cutoffs" if you are using a window of two bounds (usually called alpha and beta) at each depth, where one bound (alpha) is the worst case for the maximizer and the other (beta) is the worst case for the minimizer. At any node you pass the alpha-beta window to the next level down the tree after making a move.

At a "MAX" node the maximizer can stop trying further moves if he found one move that gets a score >= beta, since that is sufficient to refute the previous move of the minimizer. Otherwise, finding a move that gets a score > alpha raises alpha and thus narrows the window for the remaining moves.

At a "MIN" node the minimizer can stop trying further moves if he found one move that gets a score <= alpha, since that is sufficient to refute the previous move of the maximizer. Otherwise, finding a move that gets a score < beta lowers beta and thus narrows the window for the remaining moves.

The key idea of alpha-beta is to ignore all branches (moves) whose score can't have any influence on the score of the root node. These scores are "outside the window", either <= alpha or >= beta. Narrower alpha-beta windows help to ignore more branches.

Using Negamax instead of Minimax makes the whole thing a lot simpler since all nodes are "MAX" nodes. Using a recursive algorithm instead of an iterative one will simplify your task even a bit more.
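A minimal recursive negamax sketch of the same window logic (essentially the structure of alphaBeta_Deep above, with alpha narrowed in place; the move-making helpers are placeholders, not any particular engine's API):

Code: Select all

int negamax(int depth, int alpha, int beta)
{
    if (depth == 0) return eval();
    int bestScore = -INFINITE;
    for (int i = 0; i < numMoves(); i++) {
        doMove(i);
        int score = -negamax(depth - 1, -beta, -alpha);   /* flip the sign, swap the window */
        undoMove(i);
        if (score >= beta) return score;                  /* cutoff: refutes the previous move */
        if (score > bestScore) {
            bestScore = score;
            if (score > alpha) alpha = score;             /* narrow the window for later moves */
        }
    }
    return bestScore;
}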
Luis Babboni wrote:An anxious (or lazy) question.... is deep cutoff the same as alpha-beta? It seems this shows I still do not understand the alpha-beta idea. :oops:
See it like this: you can only get deep cutoffs if you use two bounds alpha and beta, not with only one bound. So an algorithm that uses only one bound is not the "alpha-beta" algorithm.
Luis Babboni wrote:I am right now running a match between Soberango with "shallow cutoff" and with "deep cutoff", and the current partial result is:
6-5-2 for "shallow cutoff" :? (if there is any difference, it seems to be small).
At the beginning the "deep cutoff" version used to reach one ply deeper in the search, while in endgames, with few pieces, "shallow cutoff" reaches one ply deeper! :shock: Maybe my way of implementing "deep cutoff" is just bad and slow. :oops:
13 games is far from sufficient to draw *any* conclusions about playing strength, that is just guesswork, unless the result is close to 100% for one side. You certainly would need to run many hundreds of games, maybe even a few thousand. The reason is a statistical one: you could derive an estimated Elo rating difference from the result, but the statistical error would be much higher than that difference in most cases with such a low number of games, so you would know nothing. To get a useful result in an acceptable amount of time these games need to be very fast, like a few seconds per game.

But maybe it is not even necessary to play many games to get a satisfying answer to your actual question. What you want to know is in fact whether you have implemented your change correctly, not whether a correct "deep cutoff" version plays stronger than a correct "shallow cutoff" version (you know, or need to believe, that the latter is true). Correctness can also be checked empirically by letting the old and new versions search a given set of test positions to a fixed depth that is sufficient for this kind of problem (i.e., certainly more than 4 to see any difference - choose a depth that lets your engine complete the search very quickly). The change is correct if the new implementation always plays the same move while searching fewer nodes in all given positions, where "fewer" should be "considerably fewer" in most cases, depending on the search depth.
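As a sketch of that test procedure (everything here is hypothetical: searchFixedDepth_old/new, the Result fields and the FEN list are stand-ins for whatever the engine actually provides):

Code: Select all

#include <stdio.h>

typedef struct { int bestMove; long nodes; } Result;

/* hypothetical entry points of the old and new versions */
Result searchFixedDepth_old(const char *fen, int depth);
Result searchFixedDepth_new(const char *fen, int depth);

void compareVersions(const char **fens, int numPositions, int depth)
{
    for (int i = 0; i < numPositions; i++) {
        Result o = searchFixedDepth_old(fens[i], depth);
        Result n = searchFixedDepth_new(fens[i], depth);
        printf("%s: move %s, nodes %ld -> %ld%s\n",
               fens[i],
               o.bestMove == n.bestMove ? "same" : "DIFFERENT",
               o.nodes, n.nodes,
               n.nodes <= o.nodes ? "" : " (MORE nodes!)");
    }
}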
Luis Babboni
Posts: 464
Joined: Sat Feb 28, 2015 4:37 pm
Location: Argentina

Re: Starting with move ordering.

Post by Luis Babboni »

Thanks Sven!

Did you see and understand my image?
Could you, when you have time, work through the same tree "by hand" and say which leaf nodes are evaluated using alpha-beta?

Thanks in advance!

PS: currently 21-21-7, deep vs shallow. :?
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Starting with move ordering.

Post by Sven »

Luis Babboni wrote:Thanks Sven!

Did you see and understand my image?
Could you, when you have time, work through the same tree "by hand" and say which leaf nodes are evaluated using alpha-beta?
Sorry, I have seen it but I do not fully understand it. In which order are nodes evaluated, top-down or bottom-up? Usually we go left to right ... Also I do not understand why there are some gaps and why there is no visible root node.

There are lots of examples for alpha-beta trees which are far better than I could create so you'd better rely on these instead ;-)