
Re: Starting with quiescence search

Posted: Wed Aug 24, 2016 1:09 pm
by Luis Babboni
Thanks Robert!
Yes, I was advised not to include checks for the moment, so I just included captures.

Thanks HG!
About TT: it is one of the things I wanted (now I have doubts) to deal with before starting a new code, to be more prepared and not do foolish things that would oblige me to start a new code again, as seems to be happening now with move ordering.
About IID: I am not sure I understand. First I simply do a ply-1-depth search, followed by a ply-2-depth search, and so on, in the same way as I did before for any depth n. Later (it was my last change, from version 0.06 to the current version 0.07) I started each depth-n search with the best move from the depth-(n-1) search. Is this last thing IID?
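Maybe a little sketch helps to show what I mean (this is not my real code, just the idea; NO_MOVE, SearchRoot and GetRootBestMove are invented names):

Code: Select all

// Iterative deepening: each depth-n search starts with the best
// move found by the depth-(n-1) search (invented helper names).
Move bestMove = NO_MOVE;
int score = 0;
for (int n = 1; n <= maxDepth; ++n) {
    // SearchRoot is assumed to try 'bestMove' first at the root
    score = SearchRoot(pos, n, bestMove);
    bestMove = GetRootBestMove(); // remember it for the next iteration
}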

Re: Starting with quiescence search

Posted: Wed Aug 24, 2016 3:16 pm
by Luis Babboni
Sorry, IID is not what I thought.

I decided to work on adding move ordering to Soberango before adding QS, then.
I think I'll be able to do it without needing to rewrite the code! :)

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 3:25 pm
by Luis Babboni
Hi people, sorry to bother you again. :?

I'm starting to fight with move ordering and I want to be sure I do not misunderstand something.
So, if you can and want to, when you have time... I will be thankful if you take a brief look at this:

For simplicity, suppose that instead of around 30 possible moves from each node we have just 2. See the image (the inline image does not seem to work, so here is a direct link):
https://dl.dropboxusercontent.com/s/bxj ... c.jpg?dl=0


Actually my engine works this way, without any move ordering other than this: node 1 is the best move found in the ply-2 search done before in iterative deepening.
-Generate node 1
-Generate node 3
-Generate node 7
-Evaluate node 7 (+100)
-Generate node 3
-Generate node 8
-Evaluate node 8 (+50)
-Generate node 3
-Generate node 1
-Generate node 4
-Generate node 9
-Evaluate node 9 (x)
-Generate node 4
-if x > 50 then '(the "alphabeta" does this analysis)
..............Generate node 10.
..............Evaluate node 10.
..............Generate node 4.
Endif
-Generate node 1
-Generate node 0
-Generate node 2
etc.
So, thanks to the "alphabeta", if x <= 50, I never needed to generate and evaluate node 10.

So now the questions:


-About "alpha Beta": I do not know why I need any other value than, for the moment, the 50 obteined evaluating node 8. If node 9 and node 10 evaluations are greater than 50, say 120 and 80, the new only value I will store will be the 80, but I never need two values. :?

-About move ordering: suppose node 1 and node 2 are already ordered because of the best move from the previous depth iteration; is the idea to work this way?:
-Generate node 1.
-Generate node 3.
-Generate node 1.
-Generate node 4.
-Order node 3 and node 4
-if node 3 is before node 4 then
............Generate node 7
............Generate node 3
............Generate node 8
............Order node 7 and node 8
............if node 7 is before node 8 then
..............................Generate node 3
..............................Generate node 7
..............................Evaluate node 7 (+100)
..............................Generate node 3
..............................Generate node 8
..............................Evaluate node 8 (+50)
Generate node 3
Generate node 1
Generate node 4
Generate node 9
Generate node 4
Generate node 10 '(which I never need to generate with my way of working!)
Order node 9 and node 10
If node 9 is before node 10 then....
etc.

Shown this way, the second option seems much longer... but:
Is it assumed that the time you lose at first, generating all the nodes, will be far less than the time you will gain by finding that good value I talked about earlier?
Could it be that at my current point, where the evaluation just counts material, the important time consumer for my engine is node generation, while in more advanced engines the time goes into the evaluation, and for example with bitboards the node generation time becomes negligible compared to the evaluation time?

I hope I explained myself and, of course, thank you for your time in case you take it to help me.

Best regards!

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 4:53 pm
by Sven
Luis Babboni wrote:I'm starting to fight with move ordering and I want to be sure I do not misunderstand something.
[...]
Actually my engine works this way, without any move ordering other than this: node 1 is the best move found in the ply-2 search done before in iterative deepening.
[...]
So, thanks to the "alphabeta", if x <= 50, I never needed to generate and evaluate node 10.
So now the questions:

-About "alpha-beta": I do not know why I need any other value than, for the moment, the 50 obtained by evaluating node 8. If node 9 and node 10 evaluations are greater than 50, say 120 and 80, the only new value I will store will be the 80; I never need two values. :?

-About move ordering: suppose node 1 and node 2 are already ordered because of the best move from the previous depth iteration; is the idea to work this way?:
Before replying to this I have one question: are your scores from White's viewpoint (minimax) or from the moving side's viewpoint (negamax)? So far I have understood that you are still using minimax. But if that is the case, then the comparison of nodes 7 and 8 from your example would prefer node 7, evaluated as +100 (for White), over node 8 (only +50), since it is White to move at the parent node 3. However, you are using the +50 from node 8 as the score to compare against, so something must be wrong here. With negamax it would be wrong as well (+100 > +50 from the viewpoint of the moving side).

Maybe I missed something in your explanation. But it is crucial to get these things right before we talk about your questions.

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 5:09 pm
by Luis Babboni
Sven Schüle wrote:
Luis Babboni wrote:I'm starting to fight with move ordering and I want to be sure I do not misunderstand something.
[...]
Actually my engine works this way, without any move ordering other than this: node 1 is the best move found in the ply-2 search done before in iterative deepening.
[...]
So, thanks to the "alphabeta", if x <= 50, I never needed to generate and evaluate node 10.
So now the questions:

-About "alpha-beta": I do not know why I need any other value than, for the moment, the 50 obtained by evaluating node 8. If node 9 and node 10 evaluations are greater than 50, say 120 and 80, the only new value I will store will be the 80; I never need two values. :?

-About move ordering: suppose node 1 and node 2 are already ordered because of the best move from the previous depth iteration; is the idea to work this way?:
Before replying to this I have one question: are your scores from White's viewpoint (minimax) or from the moving side's viewpoint (negamax)? So far I have understood that you are still using minimax. But if that is the case, then the comparison of nodes 7 and 8 from your example would prefer node 7, evaluated as +100 (for White), over node 8 (only +50), since it is White to move at the parent node 3. However, you are using the +50 from node 8 as the score to compare against, so something must be wrong here. With negamax it would be wrong as well (+100 > +50 from the viewpoint of the moving side).

Maybe I missed something in your explanation. But it is crucial to get these things right before we talk about your questions.

Naaaa, so sorry... I posted it right at first, then deleted the post and replaced it with a wrong one!!!! :oops: What a waste of your time! :oops:

First question: I count points from White's side.

The engine must store node 7's value (+100).
So what I understand as the "alphabeta idea" works in such a way that node 10 is only generated and evaluated if node 9's value is less than 100, because if it is >= 100, Black will never choose node 4, since the final score would be at least as good (for White) as the best one it gets by choosing node 3.
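In code it would be something like this (just the idea, with invented names, not my real code):

Code: Select all

// x = evaluation of node 9; node 3 already guarantees White at least 100
if (x >= 100) {
    // Black will never choose node 4, so node 10 does not even
    // have to be generated: this is the cutoff
    skipRestOfNode4 = true;
}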

I think, and hope, that this time I got things right! :oops:

Here I placed the side to move better in the drawing, so as not to confuse myself!:
https://dl.dropboxusercontent.com/s/ab6 ... c.jpg?dl=0

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 5:27 pm
by Luis Babboni
Mmmmm, I think I am starting to understand why there are two values!!!
The 100 (alpha) is the value for the jump from depth 2 to depth 3... and beta is the value for the jump from depth 1 to depth 2?
In fact, now I see that I store neither one value nor two... I store one value for each jump between depths!!!

I'm called to lunch... I'll be clearer later! :P

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 5:50 pm
by AlvaroBegue
Luis Babboni wrote:Mmmmm, I think I am starting to understand why there are two values!!!
The 100 (alpha) is the value for the jump from depth 2 to depth 3... and beta is the value for the jump from depth 1 to depth 2?
In fact, now I see that I store neither one value nor two... I store one value for each jump between depths!!!

I'm called to lunch... I'll be clearer later! :P
The way I think about it, the recursive call in an alphabeta search is saying: "Give me the minimax score of this node if it is between alpha and beta; if it's alpha or less, I don't care exactly what it is, because the active player (the one whose turn it is on the given position) will avoid this branch and the score at the root will be unaffected; similarly if it's beta or more, the inactive player will avoid this branch and the score at the root will be unaffected."
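In code terms that contract could be written like this (only a sketch of the convention, matching the fail-hard style of the examples further down):

Code: Select all

// What a call is allowed to return (negamax view, fail-hard):
int score = AlphaBeta(pos, alpha, beta, depth);
if (score <= alpha) {
    // "fail low": the real value is alpha or less; the active player
    // will avoid this branch, so the exact value does not matter
} else if (score >= beta) {
    // "fail high": the real value is beta or more; the inactive player
    // will avoid this branch, so the exact value does not matter
} else {
    // the exact minimax score of the node
}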

Does that make things any more clear?

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 7:08 pm
by Luis Babboni
I added a new column on the left of my drawing:
https://dl.dropboxusercontent.com/s/8yk ... 2.jpg?dl=0
(Sorry to ask again: is there really no way to insert images? The Img button seems not to work for me.)

I'll try to explain how my engine works:

-set flag AB to 1
-if AB=1 then generate node/position 1 (move)
-if AB=1 then generate node 3 (move)
-if AB=1 then generate node 7 (move)
-evaluate node 7 (+100)
-store value 100 at level 3.
-generate node 3 (unmove)
-if the store at level 2 is empty, copy the value stored at level 3 to level 2.
-if AB=1 then generate node 8 (move)
-evaluate node 8 (+50)
-generate node 3 (unmove)
-Because the level-2 store is not empty, it is White's turn, and 50 <= the value stored at this level (100), do not change the value stored at level 2.
-generate node 1 (unmove)
-Because the store at level 1 is empty, copy the value stored at level 2 to the level-1 store.
-if AB=1 then generate node 4 (move)
-if AB=1 then generate node 9 (move)
-evaluate node 9 (X)
-store value X at level 3
-generate node 4 (unmove)
-if the value stored at level 3 (X) is > the value stored at level 2 (100), then set flag AB to 0, because Black will never choose node 4 instead of node 3, since on the next move White could choose node 9.

-Because AB=0, instead of moving from node 4 to node 10, the engine unmoves to node 1, and never even needs to generate node 10.
-Generate node 0. (unmove)
-Because the store at level 0 is empty, copy the value stored at level 1 to level 0.
-set AB to 1.
-generate node 2. (move)
...

Each time a new value is stored at level 0, the best line is updated with the line from the current position to the position that produced this new value.

The bold line is my "alpha-beta" without beta! :?

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 9:10 pm
by Sven
Hi Luis,

I'm afraid it gets more complicated with each new post ... I was already about to reply to your questions, then I received an "interrupt", and now I'm back and see what you wrote in the meantime.

So let me first reply to your question regarding move ordering. Afterwards I'll address the alpha-beta topic, which in my opinion needs quite a different approach in your case.

- Move ordering based on the PV: you cannot store the whole tree that you searched during some iteration N; it is simply too large. What you store is at least the best root move or, even better, the PV of the root node. You can use this to start iteration N+1 by searching that PV first: at the root (ply 0), search PV[0] first, and once you are there, search PV[1] first, and so on. This will only affect the "leftmost" branches of your tree (if you look at your own "move ordering picture" you will understand what I mean). Whenever, at some node, you have finished searching the PV move and continue with the next move, you can't use anything from that "root PV" any longer. I guess you know that already.
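As a rough sketch of that first part (invented names, not a complete implementation):

Code: Select all

// Try the PV move of the previous iteration first (sketch only).
// 'followingPV' stays true only while we are on the leftmost path.
if (followingPV && ply < pvLength) {
    for (int i = 0; i < numberOfMoves; ++i) {
        if (moveList[i] == pv[ply]) {
            Move tmp = moveList[0];     // swap the PV move to the front
            moveList[0] = moveList[i];
            moveList[i] = tmp;
            break;
        }
    }
}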

Apart from this move ordering based on the PV of the previous iteration, there are also other kinds of move ordering tools that are based on information saved during previous parts of the search, e.g. the transposition table, killer moves, or history heuristics. I guess you are currently using none of these.

So when talking about move ordering, everything else is about "static move ordering", based solely on characteristics of the given move and the current board. Usually we prefer captures (and promotions) over quiet moves; among captures we prefer capturing the opponent's higher-valued pieces over the lower-valued ones; and if two moves are still equal, we prefer capturing with our least-valued attacker. That is MVV/LVA ordering (Most Valuable Victim / Least Valuable Aggressor), which we usually apply to all capture moves.
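A simple way to get this order is to give every capture a numeric score and sort the captures by it, highest first. For example (one common recipe, with assumed piece values and invented helper names):

Code: Select all

// MVV/LVA: most valuable victim first, least valuable attacker as
// tie-breaker (assumed values: P=100, N=320, B=330, R=500, Q=900).
int MvvLvaScore(Move m) {
    return 10 * PieceValue(victimOf(m)) - PieceValue(attackerOf(m));
}
// e.g. PxQ scores 8900 while QxP scores only 100, so PxQ comes first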

So what you should try to achieve, in your current state, is:

a) If you are on the (small) path given by the saved PV of the previous iteration then try the PV move belonging to the current node first.

b) Otherwise, try all captures prior to all other moves, obeying the MVV/LVA order for captures as described above. I would try non-capturing promotions after capturing any knights/bishops but before capturing any pawns, but this can also be done differently. (Don't assume that a promotion wins a queen, though.) The order of all other (quiet) moves does not matter much for you in the beginning. But trying captures first is crucial.


- Now the other topic: alpha-beta. First of all, it is almost completely unrelated to the topic of "move ordering", so you should really see it in a separate context. The alpha-beta algorithm follows the idea of reducing the number of nodes needed to search a game tree to the minimum required to get the correct minimax score of the root node (where "minimax" could of course be replaced by "negamax"). Its success, in terms of shrinking the tree size, is greater with better move ordering (that is the only relation to move ordering). If you search the tree by always trying the worst move first, then the second-worst, etc., and finally the best one, then alpha-beta will save nothing and will search the same tree as without alpha-beta. The reason is that alpha-beta aims to avoid searching nodes for which it is already known that their score cannot affect the score of the root node. But if each move is better than its predecessor then no cutoffs are possible.

And now the bad news for you: it is hardly possible for me to walk through your example in full detail, nor can I review the quite complex method you are using for your search. The reason is simple: you are making things much too complicated. The good news is: you can do better :-) What you are currently doing in your tree search can, according to my current knowledge of your program, be summarized as follows:

- you use minimax (most people use negamax),
- you deviate somehow from the well-known alpha-beta algorithm,
- your search is iterative (most people use a recursive search, which is also much more readable and less complex for beginners),
- and you do not store generated moves in a move list but immediately make each move you have just generated, which makes any kind of static move ordering much harder.

So there are a couple of things which could make your chess-programming life easier ...

A good answer to your specific question regarding alpha-beta, and why two bounds are needed, can be found at https://chessprogramming.wikispaces.com/Alpha-Beta:
The situation becomes even more complicated, however, when we go to a search depth of 3 or greater, because now both players can make choices affecting the game tree. Now we have to maintain both a lower bound and an upper bound (called Alpha and Beta.) We maintain a lower bound because if a move is too bad we don't consider it. But we also have to maintain an upper bound because if a move at depth 3 or higher leads to a continuation that is too good, the other player won't allow it, because there was a better move higher up on the game tree that he could have played to avoid this situation. One player's lower bound is the other player's upper bound.
Nevertheless, I would add that the explanation of why we need a lower bound is not precise enough for my taste. At the current node the lower bound "Alpha" (in the case of minimax it is "Alpha" for White and "Beta" for Black) is simply needed as the upper bound for the opponent, while the upper bound is used directly to test for a cutoff: if a subtree returns a score that is beyond our current upper bound (in the case of minimax: >= Beta for White or <= Alpha for Black!), then we cut off all remaining subtrees, since we know that we will return to our parent node with a score that is no better than the opponent's lower bound, so it will have no effect. To be able to do the same thing one ply deeper, you use two bounds: one is saved as the bound from the opponent's viewpoint at the deeper node, and the other is used at the current node.

I hope this helps - maybe things will become clearer when you rewrite your program, and "do it the easy way" this time ;-) In fact recursive alpha-beta with Negamax is essentially not much more than this:

Code: Select all

int AlphaBeta(Position pos, int alpha, int beta, int depth) {
    if (depth == 0) return Eval(pos); // or QSearch(pos);
    GenerateMoves(pos); // store moves in move list
    for (all moves in move list) {
        MakeMove(pos, move);
        int score = -AlphaBeta(pos, -beta, -alpha, depth - 1);
        UnmakeMove(pos, move);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
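At the root you would call this with a full window, e.g. AlphaBeta(rootPos, -INFINITE, +INFINITE, maxDepth), and keep track of which root move raised alpha.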

Re: Starting with quiescence search

Posted: Thu Aug 25, 2016 10:38 pm
by Sven
Sven Schüle wrote:In fact recursive alpha-beta with Negamax is essentially not much more than this:

Code: Select all

int AlphaBeta(Position pos, int alpha, int beta, int depth) {
    if (depth == 0) return Eval(pos); // or QSearch(pos);
    GenerateMoves(pos); // store moves in move list
    for (all moves in move list) {
        MakeMove(pos, move);
        int score = -AlphaBeta(pos, -beta, -alpha, depth - 1);
        UnmakeMove(pos, move);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
In contrast to this, an iterative alpha-beta function with Minimax would perhaps look like this (not compiled, not tested!):

Code: Select all

int AlphaBeta_Iterative_Minimax(Position pos, int maxDepth)
{
    enum Operation { GENERATE, MAKE, UNMAKE };

    Operation op;                       // current state of state machine indicating what to do next
    int ply;                            // distance to root node
    int depth;                          // remaining depth until reaching full-width horizon
    Move moveList[MAX_PLY][MAX_MOVES];  // move list per ply
    int numberOfMoves[MAX_PLY];         // size of move list per ply
    int moveIndex[MAX_PLY];             // index of current move in move list per ply
    int alpha[MAX_PLY];                 // Alpha bound per ply
    int beta [MAX_PLY];                 // Beta bound per ply
    int score[MAX_PLY];                 // current score per ply

    // start at root
    ply = 0;
    depth = maxDepth;
    alpha[ply] = -INFINITE;
    beta [ply] = +INFINITE;
    op = GENERATE;

    while (not(op == UNMAKE and ply == 0)) {
        switch (op) {
            case GENERATE:
            {
                if (depth == 0) {
                    // leaf node, return evaluation to parent
                    score[ply] = Eval(pos); // or QSearch(pos)
                    op = UNMAKE;
                } else {
                    numberOfMoves[ply] = GenerateMoves(pos, moveList[ply]);
                    moveIndex[ply] = 0;
                    op = MAKE;
                }
                break;
            }

            case MAKE:
            {
                if (moveIndex[ply] < numberOfMoves[ply]) {
                    // try next move from move list
                    MakeMove(pos, moveList[ply][moveIndex[ply]]);
                    ++ply;
                    --depth;
                    // child node inherits the current bounds of its parent
                    alpha[ply] = alpha[ply - 1];
                    beta [ply] = beta [ply - 1];
                    op = GENERATE;
                } else {
                    // no moves left in move list, return best score to parent
                    if (pos.side == WHITE) {
                        // Maximizer
                        score[ply] = alpha[ply];
                    } else {
                        // Minimizer
                        score[ply] = beta[ply];
                    }
                    op = UNMAKE;
                }
                break;
            }

            case UNMAKE:
            {
                ++depth;
                --ply;
                UnmakeMove(pos, moveList[ply][moveIndex[ply]]);

                // back up score returned from subtree
                int tempScore = score[ply + 1];
                if (pos.side == WHITE) {
                    // Maximizer
                    if (tempScore >= beta[ply]) {
                        // cutoff, return bound to parent
                        score[ply] = beta[ply];
                        op = UNMAKE;
                    } else {
                        if (tempScore > alpha[ply]) {
                            alpha[ply] = tempScore;
                        }
                        ++moveIndex[ply];
                        op = MAKE;
                    }
                } else {
                    // Minimizer
                    if (tempScore <= alpha[ply]) {
                        // cutoff, return bound to parent
                        score[ply] = alpha[ply];
                        op = UNMAKE;
                    } else {
                        if (tempScore < beta[ply]) {
                            beta[ply] = tempScore;
                        }
                        ++moveIndex[ply];
                        op = MAKE;
                    }
                }
                break;
            }

            default:
                break;
        }
    }
    return score[ply];
}
It could be done shorter, maybe with only two instead of three states, but certainly not by an order of magnitude.

To be fair, the recursive Negamax non-pseudocode version with most of the required variables would be closer to this one (although "ply" is actually unused here, it will be required e.g. for mate scoring, which is omitted):

Code: Select all

int AlphaBeta(Position pos, int alpha, int beta, int ply, int depth) {
    if (depth == 0) return Eval(pos); // or QSearch(pos);
    Move moveList[MAX_MOVES];
    int numberOfMoves = GenerateMoves(pos, moveList); // store moves in move list
    for (int i = 0; i < numberOfMoves; i++) {
        Move move = moveList[i];
        MakeMove(pos, move);
        int score = -AlphaBeta(pos, -beta, -alpha, ply + 1, depth - 1);
        UnmakeMove(pos, move);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}
Still considerably shorter and easier to understand.