if (score > alpha)
{
    best_sofar = xxx;
    if (score >= beta) return score; // or return beta for fail hard
    alpha = score;
}
It is probably better to introduce a ply counter as well, since you only have to update best_move at ply 0. I assume best_move is the move that you eventually want to play.
if (score > alpha)
{
    best_sofar = xxx;
    if (ply == 0) best_move = best_sofar;
    if (score >= beta) return score; // or return beta for fail hard
    alpha = score;
}
Unfortunately it behaves the same as the current version.
In a mate-in-1 position both versions find the mate at depths 2-5, but from depth 6 onwards they no longer report it.
Anyway, I want to thank everyone for taking the time to explain.
I guess my main problem is a lack of understanding of the algorithm rather than not knowing how to apply it, so I will study negamax search first, then alpha-beta pruning, and only then get back to this issue.
Again, thank you all, and sorry for torturing community members without even a basic understanding of the algorithm itself.
Now I know the direction: work this algorithm out until I understand it in detail, and then try to apply it to chess.
If you fix the two main issues that I mentioned previously (best_move being a global variable and missing control of mate distance) then it should work much better ...
Of course, when you have "ply" as an additional parameter, you could also fix the "best_move" problem by restricting access to it to the case "ply == 0". I just did not suggest it that way because
a) I did not want to mix the two different problems and
b) in general I think that global variables should be avoided anyway, especially for topics like this one.
Apart from the two main issues I mentioned, your negamax and alpha-beta implementation looks correct to me. It will certainly help to understand the underlying algorithms but that alone will probably not solve the concrete problem you have.
Joost Buijs wrote: ↑Thu Aug 06, 2020 4:46 pm
One of the things I notice is that you don't update the best move when there is a beta cut-off, but if everything else is right it should still return the mate score.
Are you sure that the move-generator, the check for illegal moves and the is_square_attacked function are working in the way they should?
re: Are you sure that the move-generator, the check for illegal moves and the is_square_attacked function are working in the way they should?
Yes, absolutely because I've passed the perft test.
re: One of the things I notice is that you don't update the best move when there is a beta cut-off
- Should I? Sorry for a dumb question, but my understanding of beta cutoffs is limited, something like: skip calculating the branch because it gives a losing score. So I'm not sure what exactly it returns.
// fail hard beta-cutoff
if (score >= beta)
{
    best_move = best_so_far;
    return beta;
}
So can I say that the current version would work in brute-force negamax when we associate best_score (the alpha equivalent) with the best move?
"best_move" does not need to be updated on a beta cutoff, since the parent node will discard the move leading to this subtree (the one that is cut off), so our "best_move" would not be propagated towards the root; it will never be part of the PV.
This might be so, but you'll need it if this is the root-node and there is no parent node, which is absolutely not clear from this example.
You can argue that there will never be a beta cutoff at the root, but it solely depends upon how you handle the root.
You are right for a search with aspiration windows, I assumed they are not used in this case. When always searching the root with a full window there will never be a beta cutoff at the root.
I think that from a "didactic" viewpoint it is important to state that moves with a score outside the window never belong to the PV, even at the root. That you might still want to play a "best move" on the board if it fails high at the root and you run out of time before the re-search delivers something even better can be seen as an "advanced" thought. Furthermore, you might also want to store a best move in the TT even if it fails high, but again: there is no TT yet in the given example. In my opinion, if you start with a very simple search without any special features, the code can be really minimal, containing only those parts that are needed.
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
Mate distance solution actually solved my issue! Thank you so much!
I understand that global variables are a bad idea, but let's say we drop alpha-beta and keep the brute-force negamax (this time I wrote it myself, no copy-paste):
But what does best_move contain? I am pretty sure that it will show you a random move from somewhere in the tree, maybe even an opponent's move. Unless the best root move is always tried last ...
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
If there is a mate in 1 and you do not get a mate score at any depth, there must be something wrong with your search. There doesn't seem to be anything wrong with the code you showed, score-wise. So the problem must be that you are not searching the correct tree.
As always, staring at the code offers almost no chance to solve it. Selectively print the moves & scores in the nodes that do not find the correct score, and you will see what the problem is. If there is a mate-in-1, and you do not find it, that should not be very many nodes: only the root and the node after the mating move.
hgm wrote: ↑Fri Aug 07, 2020 8:49 am
If there is a mate in 1 and you do not get a mate score at any depth, there must be something wrong with your search. There doesn't seem to be anything wrong with the code you showed, score-wise. So the problem must be that you are not searching the correct tree.
As always, staring at the code offers almost no chance to solve it. Selectively print the moves & scores in the nodes that do not find the correct score, and you will see what the problem is. If there is a mate-in-1, and you do not find it, that should not be very many nodes: only the root and the node after the mating move.
The main issue is solved already, read ahead ...
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
Not really. The issue was that it did not (always) get a mate score when there is a mate in 1, without accounting for mating distance. That means there must be a search bug. Accounting for mating distance will not remove that search bug. But it can of course mask it, like any change in evaluation can mask search bugs by pushing their occurrence into a pruned line.
The search still is exactly as buggy as it was before, because awarding a wrong mate score is not a search bug.
int search(int alpha, int beta, int depth, int ply)
{
    if (depth <= 0)
        return evaluate_position();

    int best_score = -50000 + ply; // mate distance
    int best_sofar;
    int legal_moves = 0;
    moves move_list[1];

    generate_moves(move_list);
    copy_board();

    for (int count = 0; count < move_list->count; count++)
    {
        if (!make_move(move_list->moves[count], all_moves))
            continue;

        legal_moves++;
        int score = -search(-beta, -alpha, depth - 1, ply + 1);
        take_back();

        if (score > best_score)
        {
            best_score = score;
            best_sofar = move_list->moves[count];
            if (ply == 0) best_move = best_sofar;

            if (score > alpha)
            {
                alpha = score;
                if (alpha >= beta) return beta;
            }
        }
    }

    if (legal_moves == 0 && !is_square_attacked(king_square[side], side ^ 1))
        best_score = 0;

    return best_score;
}
Call it with search(-50000, 50000, depth, 0);
If this doesn't work then there must be something else wrong and the problem is not related to the search.
Notice that you also only have to call copy_board() once; it shouldn't be necessary to copy the board before each move.
Personally I don't like expressions like (!depth) and (!legal_moves), but this is a matter of taste.
maksimKorzh wrote: ↑Fri Aug 07, 2020 6:13 am
Mate distance solution actually solved my issue! Thank you so much!
I understand that global variables are a bad idea, but let's say we drop alpha-beta and keep the brute-force negamax (this time I wrote it myself, no copy-paste):
int search(int depth)
{
    if (!depth)
        return evaluate_position();

    int best_score = -50000;
    int best_so_far;
    int legal_moves = 0;
    moves move_list[1];

    generate_moves(move_list);

    for (int count = 0; count < move_list->count; count++)
    {
        copy_board();
        ply++;

        if (!make_move(move_list->moves[count], all_moves))
        {
            ply--;
            continue;
        }

        legal_moves++;
        int score = -search(depth - 1);
        take_back();

        if (score > best_score)
        {
            best_so_far = move_list->moves[count];
            best_score = score;
            printf("best so far: %s%s score: %d ply: %d\n",
                   square_to_coords[get_move_source(move_list->moves[count])],
                   square_to_coords[get_move_target(move_list->moves[count])],
                   best_score, ply);
        }

        ply--;
    }

    best_move = best_so_far;

    if (!legal_moves)
    {
        if (is_square_attacked(king_square[side], side ^ 1))
            return -49000 + ply;
        else
            return 0;
    }

    return best_score;
}
at least this solution ALWAYS returns mate regardless of search depth.
In your last example and the one I just showed, the evaluation function is called at depth 0 instead of a quiescence search.
If you want it to work properly you also have to check for the mate/stalemate condition in the evaluation/quiescence, which is omitted now.