Never heard of MTD(f), but no, it's completely different. The key realization in Alpha-Emanuel-Beta is that we don't try all moves for one side. In pseudocode it's something like the below (assuming a mate bound). Hope this makes sense, and let me know if anything is unclear. This is a game-changer.
long Emanuel;
int hypothesisPlayerToWin;

void iterativeDeepening() {
    // Do static evaluation to figure out hypothesisPlayerToWin, and decide on a
    // suitable value for EmanuelFactor. We can start multiple search threads here.
    long EmanuelFactor = 100; // example
    for (int depth = 1; ; ++depth) {
        Emanuel = max(historyHeuristics) / EmanuelFactor;
        alphaBeta(-INF, INF, depth);
    }
}

long alphaBeta(long alpha, long beta, int depthleft) {
    if (depthleft == 0) return quiesce(alpha, beta);
    movesToTry = allAvailableMoves();
    // Here is the genius approach: the hypothesised winner only searches moves
    // that are either the current best move or score above the Emanuel bound.
    if (player == hypothesisPlayerToWin) {
        movesToTry = filter(movesToTry, move -> isBestMove(move) || score(move) > Emanuel);
    }
    for (move in movesToTry) {
        score = -alphaBeta(-beta, -alpha, depthleft - 1);
        if (score >= beta) return beta;   // fail-hard beta cutoff
        if (score > alpha) alpha = score; // new best move
    }
    return alpha;
}
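For concreteness, here is one possible toy reading of the pseudocode in runnable Python. The tree, static_estimate() (standing in for score(move)), and the keep-the-first-move rule (standing in for isBestMove()) are all assumptions, since neither helper is defined above; this is a sketch of the idea, not klx's actual method.

```python
INF = float("inf")

# A tiny hand-made game tree: internal nodes are lists of children, leaves are
# integer scores from the perspective of the side to move at that leaf.
TREE = [[3, 5], [2, 9]]

def static_estimate(node):
    # Crude stand-in for score(move): negate a leaf's value (it is scored for
    # the side to move after the move), and return 0 for unexplored subtrees.
    return -node if isinstance(node, int) else 0

def emanuel_search(node, alpha, beta, is_hypothesis_player, emanuel):
    if isinstance(node, int):
        return node  # leaf: static score for the side to move
    children = node
    if is_hypothesis_player:
        # The filter from the pseudocode: the hypothesised winner only tries
        # moves whose static estimate beats Emanuel. Keeping index 0 stands in
        # for isBestMove() so the move list is never empty.
        children = [c for i, c in enumerate(children)
                    if i == 0 or static_estimate(c) > emanuel]
    for child in children:
        score = -emanuel_search(child, -beta, -alpha,
                                not is_hypothesis_player, emanuel)
        if score >= beta:
            return beta   # fail-hard cutoff, as in the pseudocode
        if score > alpha:
            alpha = score
    return alpha

# With Emanuel = -INF nothing is filtered, so this is plain alpha-beta.
full = emanuel_search(TREE, -INF, INF, True, -INF)
pruned = emanuel_search(TREE, -INF, INF, True, 100)
print(full, pruned)  # prints 3 3 on this toy tree
```

Note that whenever the filter happens to remove the refutation move, the returned value is no longer a sound minimax bound; that is exactly the soundness question the replies raise.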
klx wrote: ↑Thu Sep 02, 2021 5:15 pm
I just told you that AB is proven to be optimal, even though for some time SSS* was thought to search fewer nodes (which it did!).
However, it has been shown that SSS* reformulated as a depth-first search is equivalent to MTD(f), i.e. PVS with tiny aspiration windows.
So there is nothing to improve there, if you understand what a mathematically proven fact is.
If you are talking about unsound pruning techniques (e.g. searching just one move per ply), then that is a different proposition altogether.
MCTS comes close to being the most aggressive pruning technique that is proven to converge to minimax with more simulations.
So here is your chance to prove you would like to engage in productive discussions.
1) How do you implement isBestMove(move)? It is not defined in your pseudocode. Or wait, is it something like "move value > current best move value"? Or the best move of the previous iteration?
2) What is historyHeuristics in max(historyHeuristics)? A list of values, I suppose, but values of what?
And why can you skip all moves scoring below gamma = Emanuel?
Last edited by Henk on Thu Sep 02, 2021 6:10 pm, edited 2 times in total.
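Since the MTD(f) equivalence keeps coming up: a minimal sketch of the driver loop, assuming a fail-soft alpha-beta over the same kind of toy integer tree used above. All names here are illustrative; a real implementation would add a transposition table, without which the repeated zero-window searches are wasteful.

```python
INF = float("inf")

# Toy tree: lists are internal nodes, ints are leaf scores for the side to move.
TREE = [[3, 5], [2, 9]]

def negamax(node, alpha, beta):
    # Fail-soft alpha-beta: may return values outside [alpha, beta], which is
    # what MTD(f) needs to tighten its bounds.
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

def mtdf(root, first_guess=0):
    # Converge on the minimax value via a sequence of zero-window searches.
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = negamax(root, beta - 1, beta)
        if g < beta:
            upper = g   # search failed low: g is an upper bound
        else:
            lower = g   # search failed high: g is a lower bound
    return g

print(mtdf(TREE))  # prints 3, the minimax value of the toy tree
```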
AB is not proven to be optimal. It is, however, proven to be really bad on some types of trees: so bad that deeper searches yield worse results than shallower searches on average.
"Emanuel can then be progressively lowered, so that we start with a big Emanuel, and if the hypothesis fails, we try a smaller Emanuel. More specifically then -- how do we choose Emanuel"
Is this something like: can I checkmate? No. Can I win a rook? No. Can I win a pawn? ... Am I checkmated?
Looks similar to aspiration searches, but as a linear scan. Better to use a binary search, or not?
Still waiting for your definition of historyHeuristics.
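On the "better use a binary search" point: in practice, engines search a narrow aspiration window around the previous iteration's score and widen it exponentially on failure, rather than scanning bounds linearly. A minimal sketch, assuming a toy fail-soft negamax and an illustrative widening policy; none of this comes from the thread's pseudocode.

```python
INF = float("inf")

# Toy tree: lists are internal nodes, ints are leaf scores for the side to move.
TREE = [[3, 5], [2, 9]]

def negamax(node, alpha, beta):
    # Fail-soft alpha-beta, so a failed search still returns a usable bound.
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    return best

def aspiration(root, guess, delta=1):
    # Search a narrow window around the previous score; on a fail low or fail
    # high, widen the window exponentially and re-search.
    alpha, beta = guess - delta, guess + delta
    while True:
        score = negamax(root, alpha, beta)
        if score <= alpha:          # fail low: widen downward
            alpha -= delta
            delta *= 2
        elif score >= beta:         # fail high: widen upward
            beta += delta
            delta *= 2
        else:
            return score            # exact value inside the window

print(aspiration(TREE, guess=0))  # prints 3 after a few re-searches
```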
That is a strawman. You said "AB is not proven to be optimal", and I gave you a paper written 40 years ago that proves its optimality.
Search pathology has nothing to do with the optimality of AB.
I guess this is one of those instances where one has to look carefully at how "optimality" is defined. Without further information, optimality with respect to game-tree search is understood to be defined as in the paper linked by Daniel.