Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

klx
Posts: 179
Joined: Tue Jun 15, 2021 8:11 pm
Full name: Emanuel Torres

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by klx »

Never heard of MTD(f), but no, it's completely different. The key realization in Alpha-Emanuel-Beta is that we don't try all moves for one side. In pseudo-code it's something like the following (assuming a mate bound). Hope this makes sense, and let me know if something is not clear. This is a game-changer.

Code: Select all

long Emanuel;
int hypothesisPlayerToWin;

void iterativeDeepening() {
   // Do static evaluation to figure out hypothesisPlayerToWin, and decide on a
   // suitable value for EmanuelFactor. We can start multiple search threads here.
   long EmanuelFactor = 100; // example
   for (int depth = 1; ; ++depth) {
      Emanuel = max(historyHeuristics) / EmanuelFactor;
      alphaBeta(-INF, INF, depth);
   }
}

long alphaBeta(long alpha, long beta, int depthleft) {
   if (depthleft == 0) return quiesce(alpha, beta);
   
   movesToTry = allAvailableMoves();

   // Here is the genius approach:
   if (player == hypothesisPlayerToWin) {
      movesToTry = filter(movesToTry, move -> isBestMove(move) || score(move) > Emanuel);
   }
   
   for (move in movesToTry)  {
      score = -alphaBeta(-beta, -alpha, depthleft - 1);
      if (score >= beta) return beta;
      if (score > alpha) alpha = score;
   }
   return alpha;
}
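To make the pseudo-code concrete, here is a minimal runnable sketch of the filtering step in Python. The toy tree, leaf evaluations and history scores are all invented for illustration (this is not a real chess search), and isBestMove is approximated by the highest history score:

```python
# Toy illustration of the "Emanuel" filtering step. All values are made up.
INF = float("inf")

# Each position maps to {move_name: child_position}; leaves map to {}.
TREE = {
    "root": {"a": "A", "b": "B"},
    "A": {"a1": "A1", "a2": "A2"},
    "B": {"b1": "B1"},
    "A1": {}, "A2": {}, "B1": {},
}
LEAF_EVAL = {"A1": 3, "A2": -1, "B1": 2}               # from side 0's point of view
HISTORY = {"a": 5, "b": 1, "a1": 4, "a2": 0, "b1": 3}  # hypothetical history scores

def search(pos, alpha, beta, side, hypothesis_side, emanuel):
    children = TREE[pos]
    if not children:
        e = LEAF_EVAL[pos]
        return e if side == 0 else -e  # negamax: score for the side to move
    moves = list(children)
    if side == hypothesis_side:
        # The filtering step: keep the presumed-best move (highest history
        # score stands in for isBestMove) plus any move above the threshold.
        best = max(moves, key=HISTORY.get)
        moves = [m for m in moves if m == best or HISTORY[m] > emanuel]
    for m in moves:
        score = -search(children[m], -beta, -alpha, 1 - side, hypothesis_side, emanuel)
        if score >= beta:
            return beta
        if score > alpha:
            alpha = score
    return alpha

full = search("root", -INF, INF, 0, 0, -INF)  # threshold disabled: plain alpha-beta
cut  = search("root", -INF, INF, 0, 0, 2)     # Emanuel = 2: low-history move "b" skipped
```

With the threshold disabled the sketch reduces to ordinary fail-hard alpha-beta; raising Emanuel prunes low-history moves for the hypothesised side, which is why choosing EmanuelFactor well matters.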
[Moderation warning] This signature violated the rule against commercial exhortations.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Daniel Shawul »

klx wrote: Thu Sep 02, 2021 5:15 pm Never heard of MTD(f), but no it's completely different. The key realization in Alpha-Emanuel-Beta is that we don't try all moves for one side. In pseudo-code it's something like below (assuming mate bound). Hope this makes sense and let me know if something is not clear. This is a game-changer.

I just told you that AB is proven to be optimal. For some time SSS* was thought to search fewer nodes, which it did! However, it has been shown that SSS* reformulated as a depth-first search is equivalent to MTD(f), i.e. PVS with tiny aspiration windows.
So there is nothing to improve there, if you understand what a mathematically proven fact is.

If you are talking about unsound pruning techniques (e.g. searching just one move per ply), then that is a different proposition altogether.
MCTS comes close to being the most aggressive pruning technique that is proven to converge to minimax with more simulations.
So here is your chance to prove it, if you would like to engage in productive discussions.
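For readers who have not met MTD(f): the zero-window driver referred to above can be sketched as below. The toy tree and fail-soft negamax are invented for illustration; real engines add a transposition table so the probes share work.

```python
# MTD(f) sketch: a sequence of zero-window alpha-beta probes converging
# on the minimax value. Tree and scores are invented toys.
INF = 1 << 30

TREE = [[3, 5], [2, 8], [1, 4]]  # nested lists; ints are leaf scores

def negamax(node, alpha, beta):
    """Fail-soft alpha-beta; leaf scores are for the side to move there."""
    if isinstance(node, int):
        return node
    best = -INF
    for child in node:
        val = -negamax(child, -beta, -alpha)
        best = max(best, val)
        alpha = max(alpha, val)
        if alpha >= beta:
            break
    return best

def mtdf(root, first_guess):
    g, lower, upper = first_guess, -INF, +INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = negamax(root, beta - 1, beta)  # zero-window probe at beta
        if g < beta:
            upper = g                      # probe failed low
        else:
            lower = g                      # probe failed high
    return g
```

Every probe is a null-window search, so MTD(f) behaves like PVS driven entirely by tiny aspiration windows, which is the equivalence mentioned above.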
Henk
Posts: 7220
Joined: Mon May 27, 2013 10:31 am

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Henk »

1) How do you implement isBestMove(move)? It is not defined in your pseudocode.
Oh wait, is it something like move value > current best move value? Or is it the best move of the previous iteration?
2) What is max(historyHeuristics)? What is historyHeuristics, a list of values, I suppose? What are those values?

Why can you skip all moves below gamma = Emanuel?
Last edited by Henk on Thu Sep 02, 2021 6:10 pm, edited 2 times in total.
smatovic
Posts: 2663
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by smatovic »

nevermind.
Last edited by smatovic on Thu Sep 02, 2021 6:17 pm, edited 1 time in total.
Pio
Posts: 334
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Pio »

Daniel Shawul wrote: Thu Sep 02, 2021 5:45 pm
I just told you that AB is proven to be optimal. For some time SSS* was thought to search fewer nodes, which it did! However, it has been shown that SSS* reformulated as a depth-first search is equivalent to MTD(f), i.e. PVS with tiny aspiration windows.
So there is nothing to improve there, if you understand what a mathematically proven fact is.

If you are talking about unsound pruning techniques (e.g. searching just one move per ply), then that is a different proposition altogether.
MCTS comes close to being the most aggressive pruning technique that is proven to converge to minimax with more simulations.
So here is your chance to prove it, if you would like to engage in productive discussions.
AB is not proven to be optimal. It is, however, proven to be really bad on some types of trees, so bad that deeper searches yield worse results than shallower searches on average.
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Daniel Shawul »

Pio wrote: Thu Sep 02, 2021 6:14 pm
AB is not proven to be optimal. It is, however, proven to be really bad on some types of trees, so bad that deeper searches yield worse results than shallower searches on average.
Were you in a cave? Judea Pearl proved the optimality of alpha-beta 40 years ago:
https://dl.acm.org/doi/10.1145/358589.358616
Henk
Posts: 7220
Joined: Mon May 27, 2013 10:31 am

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Henk »

"Emanuel can then be progressively lowered, so that we start with a big Emanuel, and if the hypothesis fails, we try a smaller Emanuel. More specifically then -- how do we choose Emanuel"

Is this something like: can I checkmate? No. Can I win a rook? No. Can I win a pawn? ... Am I checkmated?
It looks similar to aspiration searches, but as a linear search. Would a binary search not be better?
Still waiting for your definition of historyHeuristics.
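For comparison, a plain aspiration loop can be sketched like this. The tree, margin and scores are invented toys; negamax here is fail-soft alpha-beta:

```python
# Aspiration-window sketch: search a small window around a guess,
# widen and retry on a fail-high or fail-low. All values are made up.
INF = 1 << 30

TREE = [[3, 5], [2, 8], [1, 4]]  # nested lists; ints are leaf scores

def negamax(node, alpha, beta):
    if isinstance(node, int):
        return node  # leaf score for the side to move there
    best = -INF
    for child in node:
        val = -negamax(child, -beta, -alpha)
        best = max(best, val)
        alpha = max(alpha, val)
        if alpha >= beta:
            break
    return best

def aspiration(root, guess, margin=10):
    """Re-search with a widened window until the score falls inside it."""
    lo, hi = guess - margin, guess + margin
    while True:
        score = negamax(root, lo, hi)
        if score <= lo:            # fail low: lower the window and retry
            lo = score - margin
        elif score >= hi:          # fail high: raise the window and retry
            hi = score + margin
        else:
            return score           # true value inside the window
```

Exponential widening, or a binary chop between the failed bound and infinity, are common variants; which converges faster depends on how far off the initial guess is.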
Pio
Posts: 334
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Pio »

Daniel Shawul wrote: Thu Sep 02, 2021 6:21 pm
Were you in a cave? Judea Pearl proved the optimality of alpha-beta 40 years ago:
https://dl.acm.org/doi/10.1145/358589.358616
Just google “search pathology”
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by Daniel Shawul »

Pio wrote: Thu Sep 02, 2021 6:33 pm
Just google “search pathology”
That is a strawman: you said "AB is not proven to be optimal", and I gave you a paper written 40 years ago that proves its optimality.
Search pathology has nothing to do with the optimality of AB.
R. Tomasi
Posts: 307
Joined: Wed Sep 01, 2021 4:08 pm
Location: Germany
Full name: Roland Tomasi

Re: Alpha-Emanuel-Beta Search -- Enormous Improvement of Alpha-Beta!

Post by R. Tomasi »

I guess this is one of those instances where one has to look carefully at how "optimality" is defined. Without any further information, optimality with respect to game-tree search is understood to be defined as in the paper Daniel linked.