Caching generated moves list in recursive searches

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

Rein Halbersma
Posts: 685
Joined: Tue May 22, 2007 9:13 am

Caching generated moves list in recursive searches

Post by Rein Halbersma » Sun May 10, 2015 8:25 am

Are there any engines that cache the generated move list? What I mean by that is that during internal iterative deepening, multi-probe cut, PVS and LMR re-searches, the position being re-searched is the same one at the current ply level inside a recursive search() function.

For iterative searchers (I think HGM once wrote that he does not use recursion for IID) there is a direct saving, but I wonder if there are any engines that pass a move list to their search() function. Stockfish, e.g., does not use this kind of optimization (except to pass around an excluded move inside its Stack* parameter). The easiest way to implement this would probably be to give search() a pair of pointers into the generated moves (or a pointer and a count). I wonder if anyone has ever tried this.

User avatar
hgm
Posts: 23455
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Caching generated moves list in recursive searches

Post by hgm » Sun May 10, 2015 9:45 am

Indeed, most of my engines don't do any re-searches. Having two calls to search the same node directly after each other (with other window bounds or depth) is just a poor implementation, which does a lot of work in duplicate. So for PVS as well as LMR re-searches I always use 'self-deepening': the window bounds or depth are never messed with in the parent node, but instead a parameter 'red' is passed to instruct the daughter node how to mess with them. A single parameter will do, as a node is either PV or a late move, but never both at the same time. So for late moves I pass the reduction (in ply) the move optionally deserves, or -1 when it was the PV move. When it was not a PV move you start by collapsing the window (setting alpha = beta-1) and subtracting the reduction (depth -= red). If the final iteration of the IID loop fails low, and alpha != originalAlpha || red > 0, an extra iteration is done with the original window or depth. Like:

Code:

Search(int origAlpha, int beta, int depth, int red)
{
  int alpha, newRed;
  GenerateMoves();
  if(red > 0) depth -= red;
  for(iterDepth = startDepth; iterDepth <= depth; ) {
    alpha = origAlpha;
    newRed = -(red == -1);
    if(red >= 0) alpha = beta-1;
    for(ALL_MOVES) {
      MakeMove();
      score = -Search(-beta, -alpha, iterDepth-1, newRed);
      UnMake();
      if(score > alpha) {
        alpha = score;
        if(score >= beta) goto cutoff;
      }
      ...
      newRed = 0;
      if(LATEMOVES) newRed = REDUCTION;
    }
    if(red == 0 && origAlpha != beta-1) red = -1;
    else iterDepth = NextDepth();
    if(red > 0) { depth += red; red = 0; } // preceding late move failed high
   cutoff:
    ...
  }
}

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 7:17 pm

Re: Caching generated moves list in recursive searches

Post by mcostalba » Sun May 10, 2015 12:50 pm

Rein Halbersma wrote: Are there any engines that cache the generated move list? What I mean by that is that during internal iterative deepening, multi-probe cut, PVS and LMR re-searches, the position being re-searched is the same one at the current ply level inside a recursive search() function.
In SF the move list is allocated on the stack, so you can't really do this without copying the moves to permanent storage.

Maybe you could use global storage as a backup when generating the moves, but this seems quite complex to me, considering also the SMP case and the fact that you have to allocate MAX_MOVES * MAX_PLY storage in advance.
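A rough sketch of the kind of backing store Marco describes, assuming C11; the MAX_MOVES and MAX_PLY sizes and all names are illustrative, not Stockfish code. Making the slab thread-local sidesteps the SMP sharing problem at the cost of one slab per thread:

```c
#include <assert.h>

typedef int Move;
enum { MAX_MOVES = 256, MAX_PLY = 128 };

/* One slab per search thread; with 4-byte moves this is
 * 256 * 128 * 4 bytes = 128 KiB per thread, allocated up front. */
typedef struct {
    Move moves[MAX_PLY][MAX_MOVES];
    int  count[MAX_PLY];
} MoveStack;

static _Thread_local MoveStack move_stack;   /* C11 thread-local storage */

/* Accessors: each ply level owns one row of the slab */
static Move *moves_at(int ply) { return move_stack.moves[ply]; }
static int  *count_at(int ply) { return &move_stack.count[ply]; }
```

With this layout a re-search at a given ply just reads back its own row instead of regenerating, and no two threads ever touch the same slab.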

matthewlai
Posts: 791
Joined: Sun Aug 03, 2014 2:48 am
Location: London, UK
Contact:

Re: Caching generated moves list in recursive searches

Post by matthewlai » Sun May 10, 2015 4:20 pm

With efficient move generation (bit extraction from bitboards), I would imagine move generation is only slightly slower than the rate at which you can read the list back from memory, since the calculations can often be interleaved with memory accesses.

And if you are not careful and get a cache miss, reading the cached list will be much, much slower than normal move generation.
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.

User avatar
hgm
Posts: 23455
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Caching generated moves list in recursive searches

Post by hgm » Sun May 10, 2015 5:15 pm

Surely the probability of a cache miss will be much larger when you store the same move list twice in two different places. It doesn't only waste time, it also wastes memory space. Plus you have to start sorting the move list from scratch when you generate it a second time.

matthewlai
Posts: 791
Joined: Sun Aug 03, 2014 2:48 am
Location: London, UK
Contact:

Re: Caching generated moves list in recursive searches

Post by matthewlai » Sun May 10, 2015 5:19 pm

hgm wrote: Surely the probability of a cache miss will be much larger when you store the same move list twice in two different places. It doesn't only waste time, it also wastes memory space. Plus you have to start sorting the move list from scratch when you generate it a second time.
Yes, but both those places are on the stack (usually), and the stack is usually in cache all the time.

User avatar
hgm
Posts: 23455
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Caching generated moves list in recursive searches

Post by hgm » Sun May 10, 2015 5:23 pm

The move stack is just as likely to be in cache as the hardware stack. The caching algorithm doesn't make any distinction between one stack and another.

matthewlai
Posts: 791
Joined: Sun Aug 03, 2014 2:48 am
Location: London, UK
Contact:

Re: Caching generated moves list in recursive searches

Post by matthewlai » Sun May 10, 2015 5:25 pm

hgm wrote: The move stack is just as likely to be in cache as the hardware stack. The caching algorithm doesn't make any distinction between one stack and another.
Ah, I had the impression that it would be in a separate hash table for some reason.

User avatar
hgm
Posts: 23455
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller
Contact:

Re: Caching generated moves list in recursive searches

Post by hgm » Sun May 10, 2015 6:13 pm

Well, I don't know exactly what Rein had in mind. But the cases he mentions (IID, PVS, LMR) are all cases where a naive implementation would either call the same node twice in a row, or call a node from which it just returned. In both cases you could pass a pointer to the existing move list on a normal move stack, as it should not have been overwritten yet.

Of course it would be better to use self-deepening; that would save you all the other overhead involved in starting a new node as well.

syzygy
Posts: 4446
Joined: Tue Feb 28, 2012 10:56 pm

Re: Caching generated moves list in recursive searches

Post by syzygy » Sun May 10, 2015 6:20 pm

What percentage of nodes are we talking about anyway?

Saving a bit of work at no cost is great, but carrying around extra arguments between function calls to save a minor bit of work in 1% or maybe 0.1% of all nodes will not help.

Post Reply