Q-Search efficiency

Discussion of chess software programming and technical issues.

Moderator: Ras

hgm
Posts: 28353
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Q-Search efficiency

Post by hgm »

I guess this idea can be carried even further. Implementations that research a daughter immediately, like

Code: Select all

score = -Search(); // search with reduced depth or aspirated window
if(score > alpha) score = -Search(); // research unreduced or with open window
are of course inefficient, because returning from the first call has needlessly thrown away a lot of information (such as the move list), which then has to be reconstructed in the second call (by generating the moves a second time, and sorting them again). An efficient implementation would build the re-search into a single Search() call, passing it the information for how much extra depth it should search, or by how much it should lower alpha if it doesn't fail high (i.e. pass not just the reduced depth and the aspirated alpha, but also the unreduced depth and the true alpha). Then it can simply redo the loop over moves using those.

But doing the loop over moves twice when you do not fail high means that all moves have been searched (and consequently move lists have been created for all daughters). If you redo that loop, you visit the same daughters again. So you would benefit from leaving all their move lists on the move stack, remembering exactly where each one is, and passing that information to the daughters so they can reuse the lists when they are searched a second time. Also in the case of Internal Iterative Deepening, where you redo the loop over moves several times, you would benefit from hanging on to the move lists of all daughters.

Not only keeping your own move list on the move stack, but also those of all your daughters, would of course inflate the move stack by a factor of 40 or so, but with current memory sizes that is hardly an issue.