* The sampling distribution is log-normal, which follows from the fact that the perft estimate is a product of branching factors.
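A quick way to see the log-normal shape (a toy sketch, not a real chess tree: branching factors are drawn uniformly from a chess-like range, which is my own placeholder): the log of the estimate is a sum of log branching factors, so by the central limit theorem it is roughly normal, which makes the estimate itself log-normal and right-skewed, with the mean well above the median.

```python
import math
import random

def perft_estimate(depth, rng):
    """One random-walk perft estimate: walk a single line down to `depth`,
    multiplying the branching factor seen at each ply (toy model)."""
    est = 1.0
    for _ in range(depth):
        est *= rng.randint(2, 60)   # hypothetical move counts, not a real position
    return est

rng = random.Random(42)
samples = sorted(perft_estimate(6, rng) for _ in range(10_000))
mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
# Right-skew signature of a log-normal: the mean sits far above the median.
print(f"mean {mean:.3g}  median {median:.3g}  ratio {mean / median:.2f}")
```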

* Non-uniform sampling of moves is better than uniform sampling. This becomes clear when the simulations are started from a larger depth, say 3. A depth-first searcher naturally simulates each of the 20 root moves a different number of times, while a best-first searcher proportions simulations by sub-tree size (number of leaf nodes, to be exact). For the latter, the right proportioning of simulations is not so clear. I have about 5 formulas that I tested; I forgot the specific motive for each, but in general they proportion by some function of the tree size (assumed proportional to the variance). Which one is best is still unsettled. See below for more.
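One simple version of "proportion by sub-tree size" for the best-first case, as a sketch only: the leaf counts and the linear weighting are placeholders, since the post leaves the best formula open.

```python
def allocate(sizes, total):
    """Split `total` simulations across root moves in proportion to their
    estimated sub-tree sizes (leaf counts). Rounds, but keeps every move >= 1."""
    s = sum(sizes)
    return [max(1, round(total * sz / s)) for sz in sizes]

# hypothetical leaf counts for four root moves
print(allocate([400, 380, 1200, 50], 1000))   # -> [197, 187, 591, 25]
```

Any of the other candidate weightings (e.g. sqrt of the size) just replaces the `total * sz / s` term.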

* Combining depth-first and best-first estimators proved to be better. Adding a selective layer also helped, as in:

`Dynamic tree (different depth) + K full depth + L selective depth + Monte-carlo (exactly 1 move)`

* Improvements for the Monte-Carlo part include:

- proportioning by heuristics (captures etc.)

- antithetic variables for variance reduction

- hashing
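The antithetic idea in its textbook form, illustrated on a generic monotone integrand rather than on perft itself (the exact move pairing is engine-specific, so this is only an analogy): each uniform draw u is paired with 1 - u, and the negative correlation between the pair cancels part of the noise.

```python
import math
import random

def mc_plain(f, n, rng):
    """Plain Monte-Carlo estimate of E[f(U)], U uniform on (0,1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n, rng):
    """Same estimate, but each draw u is paired with its antithetic 1-u.
    For monotone f the pair is negatively correlated, so the variance drops."""
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / n

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

f = math.exp                     # monotone test integrand, E[f(U)] = e - 1
rng = random.Random(0)
plain = [mc_plain(f, 1000, rng) for _ in range(200)]
anti = [mc_antithetic(f, 1000, rng) for _ in range(200)]
print(f"plain var {variance(plain):.2e}  antithetic var {variance(anti):.2e}")
```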

-----------------------------------------------------------------

I have some doubts about proportioning. Let me start from Michel's formula.

To restate it: if one wants to estimate x_1+...+x_n, and one has unbiased estimators X_i for x_i with variance sigma_i^2, then (if I calculated correctly) one should choose X_i with probability

p_i = mu sqrt(x_i^2 + sigma_i^2)

where mu should be adjusted such that the sum of the p_i is 1.

I do not remember why p_i depended on the mean value x_i, but thinking about it now from scratch I get p_i = mu sqrt(sigma_i^2), i.e. weighting by the standard deviation alone. The problem is one of variance reduction. Say the original variances are V_i = sigma_i^2. Given N simulations in total, with N_i for each move, how should one proportion them to arrive at the minimum variance? With N_i simulations each variance is reduced to V_i/N_i, so:


```
Objective: Minimize(V1/N1+V2/N2+...Vn/Nn)
Constraint: N = N1+N2+...+Nn
```
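A Lagrange multiplier on this program gives N_i = N sqrt(V_i) / sum_j sqrt(V_j), i.e. allocation proportional to the standard deviation; the same Cauchy-Schwarz step applied to the one-draw estimator Y = X_I / p_I (pick index I with probability p_I) recovers Michel's p_i = mu sqrt(x_i^2 + sigma_i^2). A numeric sanity check of both, with made-up means and variances:

```python
import math
import random

def alloc_variance(V, N_alloc):
    """Objective from above: V1/N1 + ... + Vn/Nn."""
    return sum(v / n for v, n in zip(V, N_alloc))

def one_draw_variance(x, sig2, p):
    """Variance of Y = X_I / p_I with P(I=i) = p_i, E[X_i] = x_i, Var[X_i] = sig2_i."""
    return sum((xi * xi + s) / pi for xi, s, pi in zip(x, sig2, p)) - sum(x) ** 2

V = [4.0, 25.0, 0.5, 9.0]                      # made-up per-move variances
N = 1000.0
s = sum(math.sqrt(v) for v in V)
N_opt = [N * math.sqrt(v) / s for v in V]      # Lagrange: N_i proportional to sqrt(V_i)

x = [3.0, 10.0, 1.0]                           # made-up means
sig2 = [4.0, 25.0, 0.5]
w = [math.sqrt(xi * xi + si) for xi, si in zip(x, sig2)]
p_opt = [wi / sum(w) for wi in w]              # Michel: p_i = mu*sqrt(x_i^2+sigma_i^2)

rng = random.Random(1)
for _ in range(1000):
    q = [rng.random() for _ in V]
    alloc = [N * qi / sum(q) for qi in q]
    assert alloc_variance(V, alloc) >= alloc_variance(V, N_opt) - 1e-9
    r = [rng.random() for _ in x]
    r = [ri / sum(r) for ri in r]
    assert one_draw_variance(x, sig2, r) >= one_draw_variance(x, sig2, p_opt) - 1e-9
print("both optima confirmed against 1000 random alternatives")
```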

Indeed the proof becomes simple that way. Anyway, is the formula with the mean used only in the Monte-Carlo part, where we don't save the means? I think for the in-tree updates it may not be necessary. Of course, the variance may be proportional to the tree size (and the tree size to the mean), so the effect the mean has in Michel's formula may still be present indirectly.

Daniel