Daniel Shawul wrote:
> It doesn't matter how long you search one variation (be selective about it). Your nominal depth is the one you started the search with, d. So with this method you are able to compare many different algorithms, which you can't if you conjure up some other definition. Your example is for humans who don't use IID, but if you assume you started out to search 5 plies and went on extending that one line to 10, then your EBF is
>
>     EBF = (10+5+5)^(1/5)
>
> so it doesn't matter how long you extend one line.

There are two ways to look at this example:
* either you consider that d = 10, and aggressive forward-pruning methods were used. In computer terms, my brain used move-count pruning after 3 moves at the root, and after 1 move at every node visited by the search. It also used a 1-ply reduction at each node of the 5-ply variations, which are nominally 10 plies but end up being 5 because of the reductions piling up in these two branches.
* or you consider that I did a 5-ply search, and extended just one variation by 5 plies.
The formula gives very different results in the two cases.
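To make the divergence concrete, here is a small sketch. The node count N is an assumed figure purely for illustration, not from the example above; the point is that the same tree produces two very different EBF values depending on which nominal depth you divide by.

```python
# Hypothetical illustration: the same search tree yields very different
# EBF values depending on the nominal depth chosen.
# N (total node count) is an assumed number, not from the original post.

def ebf(nodes, depth):
    """Effective branching factor: the b such that b**depth == nodes."""
    return nodes ** (1.0 / depth)

N = 1000  # assumed total node count for the example tree

print(ebf(N, 10))  # reading 1: nominal depth 10 -> EBF ~ 2.0
print(ebf(N, 5))   # reading 2: nominal depth 5  -> EBF ~ 4.0
```

Same search, same nodes; only the bookkeeping convention changed, and the reported EBF doubled.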
The same reasoning applies to the qsearch. If the maximum QS depth is 10 (i.e. 10 plies starting from a leaf of the main search), we can either consider that depth = d + 10, or that the real depth is still d and the QS plies are extensions to be discarded.
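The qsearch version of the ambiguity can be sketched the same way. The node count and depths below are assumptions for illustration only:

```python
# Assumed figures: d = 10 plies of main search, up to 10 more plies of
# qsearch, and N total nodes (main search + qsearch combined).

def ebf(nodes, depth):
    """Effective branching factor: the b such that b**depth == nodes."""
    return nodes ** (1.0 / depth)

N = 10**6            # assumed node count, qsearch nodes included
d, qs_max = 10, 10   # nominal depth and maximum qsearch depth (assumed)

print(ebf(N, d))           # QS plies treated as extensions:   EBF ~ 4.0
print(ebf(N, d + qs_max))  # QS plies counted toward depth:    EBF ~ 2.0
```

Again, nothing about the search changed; only the convention for what counts as "depth" did.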
I think LMR exposes a way to abuse this (overly basic) definition of EBF. So people parade around with their low EBF and how it has improved from 5-6 to 1.7 over the years. That's an apples-to-pears comparison, IMO.
In the past, engines used lots of extensions. Nowadays they use non-reductions, which are relative extensions...
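The "parade" effect can be sketched too. The numbers below are assumed for illustration: LMR searches most moves at reduced depth, so an engine reaches a given *nominal* depth with far fewer nodes, and nodes**(1/d) drops even though the tree became narrower-and-shallower rather than genuinely less branchy.

```python
# Hedged illustration of the EBF comparison problem with LMR.
# Node counts are assumed toy figures, not measurements from any engine.

def ebf(nodes, depth):
    """Effective branching factor: the b such that b**depth == nodes."""
    return nodes ** (1.0 / depth)

d = 10
full_width_nodes = 2 ** 10  # a b=2 tree searched full-width to depth 10
lmr_nodes = 2 ** 7          # same nominal depth 10, most moves reduced

print(ebf(full_width_nodes, d))  # ~2.0
print(ebf(lmr_nodes, d))         # ~1.62 -- "better" EBF, same nominal d
```

Comparing the 1.62 against the 2.0 as if both trees were honest full-width depth-10 searches is exactly the apples-to-pears comparison described above.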