Delayed Fullwidth Search

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Desperado

Delayed Fullwidth Search

Post by Desperado » Fri Apr 29, 2011 8:59 pm

Hello everyone !

Code: Select all


NORMAL ITERATION PROCESS (EBF==2,base 16):
==========================================

iteration 5:[16 32  64 128 256]
iteration 6: 16 32  64 128 256 512
iteration 7: 16 32  64 128 256 512 1024
iteration 8: 16 32  64 128 256 512 1024 2048
iteration 9: 16 32  64 128 256 512 1024 2048 4096

example: linear reduction scheme like -> 

   newdepth = depth - 1;


DELAYED FULLWIDTH SEARCH (EBF<=2, base 16):
==========================================

iteration 5:  4  8  16  32  64
iteration 6:  8 16  32  64 128  256
iteration 7: 10 20  40  80 160  320 640
iteration 8: 12 24  48  96 192  384 768  1536
iteration 9:[16 32  64 128 256] 512 1024 2048 4096
iteration10: 16 32 ...

 -> base reached, width cannot "grow" further, ebf==2
 -> gain until ebf (growing with iteration) becomes the upper limit
 -> quantity + quality rise with iteration depending on "WIDTH" until the limit.

example: nonlinear reduction scheme like ->

   newdepth = depth - (depth<7 ? bsr64(movecount) : 1);


GOAL&#58;
==========================================

Based on the assumption of well-ordered trees, "Delayed Fullwidth Search"
(like alpha-beta) will drastically gain plies, and therefore strength.
The goal, of course, is to maximize the delay, which in the end depends strongly
on the reduction scheme (I am not talking about pruning/reduction techniques,
but about how the tree is explored in general).


@Bob
So, my first question goes to Bob (take a deep breath pls :lol: ).

In the other thread you strongly denied that _widening_ the search
can be a search mechanism. I don't agree, because of the examples
given above. Would you be so kind as to explain, using the numbers
and arguments given above, what I and maybe some others don't see?

@Marco

Well, I have been playing around with this in my engine Nemo for months, with mixed feelings.
First I would like to know whether I am speaking of the same thing
as you are, in the context of "widening".

thx,Michael

bob

Re: Delayed Fullwidth Search

Post by bob » Fri Apr 29, 2011 10:27 pm

Desperado wrote: ...
My point is that this is not about "widening" the tree. If you widen the tree, how can you also go deeper? In the old days (circa 1960s, a la Greenblatt), workable programs used a plausible-move generator and tossed out moves at each node based on the depth. For example, search 16 moves at ply 1, 12 at plies 2 and 3, 8 at plies 4 and 5, and 4 at ply 6. I'd call that a narrow tree that relies _heavily_ on forward pruning. Now what if you want to go to depth 7? You have to reduce those numbers if you want another ply.

Take the current discussion about a (say) 20-ply search where you prune only in the last 4 plies: when you go to the next ply, yes, the 17th ply "widens" a bit, because now you are pruning at plies 18-21 rather than 17-20. But that is not the "width" one sees in tree searching. Width is more naturally associated with EBF. And the EBF is not growing as depth grows, or you would see a serious limit when your EBF reaches 5 or whatever. So if the EBF is constant from depth 1 to depth 24, why would one want to call that tree "getting wider with depth"? I would characterize it as "getting more accurate with depth", which is actually the point.

The tree _always_ gets wider, if you try to draw it, as you go deeper. Each succeeding ply doubles the size while adding only one additional ply. Whether you add those extra nodes closer to the root or closer to the tips is a function of your pruning and reduction ideas. With an EBF of 2.0, clearly the tree is getting 2x "wider" as you advance. If it gets no "wider", you suddenly have an O(1) searcher. You just push each of the specific set of positions you can search out to some huge depth... And the deeper you go, the more error you incur.

Desperado

Re: Delayed Fullwidth Search

Post by Desperado » Fri Apr 29, 2011 11:27 pm

My point is that this is not about "widening" the tree. ...
Well, the terminology is confusing, of course, because the tree itself, to
a fixed depth, is not widened; it is just explored more accurately, depending
on depth.

If you widen the tree how can you also go deeper?
imho, this is the one point you miss: we _are already_ deeper.
If you compare the given examples, ply 8 will be reached (finished) faster.
The softer the transition over the plies (compared to a depth<n condition),
the better the ratio between accuracy and gain of plies may be.
And the EBF is not growing as depth grows, or you would see a serious limit when your EBF reaches 5 or whatever. So if EBF is constant from depth 1 to depth 24 ...
Here my point is the following: the EBF is growing, but
not without limit.
It gets closer and closer to the "real" EBF.
Once again, above you can see that depths 5-8 have EBF < 2; from depth 9
on it is 2 (let's assume the searcher searches with an EBF of 2).
I would characterize it as "getting more accurate with depth" which is actually the point.
That is definitely true, but certainly not all there is to say about this mechanism.
It can also be viewed as verification of the _inner tree_, while we
are already examining a more complex problem (search space).

Michael

bob

Re: Delayed Fullwidth Search

Post by bob » Sat Apr 30, 2011 2:10 am

Desperado wrote:
My point is that this is not about "widening" the tree. ...
Well, the terminology is confusing, of course, because the tree itself, to
a fixed depth, is not widened; it is just explored more accurately, depending
on depth.
Depends on your definition of "widening" and "deepening". If deepening is top-to-bottom and width is left-to-right, wouldn't you have to agree that the shape of the tree continually gets wider as it gets deeper? You can play games by reducing the widening near the tips via pruning, but all that does is delay the widening. For example, without reductions the dot plot of the tree looks like a triangle with the vertex pointing up, representing the root, while with LMR it is more like a pair of triangles, bases touching, with one vertex at the root and the other vertex at the deepest tip branch touched. But here's the key: the area of either has to be the same, because we have a fixed search space. One goes deeper, but more speculatively. The other goes less deeply, but highly accurately for what it can "see".

If you widen the tree how can you also go deeper?
imho, this is the one point you miss: we _are already_ deeper.
If you compare the given examples, ply 8 will be reached (finished) faster.
The softer the transition over the plies (compared to a depth<n condition),
the better the ratio between accuracy and gain of plies may be.
I am not talking iteration-by-iteration. I am talking about "drop-dead terminate the search we are out of time". I have always agreed that as we go deeper, we gain accuracy. If you talk purely about plies, the LMR/pruning plies are worth less than the full-width plies. But we can get so many more of those LMR/futility plies that there is a gain. So no argument. Deeper == more accurate. For _any_ tree search anyone is using in chess...
And the EBF is not growing as depth grows, or you would see a serious limit when your EBF reaches 5 or whatever. So if EBF is constant from depth 1 to depth 24 ...
Here my point is the following: the EBF is growing, but
not without limit.
It gets closer and closer to the "real" EBF.
Once again, above you can see that depths 5-8 have EBF < 2; from depth 9
on it is 2 (let's assume the searcher searches with an EBF of 2).
I have measured this many times, over thousands of positions, iteration by iteration. I do not see this change in EBF. It certainly goes haywire when we change our minds a lot, because move ordering is broken, and that runs the EBF up significantly. It is sometimes much lower than expected, when move ordering is nearly perfect, as in some endgames. But overall, for a given position, I don't see it start low and go high...

Here are some simple iteration-by-iteration times from a longish time control (60 10) game played on icc:

.37, .62, .84, 1.5, 3.61, 6.96,
8.06, 12.72, 38.56, 43.0,
0.29, 0.45, 1.93, 1.52, 7.03, 4.32, 8.9, 18.11, 48.0, 71.0

The EBF numbers for that last case work out like this:

1.5, 4.3, 0.79, 4.6, 0.6, 2.1, 2.0, 2.7, 1.47

Doesn't quite "start small and get bigger" on my cursory examination. If you want more data, I can email you the entire game. It might be better to play one long game with noise set to 0 so that all the iterations are displayed. The early EBF values are not really good because I only measure time to 0.01-second resolution, and early iterations have a lot of 0.00-type times or very small times that are more noise than real...

The first move is a pretty shallow "find a move to ponder" search. The last one is for depths 15-25. Those times are the times required to finish an iteration, defined as T = time(current) - time(previous), where the times are the cumulative times printed at the end of each iteration, as in this:

Code: Select all

              depth   time  score   variation (1)
               15     0.57   0.20   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. a5 Qe7 19. Rfc1 Bxe3 20. Qxe3 Rfb8
                                    21. Rab1 Rxb1 22. Rxb1 Nc5
               15->   0.63   0.20   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. a5 Qe7 19. Rfc1 Bxe3 20. Qxe3 Rfb8
                                    21. Rab1 Rxb1 22. Rxb1 Nc5
               16     0.82   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qe1 Rc6 20. Bf4 Qc7
                                    21. Qf1 a5 22. Bb5 Bxf3 23. gxf3
               16->   0.92   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qe1 Rc6 20. Bf4 Qc7
                                    21. Qf1 a5 22. Bb5 Bxf3 23. gxf3
               17     1.19   0.15   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qe1 Qc7 20. Bf4 a5
                                    21. Bd3 Bxf3 22. gxf3 Rfd8 23. Qe4
                                    Kf8
               17->   1.37   0.15   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qe1 Qc7 20. Bf4 a5
                                    21. Bd3 Bxf3 22. gxf3 Rfd8 23. Qe4
                                    Kf8
               18     1.82   0.19   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qd3 Be7 20. Qxa6 Ra8
                                    21. Qb5 Ra5 22. Qb2 Rxa4 23. Be3 Ra2
                                    24. Qb5
               18->   3.30   0.19   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qd3 Be7 20. Qxa6 Ra8
                                    21. Qb5 Ra5 22. Qb2 Rxa4 23. Be3 Ra2
                                     24. Qb5 (s=3)
               19     4.06   0.12   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qd3 Be7 20. Qxa6 Ra8
                                    21. Qb5 Ra5 22. Qb2 Rxa4 23. Be3 Ra2
                                    24. Qb5 Nc5 25. Bxc5 Bxc5 26. Qxc5
                                     Rxe2 (s=2)
               19->   4.82   0.12   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qd3 Be7 20. Qxa6 Ra8
                                    21. Qb5 Ra5 22. Qb2 Rxa4 23. Be3 Ra2
                                    24. Qb5 Nc5 25. Bxc5 Bxc5 26. Qxc5
                                     Rxe2 (s=2)
               19->   4.82   0.12   15. ... Bd5 16. Qxc3 Bxc5 17. Rb1 O-O
                                    18. Rd1 Rc8 19. Qd3 Be7 20. Qxa6 Ra8
                                    21. Qb5 Ra5 22. Qb2 Rxa4 23. Be3 Ra2
                                    24. Qb5 Nc5 25. Bxc5 Bxc5 26. Qxc5
                                    Rxe2
               20     9.57   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rb8 24. a5 Kh8 25. Bc4
               20->  11.80   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rb8 24. a5 Kh8 25. Bc4
               21    13.64   0.15   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rb8 24. a5 Kh8 25. Kg1 Kg8
               21->  16.12   0.15   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rb8 24. a5 Kh8 25. Kg1 Kg8
               22    20.38   0.11   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Rb8 25. h3 Be4 26. Bc4 
               22->  25.02   0.11   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Rb8 25. h3 Be4 26. Bc4 
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Kf8 25. h3 Rb8 26. Bf1 Kg8
                                    27. Bc4
               24->   1:31   0.12   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Kf8 25. h3 Rb8 26. Bf1 Kg8
                                     27. Bc4 (s=4)
               25     1:52   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Kf8 25. h4 Rb8 26. h5 Kg8
                                     27. Kh2 Kh8 (s=3)
               25->   2:42   0.16   15. ... Bd5 16. Qxc3 Bxc5 17. Be3 O-O
                                    18. Rfc1 Bxe3 19. Qxe3 Qa5 20. Qc3
                                    Qxc3 21. Rxc3 Ra7 22. Re1 Re8 23. Kh1
                                    Rd8 24. a5 Kf8 25. h4 Rb8 26. h5 Kg8
                                     27. Kh2 Kh8 (s=4)


I would characterize it as "getting more accurate with depth" which is actually the point.
That is definitely true, but certainly not all there is to say about this mechanism.
It can also be viewed as verification of the _inner tree_, while we
are already examining a more complex problem (search space).

Is that not true of _any_ tree, pruned or not, reduced or not? You add another layer of moves on top, and the deeper you progress below the root, the more accurately you refine the root score and PV. So I don't see this "revolutionary widening" idea. Nothing is new. The shape of the tree is certainly different, but going a ply deeper helps any tree search.


Desperado

Re: Delayed Fullwidth Search

Post by Desperado » Sat Apr 30, 2011 7:41 am


Depends on your definition of "widening" and "deepening". If deepening is top to bottom, and width is left-to-right, wouldn't you have to agree that the shape of the tree continually gets wider as it gets deeper? ...
I agree with you. But let me name this basic type of widening "type1" widening.
We also widen the tree not only by adding a layer; we
widen it by using more width within the sublayers. Just for this discussion
I will call that "type2" widening.
but all that does is delay the widening.
Exactly, it does. Using type2 widening delays it more aggressively.
We all use pruning and reduction techniques to achieve this. So we all
_shape_ the tree until it gets very unbalanced. So why not do this
by default? Just replace the linear exploration formula with a more
aggressive one.

_All_ strong engines meanwhile delay, delay, delay... via pruning and
reduction techniques, _with success_. So for the moment, imho, this
is the way to go in the future.
I am not talking iteration-by-iteration. I am talking about "drop-dead terminate the search we are out of time". I have always agreed that as we go deeper, we gain accuracy. If you talk purely about plies, the LMR/pruning plies are worth less than the full-width plies. But we can get so many more of those LMR/futility plies that there is a gain. So no argument. Deeper == more accurate. For _any_ tree search anyone is using in chess...
Indeed, LMR plies are worth less than full-width plies, just as _delayed plies_
are worth less than full-width plies, until a delayed ply becomes a full-width ply.
I have measured this many times, over thousands of positions, iteration by iteration. I do not see this change in EBF. ... The early EBFs are no real good ...
Maybe I can argue like this: just replace your exploration formula,
newdepth = depth - 1, with another formula of your choice, maybe like the one
given above. Just for fun, make it very aggressive. What happens now?
Without measuring, I can see that I will reach, let's say, ply 20 instead of
14 within a second. And the LMP (leftmost paths) are close to the same.
6 more plies within a second means an EBF that is definitely smaller up to a given depth.
This may change from ply 20 on, where the delayed EBF (growing) comes closer to the real EBF.
The shape of the tree is certainly different, bug going a ply deeper helps any tree search.
This is indeed a term which is very useful for the discussion: "tree shape".
It is exactly the description I used within my engine while playing around with this.
Certainly we do not need to explore the tree with the intention of searching
a _balanced_ tree according to depth. No engine does that anymore today.
So we may easily start with another tree shape, one that widens the tree
with type1 widening and also widens it with type2 widening.
The thing is that type2 widening will not hurt the EBF, because it is hidden
in the sublayers.

regards, Michael

mcostalba

Re: Delayed Fullwidth Search

Post by mcostalba » Sat Apr 30, 2011 10:06 am

Desperado wrote: _All_ strong engines meanwhile delay, delay, delay... via pruning and
reduction techniques, _with success_. So for the moment, imho, this
is the way to go in the future.
IMHO the optimal compromise that engines aim to reach is a statistical rule that more or less says, for instance:

Code: Select all

Given the time to search a move, is it better to use it to search the 1st ordered move in a node at ply 24, or to search the 15th ordered move in a node at ply 15?
In more formal words: given a search tree at a given point in time, the optimal choice for the next move to search is the one that maximizes the probability of changing the best move at the root.

If delaying the widening, as described above, turns out to be successful, then it means that the best move to search, on average, is not the first (or among the first) moves of the deepest node, but could be a middle move in an upper node.

The best engine is the one that at any given moment is able to search the move that maximizes the probability of changing the root's best move.

Further elaborating the concept: if we assume that to change the best move at the root we need a cut-off somewhere, then we have to understand which is the most influential cut-off.

It is clear that searching the first moves of a node gives a higher probability of producing a cut-off than searching, say, a late move. But on the other hand, it is also clear that there is a higher probability of changing the root's best move if the cut-off is at a higher depth. So the probability of changing the root's best move is the product of two terms:

Code: Select all

Pr = Pc * Pp
Where Pc is the probability of having a cut-off; for a given node it decreases with move count.

Pp, instead, is the probability that a cut-off at ply p is able to change the root's best move; this decreases with increasing ply.

mcostalba

Re: Delayed Fullwidth Search

Post by mcostalba » Sat Apr 30, 2011 10:28 am

Desperado wrote:

Code: Select all

DELAYED FULLWIDTH SEARCH (EBF<=2, base 16):

iteration 5:  4  8  16  32  64
...

   newdepth = depth - (depth<7 ? bsr64(movecount) : 1);

Sorry for the question, but what are the little numbers 4 8 16 32 64? Are they the exact number of searched moves for each node?

Because I see the reduction formula, but it is not clear to me why a depth reduction should hard-limit the moves per node to 4, 8, 16 and so on...

Desperado

Re: Delayed Fullwidth Search

Post by Desperado » Sat Apr 30, 2011 11:23 am

mcostalba wrote: Sorry for the question, but what are the little numbers 4 8 16 32 64? Are they the exact number of searched moves for each node?

Because I see the reduction formula, but it is not clear to me why a depth reduction should hard-limit the moves per node to 4, 8, 16 and so on...
I thought of this:

If you look at the numbers in the brackets ([16 32 64 ...]), these numbers
are searched full-width. In this case it means that if we have 16 nodes at ply
1 and ebf==2, we finally end up with a node count of 4096 after finishing iteration 9.

Before iteration 9 we limit the _width_, for example to 4, 8, 10, 12 at ply 1 (depending on depth, instead of 16), and so
end up with, again as an example, a node count of 1536 (instead of 2048) for iteration 8.
This progression is recursive, of course (so the numbers should really be smaller; but anyway,
because we never search with more width than we would in the linear approach,
I simply used the ebf of 2 to produce the numbers), and the numbers are not derived from the given formulas;
the formulas should only indicate such a behaviour.

Michael

mcostalba

Re: Delayed Fullwidth Search

Post by mcostalba » Sat Apr 30, 2011 11:42 am

Desperado wrote: I simply used the ebf of 2 to produce the numbers, and the numbers are not derived from the given formulas;
the formulas should only indicate such a behaviour.
So, just to be sure, the only formula that you are investigating is this one:

Code: Select all

newdepth = depth - (depth<7 ? bsr64(movecount) : 1);
Correct ?

Desperado

Re: Delayed Fullwidth Search

Post by Desperado » Sat Apr 30, 2011 11:51 am

mcostalba wrote:
Desperado wrote: I simply used the ebf of 2 to produce the numbers, and the numbers are not derived from the given formulas; the formulas should only indicate such a behaviour.
So, just to be sure, the only formula that you are investigating is this one:

Code: Select all

newdepth = depth - (depth<7 ? bsr64(movecount) : 1);
Correct ?
If you like it, then yes :lol:.

At least the formula should include the following content.

Code: Select all

"width reduction": like bsr64(movecount)

which is

"depth sensitive": like depth<7

or maybe something like

Code: Select all

newdepth = depth - bsr64(maxN(1, movecount-depth));
So, correct :)

Post Reply