I think you are stuck on alpha/beta, and that's all you know, more seriously. In that thread you had no clue what a tree shape with reductions looks like. You proposed something from the 60s as the tree shape: the purely exponential, non-widening W^Depth.

bob wrote:
Laskos wrote:
You seem not to know how a single-core tree grows. An example is that 3-year-old thread.

bob wrote:
Come back to the discussion when you have some basic idea about what is being discussed. As of right now, you are arguing without a clue about what is happening inside a tree.

syzygy wrote:
With effort you can make Crafty beat Stockfish...

bob wrote:
But you CAN reduce the total with effort, and you can NOT reduce just the ones that are no good.

syzygy wrote:
Note that if some of it is helping, we're already done. That part of the extra nodes that is helping is referred to as "widening".

bob wrote:
Explain HOW you are going to pull that off. You are already searching 2x the minimal tree for this position. Most of which is not helping you at all.
This is the thing, nothing more, nothing less.
Widening == not all "extra" nodes are worthless.
Nope. Firstly, you cannot duplicate the 4-core tree with a single-threaded search without a time penalty. Secondly, even if you could, it would only help in the case of a super-linear speedup, which nobody is claiming.

bob wrote:
And IF those extra nodes help, search 'em in the regular search also, they will help there too.
http://www.talkchess.com/forum/viewtopi ... 83&t=38808
The pictures are gone, but it can still be understood, with Marco Costalba surely understanding it, and Zach Wegner having some plots of the tree behavior.

mcostalba wrote:
Laskos wrote:
Tree shape is a completely different matter, and as you can see, it is widening going to _target_ depths 5, 10, 15, 20, 25. I can clearly separate EBF into two components: EBF_widening, which is ~1.4, and EBF_deepening, which is ~1.6, for a total EBF of 1.4*1.6 ~ 2.2. As I already wrote, going from target N to target N+1 requires a comparable effort in both deepening and widening.
This is very interesting. If I have understood correctly, you are saying that the extra effort needed to move the search to depth N+1 is due for a good 45% (1.4 vs 1.6) to additional search at the inner nodes, and only for 55% to additional search at the new, deeper leaf nodes.

You, Bob, missed the whole point, IIRC, and you have no idea how the tree grows even in single-core Crafty, iteration after iteration.

Zach Wegner wrote:
I did plot some Stockfish trees and, as expected since it uses logarithmic reductions, the trees have a huge amount of widening. In fact, the widening accounts for far more of the tree growth than deepening.

Laskos wrote:
Try telling that to Bob; he doesn't even know how to read a plot. The slopes (first derivatives) give EBF(target depth, ply) on the tree, not the absolute values, and those slopes are both larger and smaller than the overall EBF(target depth) at different plies, giving a pretty constant total EBF(target depth), i.e. from target depth N to N+1 (different trees). I am really eager to see someone plot the tree shape of Stockfish for different target depths; there is no way it is not widening compared to the purely exponential. His "widening" is not a widening at all (he seems to be confused about what widening of the tree means). Bob's theoretical tree shape is the purely exponential, non-widening W^D. How silly is that? I don't think he knows the tree shape even of his own Crafty.
bob wrote:
The term "widening" was simply wrong. ANY tree is wider near the tips than near the root. Big surprise. But it doesn't grow EXTRA wider as it goes deeper, just in a fairly constant proportion.

BTW I knew how an alpha/beta tree grows before you knew how to spell the same.
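The EBF decomposition quoted above is easy to check with a few lines. This is just a sketch of the arithmetic; the 1.4 and 1.6 values are the thread's measurements, not mine:

```python
# Sketch of the EBF decomposition from the quoted posts: going from target
# depth N to N+1 multiplies the node count by EBF_total = EBF_w * EBF_d.
# The two factors below are the values measured in the thread, assumed here.
from math import log

ebf_widening = 1.4   # extra re-search of inner nodes (widening)
ebf_deepening = 1.6  # extra work on the new, deeper leaf layer (deepening)
ebf_total = ebf_widening * ebf_deepening  # ~2.24, close to the ~2.2 quoted

# Share of the per-iteration effort growth attributable to widening,
# measured on a log scale (growth factors multiply, so their logs add):
widening_share = log(ebf_widening) / log(ebf_total)
print(round(ebf_total, 2), round(widening_share, 2))  # prints: 2.24 0.42
```

The ~42% log-share of widening is roughly the "good 45%" Costalba read off the 1.4-vs-1.6 split.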
With logarithmic reductions a la Stockfish, I plotted tree shapes as leaf-node density: first Bob Hyatt's no-clue theorized tree shape, probably not even valid for Crafty, then the Stockfish tree shapes of Marco Costalba and Zach Wegner.
1/ This is Bob Hyatt's theory of the tree shape (from the 60s?):
On iteration 20 the nodes down to depth 5 are identical in number to the nodes down to depth 5 on iteration 5.
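In code, the non-widening model looks like this. A toy sketch, with an assumed uniform branching factor W; the point is only that ply width never depends on the iteration:

```python
# Toy version of the purely exponential, non-widening W^Depth model
# criticized above: the width of the tree at ply d is W**d, regardless of
# which iteration (target depth) is being searched.
W = 6  # assumed uniform branching factor (toy value)

def nodes_at_ply(ply, iteration):
    """Node count at a given ply under the W^Depth model."""
    return W ** ply if ply <= iteration else 0

# The claim at issue: ply-5 width is the same on iteration 5 and iteration 20.
print(nodes_at_ply(5, 5) == nodes_at_ply(5, 20))  # prints: True
```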
2/ Stockfish model (logarithmic reductions):
As one can see, the shape of the tree down to depth 5 is very different on iteration 20 and on iteration 5. On iteration 20, Stockfish searches depth 5 almost full width, while high depths are heavily pruned and sparse. On iteration 5, depth 5 is sparse.
So, Stockfish gains 2 things over Bob's exponential tree:
1) It goes deeper by heavy pruning.
2) The tree thickens with each iteration, becoming almost full width at lower depths after a high number of iterations.
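Both effects show up in even a crude model of a reduced tree. A sketch under stated assumptions: the branching factor W=6 and the `int(log2(move_index))` reduction are toy stand-ins, not Stockfish's actual formula:

```python
# Toy model of a tree with logarithmic late-move reductions, to illustrate
# widening. W and the reduction schedule are assumptions for this sketch,
# not Stockfish's real values.
from collections import Counter
from math import log2

W = 6  # assumed branching factor

# Late moves get reduced by ~log2(move index), a crude stand-in for LMR.
REDUCTIONS = [int(log2(i + 1)) for i in range(W)]  # [0, 1, 1, 2, 2, 2]

def nodes_per_ply(target_depth):
    """Count nodes at each ply of the reduced tree, via a depth-bucket DP."""
    per_ply = {}
    frontier = Counter({target_depth: 1})  # remaining depth -> node count
    ply = 0
    while frontier:
        per_ply[ply] = sum(frontier.values())
        nxt = Counter()
        for d, n in frontier.items():
            if d > 0:  # non-leaf: expand W children at reduced depths
                for r in REDUCTIONS:
                    if d - 1 - r >= 0:
                        nxt[d - 1 - r] += n
        frontier = nxt
        ply += 1
    return per_ply

# Ply-5 width: full (W**5 = 7776) when the target depth is 20, but a single
# node when the target depth is 5 -- the tree thickens iteration by iteration.
print(nodes_per_ply(20)[5], nodes_per_ply(5)[5])  # prints: 7776 1
```

The deep plies of the target-20 tree are meanwhile far sparser than W^d, which is the other half of the picture: heavy pruning buys the depth, widening fills in the shallow plies.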
And as Zach Wegner said, Stockfish spends more effort on the inner nodes than on deepening.
As for Bob, if he cannot visualize the tree shape on 1 core, how can he see the parallelization?