Code:
Rank Name         Elo   +   -  games  score  oppo.  draws
   1 Giraffe ncs   13   6   6   3210    53%    -13    16%
   2 Giraffe dcs  -13   6   6   3210    47%     13    16%
Obviously your mileage will vary.
Code:
double lmr_node_mult[256] {1.25, 1.25, 1.20, 1.15, 1.10, 1.05, 1.0, 1.0, 0.95, 0.90 ...}
That is a pretty interesting idea and makes LMR more reasonable.
voyagerOne wrote:
I didn't spend time reading all the posts, but I had an idea that is somewhat similar:
We store the move count in our stack for each ply we play and use that count as a way to help predict LMR.
Example A:
ply1: 2
ply2: 1
ply3: 4
ply4: 2
Example B:
ply1: 13
ply2: 9
ply3: 22
ply4: 11
Let's say at ply 5 we do LMR. Example A may have a higher chance to fail high (forcing a full re-search) than Example B.
It will be interesting to get data on the "move signature of the PV"
i.e. at which move number did we find the PV move on each ply.
We may then be able to predict LMR with better accuracy using a move signature.
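The per-ply move-count idea above could be sketched roughly as follows. This is my own illustration, not code from any engine in the thread: all names (`path_move_average`, `extra_reduction`) and the thresholds are invented, and it assumes the search records, for each ply on the current path, the move number at which the move being searched was tried.

```cpp
#include <numeric>
#include <vector>

// Average move index along the path from the root to the current node.
// sig[p] holds the move number tried at ply p (the "move signature").
double path_move_average(const std::vector<int>& sig) {
    if (sig.empty()) return 0.0;
    int sum = std::accumulate(sig.begin(), sig.end(), 0);
    return static_cast<double>(sum) / sig.size();
}

// Extra LMR reduction based on the path's signature: paths built from
// late moves (Example B) get reduced more than early-move paths
// (Example A). Thresholds are purely illustrative.
int extra_reduction(const std::vector<int>& sig) {
    double avg = path_move_average(sig);
    if (avg < 4.0)  return 0;  // early-move path: reducing is risky
    if (avg < 10.0) return 1;
    return 2;                  // late-move path: likely safe to reduce more
}
```

With the thread's examples, Example A's path `{2, 1, 4, 2}` averages 2.25 and gets no extra reduction, while Example B's `{13, 9, 22, 11}` averages 13.75 and gets the maximum.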
Thanks!
PK wrote:
Good luck with Your project! I have read this thread only recently, and I hope that it is possible to create a working engine with it, even if it turns out slightly weaker than normal alpha-beta. The only thing I don't quite understand is Your anti-LMR bias. After all, LMR can be added to Your search very easily, just by building a table of multipliers for node counts of quiet moves, something like:
Code:
double lmr_node_mult[256] {1.25, 1.25, 1.20, 1.15, 1.10, 1.05, 1.0, 1.0, 0.95, 0.90 ...}
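PK's table could be wired into a node-budget search along these lines. This is only a sketch under assumed names (`child_budget` and `base_share` are mine, not Giraffe's): the multiplier, indexed by the move's position in the ordering, scales the share of the parent's node budget that the child receives.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Multiplier table in the spirit of PK's suggestion: early moves get a
// boosted share of the parent's node budget, later quiet moves a smaller
// one. The tail values here are invented for illustration.
static const double lmr_node_mult[16] = {
    1.25, 1.25, 1.20, 1.15, 1.10, 1.05, 1.00, 1.00,
    0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60};

// Node budget handed to the child searched at position move_index,
// where base_share is whatever the engine's normal allocation would be.
uint64_t child_budget(uint64_t base_share, int move_index) {
    int i = std::min(move_index, 15);  // clamp into the table
    return static_cast<uint64_t>(std::llround(base_share * lmr_node_mult[i]));
}
```

The nice property, matching PK's point, is that this reproduces LMR's effect (late moves searched less deeply) without reintroducing an explicit depth parameter.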
That can be said for all reduction techniques: given enough time, the search will still find them. However, if LMR makes the search find those strong moves later, it made the search worse (in terms of finding those moves).
voyagerOne wrote:
Of course LMR will reduce some good/strong moves.
However, it will eventually find them in the next iteration(s).
EBF doesn't really mean much when you are reducing so aggressively, because depth doesn't really mean much, and EBF doesn't mean anything without meaningful depth. Can you really say you have searched to depth 10 when you haven't seen 99.99% of the leaf nodes at depth 10?
voyagerOne wrote:
LMR significantly reduces the EBF and is the major factor in modern engines reaching extreme depths.
I think the two of you are "talking past" each other. LMR makes you need more plies to find a move, but in general you get there quicker because of LMR. So in terms of depth, it is worse. In terms of time, it is a winner, at least in the big majority of chess positions...
matthewlai wrote:
That can be said for all reduction techniques: given enough time, the search will still find them. However, if LMR makes the search find those strong moves later, it made the search worse (in terms of finding those moves).
voyagerOne wrote:
Of course LMR will reduce some good/strong moves.
However, it will eventually find them in the next iteration(s).
Depth has been meaningless since the first search extension was added, because now some pathways are searched deeper than others. The old saying "all plies are not created equal" is spot-on today. The only thing that matters is the time to find the right move.
matthewlai wrote:
EBF doesn't really mean much when you are reducing so aggressively, because depth doesn't really mean much, and EBF doesn't mean anything without meaningful depth. Can you really say you have searched to depth 10 when you haven't seen 99.99% of the leaf nodes at depth 10?
voyagerOne wrote:
LMR significantly reduces the EBF and is the major factor in modern engines reaching extreme depths.
At this point, depth is just an arbitrary number (and so is EBF).
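To make that concrete: the usual EBF estimate solves N ≈ b^d for b, so the same node count reported at an inflated nominal depth yields a much smaller b, which is exactly why the number stops meaning anything once reductions stretch d. A minimal sketch:

```cpp
#include <cmath>

// Effective branching factor backed out from total node count N and
// nominal depth d, i.e. the b solving N = b^d. With aggressive LMR the
// nominal d is inflated, so b shrinks without the tree being any wider
// or better explored.
double effective_branching_factor(double nodes, int depth) {
    return std::pow(nodes, 1.0 / depth);
}
```

For example, one million nodes reported as "depth 10" gives b ≈ 3.98, while the same million nodes reported as "depth 20" gives b ≈ 2.0, with no change at all in what was actually searched.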
In my program (using a node-count-based search) I defined a simple equation to map depth to node budget, because both CECP and UCI deeply assume that the engine is depth-based. My definition of depth is therefore completely arbitrary. With LMR, I feel that their definition of depth is almost as arbitrary.
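A mapping of the kind described could look like the following. This is not claimed to be Giraffe's actual equation; the constants `B` and `BASE_NODES` are entirely made up, and the point is only that any geometric mapping from nominal depth to node budget (and back) will satisfy the protocols while being just as arbitrary as the depth it reports.

```cpp
#include <cmath>
#include <cstdint>

// Arbitrary geometric depth <-> node-budget mapping for protocol use:
// "depth d" is declared to mean a budget of BASE_NODES * B^d nodes.
constexpr double B = 2.0;            // assumed effective branching factor
constexpr double BASE_NODES = 100.0; // budget assigned to "depth 0"

uint64_t depth_to_budget(int depth) {
    return static_cast<uint64_t>(BASE_NODES * std::pow(B, depth));
}

int budget_to_depth(uint64_t nodes) {
    // Invert the mapping so the engine can report a "depth" to the GUI.
    return static_cast<int>(std::log2(static_cast<double>(nodes) / BASE_NODES));
}
```

Any other monotone mapping would serve equally well, which is the author's point: the reported depth is a protocol accommodation, not a measurement.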
Or represent the node budget as log(nodes). Division becomes subtraction. Scale it such that you can use integer arithmetic. And then you find that you have, in fact, the equivalent of fractional depths.
matthewlai wrote:
Well, there is one unexpected problem: overflow!
I am ready for 128-bit.
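The log representation suggested above could be sketched as a fixed-point log2 budget. `GRAIN` and the function names are my own; this only illustrates the idea, not anyone's actual implementation. Splitting a budget among k children becomes a subtraction, and the overflow worry disappears, since even a budget of 2^64 nodes is just a log2 of 64.

```cpp
#include <cmath>
#include <cstdint>

// Fixed-point log2 node budget: store log2(nodes) scaled by GRAIN so
// plain integer arithmetic suffices. This is exactly an integer
// "fractional depth" with GRAIN units per doubling.
constexpr int GRAIN = 256;  // fixed-point units per factor-of-2 of budget

int64_t to_log_budget(double nodes) {
    return static_cast<int64_t>(std::llround(std::log2(nodes) * GRAIN));
}

// Budget left for each of k equal children: dividing raw node counts
// by k is subtracting log2(k) in the log domain.
int64_t split_budget(int64_t log_budget, int k) {
    return log_budget - to_log_budget(static_cast<double>(k));
}
```

For instance, a 1024-node budget is stored as 2560 (10 doublings times 256); splitting it among 4 children leaves 2048, i.e. 256 nodes each, with no large multiplications anywhere.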