Stockfish has the following code for calculating selective search depth:

if (PvNode && thisThread->maxPly < ss->ply)
    thisThread->maxPly = ss->ply;
I get different numbers for the selective depth if I remove the PvNode condition.
I wonder if there is a standard definition of the meaning of selective depth.
I can see from the UCI protocol that it is the "selective search depth in plies", but it is not clear whether that means the length, in plies, of the longest line that the program searches.
If it does, then there is obviously a mistake in the Stockfish code, and the correct code is

if (thisThread->maxPly < ss->ply)
    thisThread->maxPly = ss->ply;
It makes sense to only measure selective search depth in PV nodes. It saves some CPU cycles, and reporting selective search depth is not much more than a gimmick anyway.
My opinion is that in this case it is better not to report selective search depth, because it is clearly misleading.
Today Stockfish does not search more than 100 plies forward, because the code contains

const int MAX_PLY = 100;

The wrong selective depth number, which is always significantly smaller than 100, creates the illusion that Stockfish never searches 100 plies forward (maybe it indeed never gets there, but at least that is not proved).
If it satisfies your personal desires, just change your personal copy of the source... I am pretty sure the general public does not share your feelings.
syzygy wrote: It makes sense to only measure selective search depth in PV nodes. It saves some CPU cycles, and reporting selective search depth is not much more than a gimmick anyway.
It saves cycles? What evidence do you have that it saves cycles?
That's too easy. EVERY compare, jump less than, and mov instruction group takes cycles. If you don't execute those instructions you definitely save the cycles they would have burned. Might not be a lot, but it is certainly greater than zero.
I think that the maxPly value is something useful for debugging, such as when you go Fruit-like and grossly overextend and blow arrays. I've seen Fruit go beyond 200 plies where the basic iteration depth is just 7 plies. That is certainly not reasonable, and displaying the number lets the author know that he has broken something and is inviting a non-terminating search. Otherwise, it is useless...
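As an illustration of that debugging use, here is a minimal C++ sketch of guarding per-ply arrays against a runaway search. The names are illustrative; MAX_PLY is modeled on the Stockfish snippet above, and the stack array and recursion are made up, not taken from any engine:

```cpp
#include <cassert>

const int MAX_PLY = 100;          // illustrative ply bound, as in the Stockfish snippet

struct SearchStack { int ply; /* killers, static eval, ... */ };
SearchStack stack[MAX_PLY];       // per-ply data; an unbounded search would overrun it

int search(int ply, int depth) {
    if (ply >= MAX_PLY || depth <= 0)   // hard stop before the per-ply arrays overflow
        return 0;                       // placeholder for a static evaluation
    assert(0 <= ply && ply < MAX_PLY);  // debug build: indices are now provably in range
    stack[ply].ply = ply;
    return -search(ply + 1, depth - 1); // toy recursion standing in for the move loop
}
```

Printing the maximum ply actually reached serves the same diagnostic purpose: a number far beyond the nominal iteration depth signals runaway extensions long before the arrays blow.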
I think it is useful to get some bound on the number of plies that you need to search in games (provided, of course, that you define it correctly, and not with the dubious definition that is in the Stockfish code). And if the target is to save cycles, then it is better to get rid of selective depth entirely.
If the program plays thousands of games and in no game gets a selective depth bigger than 60 plies, then that means 60 plies are enough for it (the opposite is not correct: if it gets to 200 plies, that does not mean the last 140 plies make it stronger, and testing is needed for that).
That's too easy. EVERY compare, jump less than, and mov instruction group takes cycles. If you don't execute those instructions you definitely save the cycles they would have burned. Might not be a lot, but it is certainly greater than zero.
No, not really. With instruction-level parallelism it is sometimes possible to execute some instructions for free, if they can be decoded and executed at the same time as some other instruction.
Actually, if PvNode weren't a compile-time constant (which it is, because it can be deduced from the template parameters), the proposed code would likely be faster, because the check for PvNode would involve a hard-to-predict branch.
So it's possibly true that the original code is faster, but it is not obvious to me. Given how complex modern processors and compilers are, the only way to tell if something is slower is to measure it.
syzygy wrote: It makes sense to only measure selective search depth in PV nodes. It saves some CPU cycles, and reporting selective search depth is not much more than a gimmick anyway.
This is weird logic. Basically you are saying that instead of printing X, it makes sense to print another quantity Y, because you can calculate Y faster and X was not very important anyway. Would it make sense to print 3.1415927 instead of the node count, just because it saves cycles?
Printing seldepth is not an obligation. If you don't want to spend the cycles it would take to calculate it, just don't print it. That seems to make more sense to me than starting to lie about it.
It's not a lie; it's just a particular definition of selective search depth. It still gives some indication of the longest lines that are being considered. And in a way, only the PV lines are really lines that are being considered; the rest consists of lines that are being refuted.
Of course the reported value is not very useful for debugging purposes and/or for knowing how big internal arrays should be (which seems to be what Uri wants). But there are debugging modes for that.