elcabesa wrote: when you talk about selective depths, what do you mean?

Well, I never even look at "selective depth", as it can have different meanings for different engines, and anyway it's not an important number. Maybe if everyone used the Stockfish definition it would be somewhat useful. I'm always referring to the normal depth, i.e. the iteration counter.
I have always used the deepest node searched by qsearch function, but I have just noticed Stockfish uses the deepest node searched by the main search function.
Stockfish depth vs. others; challenge
Moderators: hgm, Rebel, chrisw
Re: Stockfish depth vs. others; challenge
Hi Marco,
That explains the difference I noted. I think most engines show the deepest nodes searched in the qsearch - Maverick does this. So if Stockfish has a different definition then this is the explanation.
Thanks,
Steve
http://www.chessprogramming.net - Maverick Chess Engine
Re: Stockfish depth vs. others; challenge
My guess is that Stockfish benefits primarily from synergy, i.e. it can do big reductions because it already reaches amazing depths (i.e. because it already reduces a lot).
I remember that I failed miserably trying to implement some of Stockfish's reductions/prunings in Rodent a year or two ago. Similar attempts made this year were much more successful, with one caveat: Rodent is now mature enough to absorb the ideas from the Stockfish code, but each and every time I have to use more conservative numbers, since the risk/reward ratio is different at lower depths.
Pawel Koziol
http://www.pkoziol.cal24.pl/rodent/rodent.htm
Re: Stockfish depth vs. others; challenge
Yes, selective depth is measured only on the main search. In qsearch there can be positions that recurse very deeply, but that is not really searching; it is just resolving the captures. So measuring on the main search seems a bit more consistent to me.
The main extension of SF is not the check extension, but the singular extension... and it is also the most important. It is this one that can grow a much bigger selective depth out of the normal iteration depth.
Re: Stockfish depth vs. others; challenge
Marco,
The depth search of Stockfish is impressive and I see improvements in the search also in very recent versions, but it still looks to me that the settings are not optimized. With a setting I made, called "Ratassada", I am getting better results regardless of the engine version.
Basically I see Stockfish stronger in the middlegame and endgames, even without TB.
I admit my books probably help the engine a lot by giving suitable openings, but I get at least +85 Elo against Houdini 3 Pro with the Perfect 2012 CTG, also at fast blitz games.
Basically the changes to the setting were to improve the first part of the game and the middlegame without modifying/affecting the endgame.
I admit the number of games is not significant enough to draw conclusions.
I am willing to share this setting for intensive testing...
Ciao
Sandro
Sandro Necchi
Re: Stockfish depth vs. others; challenge
sandro Necchi wrote: I am willing to share this setting for intensive testings...

Thanks for your willingness to share: please post them on the SF dev forum here:
https://groups.google.com/forum/?fromgr ... ishcooking
And we will properly test them.
Re: Stockfish depth vs. others; challenge
lkaufman wrote: If we compare to Ivanhoe, the extension rules are more stingy in Stockfish, but not way more. Maybe this accounts for a quarter ply or so. The futility pruning seems fairly similar to Ivanhoe in terms of net effect; I doubt that they differ by more than a quarter ply anyway. I'll accept your estimate that SF LMR adds 2 ply. Regarding evaluation, Stockfish is certainly faster than Komodo, but probably not faster (certainly not much faster) than Ivanhoe. More aggressive null move (which you didn't mention) might add another ply. So maybe we have accounted for 3.5 plies of difference. But we typically see differences around 10 plies, and much more than that in endgames! So there is still a lot to explain.

This seems to me like all guesses. Why not turn off all prunings and then add them back one by one, or better yet start from Glaurung and add the improvements? That must tell where the depth is coming from, though I suspect it is a collective effect of small contributions. Heavy reductions + almost no extensions + light q-search already gets you big depths, as Ryan pointed out.
Re: Stockfish depth vs. others; challenge
Henk wrote: In my chess program the fastest way to reach huge search depths is to increment R in both LMR and null move. But that makes my program only play worse.

lkaufman wrote: Same for Komodo. Presumably if Stockfish decreased R to Ippo-like levels in both LMR and null move it would get weaker, else they would do that. So why can Stockfish get away with reductions that everyone else finds to be quite bad?

Maybe the first step is to make a chess program scalable. If depth is increased and the program plays worse, then that means it is not scalable. So reductions/extensions/prunings such as futility pruning and razoring should use boundaries that are not constants but formulas computed from the current maximum search depth. The same holds for R, which should be something like c1 + c2 * maxDepth. [Perhaps c1 then gives another minor scalability problem.]
Re: Stockfish depth vs. others; challenge
Daniel Shawul wrote: This seems to me like all guesses. Why not turn off all prunings and then add them back one by one, or better yet start from Glaurung and add the improvements? That must tell where the depth is coming from, though I suspect it is a collective effect of small contributions. Heavy reductions + almost no extensions + light q-search already gets you big depths, as Ryan pointed out.

I agree, it must be the sum of lots of things. But I note that Stockfish DD actually does FULL ply check extensions (when not losing) as well as singular extensions, so I don't think lack of extensions can explain very much. Really the puzzle for me is more how Stockfish can get away with so much reducing/pruning when no one else seems to be able to do so. If I made Komodo reduce and prune as much as SF it would not be a top engine.
Re: Stockfish depth vs. others; challenge
Henk wrote: In my chess program the fastest way to reach huge search depths is to increment R in both LMR and null move. But that makes my program only play worse.

lkaufman wrote: Same for Komodo. Presumably if Stockfish decreased R to Ippo-like levels in both LMR and null move it would get weaker, else they would do that. So why can Stockfish get away with reductions that everyone else finds to be quite bad?

Henk wrote: Maybe the first step is to make a chess program scalable. If depth is increased and the program plays worse, then that means it is not scalable. So reductions/extensions/prunings such as futility pruning and razoring should use boundaries that are not constants but formulas computed from the current maximum search depth. The same holds for R, which should be something like c1 + c2 * maxDepth. [Perhaps c1 then gives another minor scalability problem.]

We have found what you say to be true for null move, but not for much else. But it works for Stockfish somehow.