Average search depths


mike_bike_kite
Posts: 98
Joined: Tue Jul 26, 2011 12:18 am
Location: London

Average search depths

Post by mike_bike_kite »

Just curious, but what depths do modern chess engines search to (say, if thinking time is 1 sec)? I'd assume bad moves would be pruned at lower search depths, and after that it would depend on the number of pieces on the board, how good the move ordering is, etc.

Could anyone give me some rough ideas for different engines?

Mike
Eelco de Groot
Posts: 4565
Joined: Sun Mar 12, 2006 2:40 am

Re: Average search depths

Post by Eelco de Groot »

In a typical middlegame test position, I think a Stockfish clone like Rainbow Serpent can do about 16 full plies in the first second. It would probably be less if the quiescence search or the tactical search (almost) explodes, i.e. if there are a lot of captures; that seems to be a bottleneck. This is on a reasonably fast quad-core Intel Q6700, so with 4 threads, but not in 64-bit yet and with a non-optimized compile. In endgames this number goes up, but for some reason Jim Ablett's Stockfish compiles manage much higher nodes per second in endgames than RS does, so the difference between the two becomes greater in the endgame, for reasons I have to say I do not understand. In the middlegame the difference is not so pronounced.
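To make concrete what that "plies in the first second" number measures, here is a minimal, self-contained toy (not Stockfish code): an iterative-deepening driver with a one-second budget. The depth it reports is simply the last iteration that finished before the deadline, so anything that lowers nodes per second, such as an exploding quiescence search, directly lowers the reported depth. The fake_search function is a made-up stand-in for a real search.

```cpp
// Toy sketch, not engine code: iterative deepening under a one-second budget.
// The reported depth is just the deepest iteration that completed in time.
#include <chrono>
#include <cstdint>
#include <iostream>

// Hypothetical stand-in for a real search: visits roughly branching^depth nodes.
std::uint64_t fake_search(int depth, int branching = 3) {
    if (depth == 0) return 1;
    std::uint64_t nodes = 1;
    for (int i = 0; i < branching; ++i)
        nodes += fake_search(depth - 1, branching);
    return nodes;
}

int main() {
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::seconds(1);

    int completed = 0;
    std::uint64_t total_nodes = 0;
    // Keep starting deeper iterations until the budget runs out.
    // (A real engine would also abort in the middle of an iteration.)
    for (int depth = 1; clock::now() < deadline; ++depth) {
        total_nodes += fake_search(depth);
        completed = depth;
    }
    std::cout << "depth reached in ~1 s: " << completed
              << " (" << total_nodes << " nodes)\n";
}
```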

I think programs like Critter 1.4 and 1.6 and the free Houdini 1.5 are about as fast there in a middlegame test position as Rainbow Serpent and Stockfish. Houdini sees things quicker tactically, though, for reasons I again do not know. I don't think you can see Houdini 1.5's tactical strength advantage reflected in this number of plies in the first second.

In SMP mode Stockfish only starts its output after one second, so this number you ask about stands out more, but because communication to the GUI is more difficult in SMP mode it is not always precisely one second. On the Athlon, however, with only one thread, there is output from the first ply onward, so the output there starts much sooner than after one second.
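One way to read this number off any UCI engine, whatever its output timing, is to log the engine's "info" lines and keep the deepest "depth" whose reported "time" is still within 1000 ms. A small filter along those lines is sketched below; it assumes you pipe the engine's output (or a saved GUI log) into it, and it is only a sketch, not part of any engine.

```cpp
// Sketch of a UCI output filter: pipe an engine's output (or a saved log)
// into this program and it reports the deepest "info depth" line whose
// "time" field (milliseconds since the search started) was still <= 1000.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line;
    int best_depth = 0;

    while (std::getline(std::cin, line)) {
        std::istringstream iss(line);
        std::string token;
        int depth = -1;
        long time_ms = -1;

        while (iss >> token) {
            if (token == "depth")     iss >> depth;
            else if (token == "time") iss >> time_ms;
        }
        // Regular UCI info lines carry both "depth" and "time"; lines that
        // lack a time field (e.g. currmove updates) are skipped here.
        if (depth > best_depth && time_ms >= 0 && time_ms <= 1000)
            best_depth = depth;
    }
    std::cout << "deepest depth reported within the first second: "
              << best_depth << '\n';
}
```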

After reusing static evaluations in the null-move search I could definitely see a small speed-up, both in this number of plies in the first second and in the nodes-per-second figure, so it does have some use as a speed measurement. The time to reach, say, twenty-five or thirty plies differs much more between the programs I mentioned, because of different pruning characteristics I suppose (I only know the Stockfish code). Stockfish gets there earlier than any Ivanhoe or Houdini. It will probably miss some things on the way, but that is the price for pruning a lot.
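Roughly, the idea looks like this (a toy with made-up types and names, not the actual Stockfish or Rainbow Serpent code): evaluate the position once per node, cache the value on the search stack, and let the null-move decision, and the node right after the null move, read the cached value instead of calling the evaluator again.

```cpp
// Toy illustration (hypothetical types and names, not actual engine code):
// evaluate once per node, cache the value on the search stack, and let the
// null-move pruning decision and the node after the null move reuse it.
#include <iostream>

constexpr int VALUE_NONE = 30000;                    // marker: "no cached eval yet"

struct Position { int material; int side_to_move; }; // hypothetical position
struct Stack    { int static_eval = VALUE_NONE; };   // hypothetical search stack entry

int eval_calls = 0;
int evaluate(const Position& pos) {                  // pretend this is expensive
    ++eval_calls;
    return pos.side_to_move == 0 ? pos.material : -pos.material;
}

int search(Position pos, int depth, int beta, Stack* ss) {
    // Reuse a value handed down by the parent (e.g. after a null move);
    // otherwise evaluate once and cache it for the rest of this node.
    if (ss->static_eval == VALUE_NONE)
        ss->static_eval = evaluate(pos);

    if (depth <= 0)
        return ss->static_eval;

    // Null-move pruning: the decision reads the cached value, and the child
    // is handed the negated parent value so it can skip evaluate() entirely.
    if (ss->static_eval >= beta && depth >= 2) {
        Position after_null = pos;
        after_null.side_to_move ^= 1;                // "pass" the move
        (ss + 1)->static_eval = -ss->static_eval;    // same position, other side to move
        int null_score = -search(after_null, depth - 3, 1 - beta, ss + 1);
        if (null_score >= beta)
            return beta;                             // fail high: prune this node
    }

    // ... the normal move loop would follow here; children reached by real
    // moves evaluate for themselves, so clear the handed-down value.
    (ss + 1)->static_eval = VALUE_NONE;
    return ss->static_eval;
}

int main() {
    Stack stack[64];
    Position start{50, 0};
    std::cout << "score " << search(start, 6, 10, stack)
              << ", evaluate() calls: " << eval_calls << '\n';
}
```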

Compare this to the tabletop chess computers of the past, especially those without hash tables. It would take them hours to get to ten plies, I think, just as a rough estimate. You cannot really compare the way the programs work, and without hash tables you just can't get to a depth of sixteen plies; it would take days I think, if not more. They had a two or four megahertz processor like the MOS 6502, compared to a two to four (overclocked) gigahertz Intel Xeon today; that is a factor of 1000 in processor speed alone :) !

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
mike_bike_kite
Posts: 98
Joined: Tue Jul 26, 2011 12:18 am
Location: London

Re: Average search depths

Post by mike_bike_kite »

Many thanks Eelco, that was very informative.
Uri Blass
Posts: 10281
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Average search depths

Post by Uri Blass »

Eelco de Groot wrote:
Compare this to the tabletop chess computers of the past, especially those without hash tables. It would take them hours to get to ten plies, I think, just as a rough estimate. You cannot really compare the way the programs work, and without hash tables you just can't get to a depth of sixteen plies; it would take days I think, if not more. They had a two or four megahertz processor like the MOS 6502, compared to a two to four (overclocked) gigahertz Intel Xeon today; that is a factor of 1000 in processor speed alone :) !

Regards, Eelco
My opinion is that the tabletop chess computers did not do productive pruning, and it is not only about hash tables.

My guess is that Stockfish or Houdini without hash could get to depth 16 in the middlegame after at most a few hours, even if forced to run on old hardware that is 1000 times slower.
wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 5:03 pm
Location: British Columbia, Canada

Re: Average search depths

Post by wgarvin »

Uri Blass wrote:My guess is that Stockfish or Houdini without hash could get to depth 16 in the middlegame after at most a few hours, even if forced to run on old hardware that is 1000 times slower.
Ignoring all of the context except "hardware that is 1000 times slower": 1000 seconds is about 16.7 minutes. :wink:

Even if the real performance difference is more like 10000x, that one second still only becomes about 2.8 hours. Multiply by some factor for not having a hash table, but it's in the ballpark at least.
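Just to spell out the arithmetic (a trivial sketch; the 1000x and 10000x factors are the rough guesses from this thread, and hash-table effects are ignored):

```cpp
// Back-of-the-envelope scaling: one second of a modern search replayed on
// hardware 1000x and 10000x slower (hash-table effects ignored).
#include <iostream>

int main() {
    const double slowdowns[] = {1000.0, 10000.0};
    for (double slowdown : slowdowns) {
        double seconds = 1.0 * slowdown;             // one modern second, scaled
        std::cout << slowdown << "x slower: " << seconds / 60.0 << " minutes ("
                  << seconds / 3600.0 << " hours)\n";
    }
}
```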

Compared to those '80s-era CPUs with piddling little RAM, modern desktop machines are absolute monsters. But of course the engines are much cleverer now too, and much better tuned for the target hardware, using a quantity of brute-force CPU cycles that only giant government agencies could call on in the '80s. Now we all have it on our desks and even in our phones! Amazing how times change.