Using Heinz in 2010 is not optimal

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Using Heinz in 2010 is not optimal

Post by Milos » Fri Jan 01, 2010 10:08 pm

Many are still following Heinz's results: R=2 at root, R=3 at leaves. It's simply wrong!
A few simple facts.
The depths at which this was established and tested were 8, 10 and 12.
Typical depths today are 15-20.
Even at 8, 10 and 12, the adaptive model R=2/3 worked best, so neither plain R=2 nor plain R=3 was optimal. Conclusion: the optimal R depends on depth.

It is also evident that R=3 is too small for today's depths. Maybe not in Crafty, but in a few other engines (SF, Romi, Robbo) it is evident.

Question for Bob: when you tested and concluded that R>3 doesn't work, what was your typical test depth? If you tested it like Heinz (8, 10, 12), your tests are meaningless.
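For reference, Heinz's adaptive R=2/3 scheme is usually described as R=2 by default, switching to R=3 when the remaining depth is large enough (his paper also conditions on the number of pieces per side). A minimal sketch, where the depth threshold of 6 is an illustrative assumption, not a tuned value:

```c
/* Adaptive null-move reduction in the spirit of Heinz's R=2/3 scheme.
 * The threshold (6) is an assumption for illustration; the original
 * formulation also considers material on the board. */
int null_move_R(int depth)
{
    return depth > 6 ? 3 : 2;   /* deeper subtrees tolerate a bigger cut */
}
```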

Aleks Peshkov
Posts: 870
Joined: Sun Nov 19, 2006 8:16 pm
Location: Russia

Re: Using Heinz in 2010 is not optimal

Post by Aleks Peshkov » Fri Jan 01, 2010 11:51 pm

Milos wrote: Many are still following Heinz's results. R=2 at root, R=3 at leaves. It's simply wrong!
1) It's R=2 at the leaves, R=3 at the rest.
2) Null-move is recursive, so the depth reduction can quickly compound to much more.
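The second point can be made concrete: each null-move search drops the remaining depth by R+1 (one ply for the null move itself plus the reduction), so nested null moves collapse the depth quickly. A toy calculation under that assumption, ignoring all other reductions:

```c
/* Remaining depth after `nulls` nested null-move searches, each
 * costing 1 ply (the null move) plus the reduction R.
 * Purely illustrative; real engines typically forbid two null
 * moves in a row by the same side. */
int depth_after_nulls(int depth, int R, int nulls)
{
    int d = depth;
    for (int i = 0; i < nulls; ++i)
        d -= (R + 1);
    return d < 0 ? 0 : d;
}
```

With a 15-ply search and R=3, two nested null moves already leave only 7 plies of real search.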

Uri Blass
Posts: 8558
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: Using Heinz in 2010 is not optimal

Post by Uri Blass » Sat Jan 02, 2010 3:55 am

Milos wrote: Many are still following Heinz's results: R=2 at root, R=3 at leaves. It's simply wrong!
A few simple facts.
The depths at which this was established and tested were 8, 10 and 12.
Typical depths today are 15-20.
Even at 8, 10 and 12, the adaptive model R=2/3 worked best, so neither plain R=2 nor plain R=3 was optimal. Conclusion: the optimal R depends on depth.

It is also evident that R=3 is too small for today's depths. Maybe not in Crafty, but in a few other engines (SF, Romi, Robbo) it is evident.

Question for Bob: when you tested and concluded that R>3 doesn't work, what was your typical test depth? If you tested it like Heinz (8, 10, 12), your tests are meaningless.
Some facts:

1) Bob did not use fixed depth.

2) The optimal R depends on the program and not only on the depth.

3) The difference between programs of today and programs of Heinz's time is not only the depth but also other factors.

It is possible that the optimal R is different for different evaluation functions or for different searches.
I believe that having checks in the qsearch increases the optimal R.
I believe that Crafty's R=3 became better than R=2/3 only after Bob added checks in the first ply of the qsearch.

4) I do not know if a bigger R at high depth is generally better, and if it is, then Bob could find that R=3/4 is better for Crafty than R=3.

I do not know what Bob tested, and it may be interesting if he reported results of R=3 (at depth < 12 plies) and R=4 (at depth >= 12 plies) to see if that is better or worse than plain R=3.

I assume that he did some tests to find that R=3 is better than R=4/3, but I do not know the maximal depth at which he tried R=4/3, and it is possible that he did not try R=4 only at depth >= n for an n that is big enough.


Another possible idea is to always use R=3 in the first N nodes of the search (for some constant N that may be 500,000 or a different number), and if you reach depth X after N nodes, then later use R=4 for depth > X.
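That node-budget idea could be sketched roughly as follows. All names, the budget value, and the decision to key R=4 off the remaining depth are assumptions for illustration, not a tested implementation:

```c
/* Sketch of the proposal: use R=3 during the first `budget` nodes,
 * remember the depth X reached at that point, then use R=4 only
 * when the remaining depth exceeds X. Everything here is illustrative. */
int pick_R(long nodes, long budget, int depth, int depth_at_budget)
{
    if (nodes < budget)
        return 3;                           /* calibration phase: plain R=3 */
    return depth > depth_at_budget ? 4 : 3; /* bigger cut only beyond X */
}
```

Calling code would count nodes, record the iteration depth X when the budget (e.g. 500,000 nodes) is exhausted, and pass both into the search.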

Uri

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: Using Heinz in 2010 is not optimal

Post by Milos » Sat Jan 02, 2010 2:35 pm

A few good points, and this:
Uri Blass wrote: Another possible idea is to always use R=3 in the first N nodes of the search (for some constant N that may be 500,000 or a different number), and if you reach depth X after N nodes, then later use R=4 for depth > X
might be especially useful, since in endgames, where the depth climbs quite easily, having a large R can really hurt.
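One way to guard against exactly that endgame risk is to cap the reduction when material is low, which is also in the spirit of the material condition in Heinz's original scheme. The piece thresholds below are illustrative assumptions, not values from any particular engine:

```c
/* Keep the reduction conservative when material is low: shallow null
 * searches miss zugzwang and deep tactics most often in endings.
 * Thresholds are illustrative only. */
int endgame_aware_R(int depth, int min_pieces_per_side)
{
    int R = depth > 6 ? 3 : 2;      /* depth-adaptive base, Heinz-style */
    if (min_pieces_per_side <= 2)   /* near-bare-bones ending */
        R = 2;
    return R;
}
```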

bob
Posts: 20478
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Using Heinz in 2010 is not optimal

Post by bob » Sun Jan 03, 2010 7:11 pm

Milos wrote:Many are still following Heinz results. R=2 at root, R=3 at leaves. It's simply wrong!
Few simple facts.
The depths when this was established and tested were 8, 10 and 12.
Typical depths today are 15-20.
Even with 8, 10, 12, adaptive model R=2/3 worked the best. So, not only 2 or 3 was the best. Conclusion, optimal R depends on depth.

It is also evident that R=3 is too small for today's depths. Maybe not on Crafty but on few other engines (SF, Romi, Robbo) it is evident.

Question for Bob, when you tested and concluded that R>3 doesn't work, what was your typical test depth? If you tested it like Heinz (8, 10, 12), your tests are meaningless.
First, I assume you mean R=3 near the root and R=2 near the tips, which is what adaptive null-move does?

I tried it at up to 60+60 time controls, because I wanted to see deeper depths. 60+60 typically turns into 20+ plies.

The problem with null-move is not the positions near the root; the problem is the positions near the tips. There are a _lot_ of them. Shallow searches overlook things and cause erroneous null-move fail-highs, and there are so many of these that some back up and influence the score for the best move. Null-move searches near the root are far less problematic, but a large R makes the deep positions produce less reliable information.
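The claim that near-tip positions vastly outnumber near-root ones is easy to quantify: in a uniform tree of branching factor b and height h, the number of nodes with r plies of remaining depth is b^(h-r), which grows geometrically as r shrinks. A toy count, with b and h as illustrative assumptions:

```c
/* Nodes at remaining depth r in a uniform tree of branching factor b
 * and height h: b^(h - r). Shows why erroneous null-move fail-highs
 * near the tips dominate: there are geometrically more tip-adjacent
 * positions than near-root ones. */
long nodes_with_remaining_depth(long b, int h, int r)
{
    long n = 1;
    for (int i = 0; i < h - r; ++i)
        n *= b;
    return n;
}
```

Even with an (unrealistically small) effective branching factor of 2 and a 20-ply tree, there are 262,144 positions with 2 plies remaining versus 4 positions with 18 plies remaining.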

Milos
Posts: 3387
Joined: Wed Nov 25, 2009 12:47 am

Re: Using Heinz in 2010 is not optimal

Post by Milos » Sun Jan 03, 2010 9:07 pm

bob wrote: First, I assume you mean R=3 near the root and R=2 near the tips, which is what adaptive null-move does?
Yes, Aleks already corrected me.
The problem with null-move is not the positions near the root; the problem is the positions near the tips. There are a _lot_ of them. Shallow searches overlook things and cause erroneous null-move fail-highs, and there are so many of these that some back up and influence the score for the best move. Null-move searches near the root are far less problematic, but a large R makes the deep positions produce less reliable information.
Cutting off more at the root gives more savings, and the potential damage is not so high ("less reliable information" is just a guess); that's exactly why larger R should be experimented with at really high depths (>20).
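A reduction that keeps growing with depth instead of capping at 3 could look like this. The base, divisor, and clamp are illustrative assumptions to be tuned by testing, not anyone's established formula:

```c
/* Reduction that grows with remaining depth instead of stopping at 3.
 * The constants are illustrative; in practice they would be tuned
 * by self-play testing. */
int scaled_R(int depth)
{
    int R = 2 + depth / 8;   /* 2 at shallow depths, 4 at depth 16, 5 at 24 */
    return R > 5 ? 5 : R;    /* clamp so the null search keeps some depth */
}
```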

BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 3:18 am

Re: Using Heinz in 2010 is not optimal

Post by BubbaTough » Mon Jan 04, 2010 12:02 am

Uri Blass wrote: It is possible that the optimal R is different for different evaluation function or for different searches.
Uri
This is very obviously true (as I know you know) to anyone who has written their own chess program. How many times have you tried something that works for others and found it does not work for you? For original program authors, the answer is usually in the hundreds or thousands, I would think. In fact, you could almost use this as a litmus test: anyone who does not believe this is probably not the author of an original program :twisted:.

-Sam
