Optimizing internal parameters - for Bob

JohnS
Posts: 215
Joined: Sun Feb 24, 2008 2:08 am

Optimizing internal parameters - for Bob

Post by JohnS »

To Bob and other programmers:

Are the values of your internal parameters sensitive to the available search time?

For example, suppose the time control/hardware combination enables an average search depth of 15 ply. In this case, you find that a value of parameter_x = 100 is the optimum. Is it possible that for depth = 10, this value is no longer optimal and it should be replaced by parameter_x = 200? In this case, parameter_x is very sensitive to depth and I presume this makes optimizing hard. On the other hand, if in the second case the optimum is still 100 or say 102, it's not sensitive to depth and the issue doesn't arise.

Has anyone looked at this?
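To make the scenario above concrete, here is a minimal sketch of one way a depth-sensitive parameter could be handled, by interpolating on the nominal depth the time control allows instead of tuning a single constant. The function name, the anchor depths 10 and 15, and the values 200 and 100 are taken from the hypothetical example above, not from any real engine.

Code: Select all

#include <stdio.h>

/* Illustration only: parameter_x and the anchor points below come from the
   hypothetical in the question, not from any actual engine's tuning data. */
static int parameter_x_for_depth(int nominal_depth)
{
    const int d_lo = 10, v_lo = 200;  /* assumed optimum of 200 at 10 ply */
    const int d_hi = 15, v_hi = 100;  /* assumed optimum of 100 at 15 ply */

    if (nominal_depth <= d_lo) return v_lo;
    if (nominal_depth >= d_hi) return v_hi;

    /* Linear interpolation between the two tuned anchor points. */
    return v_lo + (v_hi - v_lo) * (nominal_depth - d_lo) / (d_hi - d_lo);
}

int main(void)
{
    for (int d = 8; d <= 17; d++)
        printf("depth %2d -> parameter_x = %d\n", d, parameter_x_for_depth(d));
    return 0;
}

The point is only that if the optimum really does drift with depth, a single constant tuned at one time control can cost strength at another.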
zullil
Posts: 6442
Joined: Tue Jan 09, 2007 12:31 am
Location: PA USA
Full name: Louis Zulli

Re: Optimizing internal parameters - for Bob

Post by zullil »

You might do better if you post this here.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Optimizing internal parameters - for Bob

Post by bob »

JohnS wrote: To Bob and other programmers:

Are the values of your internal parameters sensitive to the available search time?
Hard to say definitively. But I test at very short time controls, and occasionally run verifications at longer time controls, and I am not seeing any difference in playing skill when comparing an old and a new Crafty, regardless of the time control. Now if you ask "does Crafty play better or worse compared to its opponents as time controls get longer or shorter?" then the answer is certainly yes. But this is not from tuning. There are lots of parts to a chess engine, including the simple part of determining how much time to use and when to use extra time. This can behave differently at different time controls in ways that are completely unexpected.
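As a rough illustration of the time-allocation piece mentioned above, here is a generic sketch of how an engine might compute a per-move budget. This is not Crafty's actual code; every name and constant is invented for the example.

Code: Select all

#include <stdio.h>

typedef struct {
    long target_ms;   /* time we aim to spend on this move                 */
    long maximum_ms;  /* hard cap when the position calls for extra time   */
} TimeBudget;

/* Generic sketch, not taken from any real engine. */
static TimeBudget allocate_time(long remaining_ms, long increment_ms, int movestogo)
{
    TimeBudget tb;

    /* Sudden death: pretend roughly 30 moves remain. */
    if (movestogo <= 0)
        movestogo = 30;

    /* Base share of the clock plus most of the increment. */
    tb.target_ms = remaining_ms / movestogo + (increment_ms * 3) / 4;

    /* Allow a few times the target for "extra time" situations, but never
       burn more than half of what is left on a single move.              */
    tb.maximum_ms = tb.target_ms * 4;
    if (tb.maximum_ms > remaining_ms / 2)
        tb.maximum_ms = remaining_ms / 2;

    return tb;
}

int main(void)
{
    /* 5 minutes + 3 seconds increment, sudden death. */
    TimeBudget tb = allocate_time(300000, 3000, 0);
    printf("target %ld ms, max %ld ms\n", tb.target_ms, tb.maximum_ms);
    return 0;
}

Changing any of these ratios can shift behavior far more at one time control (say, sudden death with no increment) than at another, which is the kind of unexpected interaction being described.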


For example, suppose the time control/hardware combination enables an average search depth of 15 ply. In this case, you find that a value of parameter_x = 100 is the optimum. Is it possible that for depth = 10, this value is no longer optimal and it should be replaced by parameter_x = 200? In this case, parameter_x is very sensitive to depth and I presume this makes optimizing hard. On the other hand, if in the second case the optimum is still 100 or say 102, it's not sensitive to depth and the issue doesn't arise.

Has anyone looked at this?
We can't do it for every test. But we did do it for quite a few when we first started, to determine if we needed to play games at varying time controls specifically to avoid this problem. We have discovered that if we change time allocation, we do need to try various time controls (no increment, with increment, sudden death, repeating time controls) to make sure that the changes do not negatively impact play at some speeds as opposed to others.
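A minimal sketch of the kind of time-control matrix such a verification run might cover; the category labels match the ones listed above, while the specific numbers and the struct layout are invented for illustration.

Code: Select all

#include <stdio.h>

struct TimeControl {
    const char *label;
    int moves_per_session;   /* 0 = sudden death (all moves in the base time) */
    int base_seconds;
    int increment_seconds;
};

/* Hypothetical test matrix; values chosen only for the example. */
static const struct TimeControl test_matrix[] = {
    { "sudden death, no increment",   0,  60, 0 },
    { "sudden death, with increment", 0,  60, 1 },
    { "repeating, no increment",     40, 120, 0 },
    { "repeating, with increment",   40, 120, 1 },
};

int main(void)
{
    /* A verification run would play the old and the new version at every
       entry, looking for a category where the change costs strength.    */
    for (size_t i = 0; i < sizeof test_matrix / sizeof test_matrix[0]; i++)
        printf("%-30s  %d moves / %ds + %ds\n",
               test_matrix[i].label,
               test_matrix[i].moves_per_session,
               test_matrix[i].base_seconds,
               test_matrix[i].increment_seconds);
    return 0;
}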

You can clearly tune an evaluation to play fast chess better, if that is the goal. We've tried to avoid that, since we don't just play very fast games.