Why computing K that minimizes the sigmoid func. value?...

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

cdani
Posts: 2104
Joined: Sat Jan 18, 2014 9:24 am
Location: Andorra
Contact:

Re: Why computing K that minimizes the sigmoid func. value?.

Post by cdani » Mon Jan 11, 2016 7:36 am

tpetzke wrote:
Also when you look at CCRL 40/40 and CCRL 40/4 it is clear something has gone wrong, as 0.84 has a higher rating (on one CPU, for the moment) at 40/4, when the objective is the other way around, and not by a small margin but by at least 30 Elo.
But another interpretation could be that your eval is superior to that of your opponents, so in short TC matches (where the eval is under more pressure) your engine is very strong. In long TC matches your opponents can compensate a bit for their worse eval by searching longer (the eval is under somewhat less pressure).

As I don't have the resources to test at long TC anyway I don't care. I tune at short TC and at long TC it is as it is.
When you accept the patches that win more at LTC, you end up with an engine that is stronger at LTC.

I was not cautious enough, at least with the king safety and passed pawn parameters, and as a result Andscacs 0.84 shows clearly bad tuning and loses some games quite easily, something that I think happened less with the previous version.

The verification of this is that now, as I retune those parameters by hand, at least with some of them I'm winning some strength back just by returning to the old values or near those old values, so those parameters were badly tuned automatically.

Of course I cannot be absolutely sure of this, because I cannot really test at the longest time controls, but I describe what I have achieved working like this with previous versions.

Many times you see the effect with really fast time control games. So for example you can test at 5 seconds + 0.03 and then at 11 + 0.03, and you already see that at 11 the change is less bad than at 5, or, if you are lucky, that it is already good. Sometimes this does not scale further, so if it is less bad at 11, better to try at 20 or more if you are able to, or if not, better to discard the patch.
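A quick way to quantify "less bad" at each time control is to convert each match score into an Elo estimate with the standard logistic model. This is only a sketch; the function name and the match results below are made up for illustration, not taken from any engine or rating list:

```python
import math

def elo_from_score(wins, draws, losses):
    # Logistic Elo estimate from a match result.
    # Undefined at a 0% or 100% score (log10 diverges).
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    return -400.0 * math.log10(1.0 / score - 1.0)

# Hypothetical results for the same patch at two time controls:
# clearly negative at 5s+0.03s, roughly even at 11s+0.03s,
# which is the "less bad at the longer TC" pattern described above.
fast_tc_elo = elo_from_score(90, 120, 110)   # 5s+0.03s match
slow_tc_elo = elo_from_score(98, 124, 98)    # 11s+0.03s match
```

If the estimate keeps improving as the time control grows, the patch may scale; if it only recovers to even, discarding it is the safer call.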

tpetzke
Posts: 686
Joined: Thu Mar 03, 2011 3:57 pm
Location: Germany
Contact:

Re: Why computing K that minimizes the sigmoid func. value?.

Post by tpetzke » Mon Jan 11, 2016 7:52 am

Many times you see the effect with really fast time control games. So for example you can test at 5 seconds + 0.03 and then at 11 + 0.03, and you already see that at 11 the change is less bad than at 5, ...
If the patch is bad at 5+0.03 I already throw it away. If it is a simplification and performs equally, it is kept. Other patches that perform equally but add code are also thrown away.

Only patches related directly to TC are tested with different TCs.

But as I said, I don't do it because I think this method is superior (it probably is not); I do it because of my limited resources.

I find endgame-related terms (like passed pawns) harder to tune than the others, as many games end or are decided before the term is ever activated; the tuner, however, still thinks it can use the result of the game to adjust the weight of that term.

There is no easy fix for that in my framework, except for using only games that start from balanced endgame positions (which are also not easy to find).
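For context on the thread title: in Texel-style tuning, K is the scaling constant of the sigmoid that maps a centipawn eval to an expected score, and it is chosen to minimize the mean squared error against actual game results before the eval weights themselves are tuned. A minimal sketch, with illustrative function names and toy data (not code from iCE or Andscacs):

```python
def sigmoid(score_cp, K):
    # Map a centipawn eval to an expected score in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

def texel_error(positions, K):
    # Mean squared error between game results and predicted scores.
    # positions: list of (eval_cp, result) with result in {0, 0.5, 1}.
    return sum((r - sigmoid(e, K)) ** 2 for e, r in positions) / len(positions)

def find_best_K(positions, lo=0.1, hi=3.0, steps=30):
    # Ternary search for the K that minimizes the error
    # (the error is well-behaved in K for typical data sets).
    for _ in range(steps):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if texel_error(positions, m1) < texel_error(positions, m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

The endgame-term problem described above shows up here: a position where, say, the passed pawn term never fires contributes nothing to that term's gradient, yet the game result still pulls on every weight that did fire, which is why results from games decided long before the endgame can mistune endgame terms.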
Thomas...

=======
http://macechess.blogspot.com - iCE Chess Engine
