Optimization algorithm for the Texel/Gaviota tuning method
Moderator: Ras
-
- Posts: 154
- Joined: Thu Oct 03, 2013 4:17 pm
Re: Optimization algorithm for the Texel/Gaviota tuning meth
I'm not sure what you mean; column F is the engine evaluation. In this case our evaluation function only has one parameter, but there is no difference between W having one entry and W having 1000 entries: the evaluation is still eval(p) = sum(i = 1..n, W_i * existsWeight(W_i, p)). If you only want to tune one parameter you can leave the others fixed, but every parameter your evaluation function uses should still be part of W. Oh, and K should probably be bigger as well; with K = 0.5 convergence over here is far too slow to be practical. This is why I recommend some adaptive strategy, something like multiplying K by 1.1 after every successful iteration and by 0.5 after every failed iteration (and of course, if an iteration fails, W should be restored to the last successful update).
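A minimal sketch of that adaptive step-size loop, assuming hypothetical helpers computeError(W) (the tuning error over the test positions) and computeGradient(W) (the partial derivatives dE/dW_i); neither name comes from the post above:

```cpp
#include <cstddef>
#include <vector>

// Assumed helpers, supplied by the tuner:
//   computeError(W)    -> tuning error E(W) over the test positions
//   computeGradient(W) -> partial derivatives dE/dW_i
double computeError(const std::vector<double>& W);
std::vector<double> computeGradient(const std::vector<double>& W);

void tune(std::vector<double>& W)
{
    double K = 0.5;                        // step size, adapted as we go
    double bestError = computeError(W);

    for (int iter = 0; iter < 10000; ++iter) {
        std::vector<double> grad = computeGradient(W);
        std::vector<double> candidate = W;

        // One gradient-descent step scaled by K.
        for (std::size_t i = 0; i < candidate.size(); ++i)
            candidate[i] -= K * grad[i];

        double error = computeError(candidate);
        if (error < bestError) {
            W = candidate;                 // keep the improvement
            bestError = error;
            K *= 1.1;                      // successful iteration: larger steps
        } else {
            K *= 0.5;                      // failed iteration: W stays at the
        }                                  // last successful update, smaller steps
    }
}
```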
Functional programming combines the flexibility and power of abstract mathematics with the intuitive clarity of abstract mathematics.
https://github.com/mAarnos
-
- Posts: 4846
- Joined: Sun Aug 10, 2008 3:15 pm
- Location: Philippines
Re: Optimization algorithm for the Texel/Gaviota tuning meth
Mikko wrote: I'm not sure what you mean; column F is the engine evaluation.
I have presented the table; which item are you not sure of?

Mikko wrote: In this case our evaluation function only has one parameter...
No, what I mean here is that we will tune one parameter, which has a default value of 120 cp. Our evaluation function of course has many parameters.

Mikko wrote: ...but there is no difference between W having one entry and W having 1000 entries: the evaluation is still eval(p) = sum(i = 1..n, W_i * existsWeight(W_i, p)). If you only want to tune one parameter you can leave the others fixed, but every parameter your evaluation function uses should still be part of W.
Could you illustrate how you would do it if you tuned 2 parameters? Say we add a new parameter PawnValue with a default value of 100. Putting it in a table, just like what I did, would of course be better.

Mikko wrote: Oh, and K should probably be bigger as well; with K = 0.5 convergence over here is far too slow to be practical. This is why I recommend some adaptive strategy, something like multiplying K by 1.1 after every successful iteration and by 0.5 after every failed iteration (and of course, if an iteration fails, W should be restored to the last successful update).
Right, I get that. I am not interested in whether this will optimize or not; I am interested in how the gradient descent is done.
-
- Posts: 154
- Joined: Thu Oct 03, 2013 4:17 pm
Re: Optimization algorithm for the Texel/Gaviota tuning meth
Here we have an evaluation function that has only two parameters, passedPawn and pawnValue. We compute gradients for both and update both. If you only want to update one of them, set the gradient of the one you do not want to tune to 0. This extends easily to as many parameters as you want.
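A minimal sketch of that idea, assuming a hypothetical tuning-error function E(passedPawn, pawnValue) and a central finite difference for the gradients (a real implementation may compute them analytically):

```cpp
// Assumed: tuning error E as a function of the two evaluation parameters.
double E(double passedPawn, double pawnValue);

struct Gradient { double dPassedPawn, dPawnValue; };

// Central finite-difference approximation of the gradient of E.
Gradient computeGradient(double passedPawn, double pawnValue, double h = 1.0)
{
    Gradient g;
    g.dPassedPawn = (E(passedPawn + h, pawnValue) - E(passedPawn - h, pawnValue)) / (2.0 * h);
    g.dPawnValue  = (E(passedPawn, pawnValue + h) - E(passedPawn, pawnValue - h)) / (2.0 * h);
    return g;
}

// One update step with step size K. Pass tunePawnValue = false to leave
// pawnValue untouched by zeroing its gradient, as described above.
void step(double& passedPawn, double& pawnValue, double K, bool tunePawnValue)
{
    Gradient g = computeGradient(passedPawn, pawnValue);
    if (!tunePawnValue)
        g.dPawnValue = 0.0;
    passedPawn -= K * g.dPassedPawn;
    pawnValue  -= K * g.dPawnValue;
}
```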


Functional programming combines the flexibility and power of abstract mathematics with the intuitive clarity of abstract mathematics.
https://github.com/mAarnos
-
- Posts: 4846
- Joined: Sun Aug 10, 2008 3:15 pm
- Location: Philippines
Re: Optimization algorithm for the Texel/Gaviota tuning meth
That is excellent, thank you.