## Texel tuning method question

Discussion of chess software programming and technical issues.

Moderators: Harvey Williamson, Dann Corbit, hgm

Cheney
Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

### Re: Texel tuning method question

I'm sorry, but I have not worked with derivatives (knowingly) since 1990; I only recall that they are used to figure out the rate of change of a function, or something like that. That being said, I need to let what you wrote sink in, dust off my old calculus books, and read some more online.

However, since it is apparent I do not fully understand what you wrote, let me share this sample code showing what I think I need to do next, in case my original post is misleading:

Code: Select all

``````bestK = 1.13      // I calculated the best K
bestE = 0.145667  // This is the minimum E when K was determined
delta = 10

oP1 = eval.pawn1 // A parameter we want to adjust and test
oR1 = eval.rook1 // Another parameter

for p1 = oP1-delta; p1 < oP1+delta; p1++
    for p2 = oR1-delta; p2 < oR1+delta; p2++
        // Set the penalty/bonus value of the eval parameters
        eval.pawn1 = p1
        eval.rook1 = p2

        // Run the process of loading FENs, doing qSearch, and calculating sigmoid and E
        E = DoQSearchAndGetE()

        // Preserve the parameters if E was lowered.
        if E < bestE
            // these parameters lowered E, save them
            bestP1 = p1
            bestP2 = p2
            bestE = E
        end if
    next p2
next p1
``````
This is how I see testing changes to parameters to look for the best combination that lowers E. In the above sample, the two loops account for 400 tests (20 values for each parameter). The more parameters I want to tune, the more time becomes a factor (unless I somehow multi-thread this).

Is my thinking about the process correct? Maybe I do not need a loop? I did just look at the pseudocode on CPW for the "local search" and, if I am reading it right, it only adjusts a parameter by 1?

As for RuyTune and your offer to help, I did review the CPW writeup, went through some other posts about it, and glanced at the code. At this time, I'd like to continue to work on this method since I have put a fair amount of time into it. Once done, I'd like to take a deeper look at RuyTune.

AlvaroBegue
Posts: 925
Joined: Tue Mar 09, 2010 2:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

### Re: Texel tuning method question

The code you posted is a brute-force search of all parameter settings. This doesn't scale very well. If you have 2 parameters to adjust, you are measuring a function at 400 different points (20^2). If you have 10 parameters to adjust, you'll never finish.

An alternative idea is to consider your function E as a function of several variables. The case of 2 variables is easy to visualize. Imagine you are standing on some terrain and trying to get to the lowest point possible. Just by looking a little bit around you, you can figure out the direction of steepest descent from where you are and take a step in that direction. Repeat this many times over, until you get to the bottom of some pit, where there is no direction in which you can continue to descend.

This is called "gradient descent", since the gradient (which is what reverse-mode automatic differentiation computes) tells you the direction of steepest descent. There are many, many variations on this idea. Maybe with this initial explanation you can make some sense of what RuyTune is about.
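
To make that concrete, here is a small, self-contained sketch of gradient descent on a Texel-style error. This is not RuyTune's actual code; the toy feature vectors, learning rate, and step count are invented for illustration, and it uses finite differences instead of automatic differentiation:

```python
import math

# Toy training set: (feature vector, game result) pairs. In real Texel
# tuning, the scores would come from quiescence-resolved positions.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 0.5)]
K = 1.13

def sigmoid(score):
    # Winning probability predicted from a centipawn-scale score
    return 1.0 / (1.0 + 10.0 ** (-K * score / 400.0))

def E(w):
    # Mean squared error between game results and predictions
    return sum((r - sigmoid(sum(wi * fi for wi, fi in zip(w, f)))) ** 2
               for f, r in data) / len(data)

def gradient(w, h=1e-4):
    # Central finite differences; reverse-mode autodiff would give the
    # same vector in one pass instead of 2*len(w) evaluations of E
    g = []
    for i in range(len(w)):
        wp, wm = w[:], w[:]
        wp[i] += h
        wm[i] -= h
        g.append((E(wp) - E(wm)) / (2.0 * h))
    return g

w = [100.0, 100.0]  # initial parameter guesses (centipawns)
for _ in range(500):
    g = gradient(w)
    w = [wi - 1000.0 * gi for wi, gi in zip(w, g)]  # step downhill
```

Each iteration moves every parameter a little against the gradient, so E shrinks steadily instead of being probed one integer step at a time.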

Anyway, feel free to contact me whenever you want to give RuyTune a try.

Ferdy
Posts: 4309
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

### Re: Texel tuning method question

Sample basic Texel tuning code in python.

Tune.py

Code: Select all

``````"""
Sample Texel tuning in Python

Tested on Python 3.6
"""

import sys
import random
from copy import deepcopy
import time

MAXBESTE = 1000.0

def DoQSearchAndGetE(K):
    """ Just return random for now """
    time.sleep(0.5)  # Sleep for 0.5s for simulation purposes
    return random.randint(14400, 14600) / 100000.0

def SendParamToEngine(param):
    """ Set the param in the engine, pass for now """
    pass

def Tune(paramToOptimize, initBestE, delta, K, iLog, cycleLimit):
    """ paramToOptimize is a list of lists,
        initBestE is a float,
        delta is a list
    """
    goodCnt = 1
    goodCntOld = 0
    lastBestE = initBestE
    lastBestParam = []

    for g in range(cycleLimit):  # g = 0, 1, 2, ... cycleLimit-1
        if iLog:
            print('Tune >> param cycle: %d' % (g+1))

        # Exit tuning if goodCnt has not increased
        if goodCnt <= goodCntOld:
            if iLog:
                print('Tune >> bestE has not been improved, exiting the tuning now ...')
            break
        goodCntOld = goodCnt

        if len(lastBestParam):
            paramToOptimize = deepcopy(lastBestParam)

        for p in paramToOptimize:
            pName = p[0]
            pValue = p[1]

            for d in delta:
                dValue = pValue + d
                if iLog:
                    print('Tune >> param to optimize: %s' % (pName))
                    print('Tune >> delta: %+d' % (d))

                # Create a new param set instead of paramToOptimize.
                # This is the set that will be sent to the engine.
                paramToBeTested = []

                for a in paramToOptimize:
                    if a[0] == pName:
                        a[1] = dValue
                    paramToBeTested.append([a[0], a[1]])

                if iLog:
                    print('Tune >> paramSet to try: %s' % (paramToBeTested))
                    print('Tune >> Send this set to engine')

                # Send param values to the engine
                SendParamToEngine(paramToBeTested)

                if iLog:
                    print('Tune >> lastBestE: %0.5f' % (lastBestE))
                    print('Tune >> Calculate E')

                E = DoQSearchAndGetE(K)

                if iLog:
                    print('Tune >> CalculatedE: %0.5f' % (E))

                if E < lastBestE:
                    goodCnt += 1
                    lastBestE = E

                    if iLog:
                        print('Tune >> NewBestE: %0.5f' % (lastBestE))
                        print('Tune >> CalculatedE is good -------- !!\n')

                    lastBestParam = deepcopy(paramToBeTested)

                    # Get out of 'for delta' and try the next p
                    break
                else:
                    if iLog:
                        print('Tune >> CalculatedE is not good ----- ?\n')

        # Log if we have reached the cycle limit
        if g == cycleLimit-1:
            if iLog:
                print('Tune >> param cycle limit has been reached, exiting tuning now ...')

    return lastBestE, lastBestParam

def main(argv):
    bestK = 1.13
    bestESTart = MAXBESTE
    enableLog = True
    paramCycleLimit = 1000

    paramToOptimize = [
        ['pawn', 100],
        ['knight', 300],
        ['bishop', 300]
    ]
    delta = [+1, -1]

    # Show init values
    print('origBestE       : %0.5f' % (bestESTart))
    print('origParam       : %s' % (paramToOptimize))
    print('bestK           : %0.3f' % (bestK))
    print('delta           : %s' % (delta))
    print('paramCycleLimit : %d' % (paramCycleLimit))
    print('EnableLogging   : %s\n' % ('On' if enableLog else 'Off'))

    t1 = time.clock()

    # Run the tuner
    optiE, optiParam = Tune(paramToOptimize, bestESTart, delta, bestK, enableLog, paramCycleLimit)

    t2 = time.clock()

    print('\nbestE   : %0.5f' % (optiE))
    print('bestParam : %s' % (optiParam))
    print('Elapsed   : %ds' % (t2-t1))

if __name__ == "__main__":
    main(sys.argv[1:])``````
Calculation of E is done with a random number generator for simulation purposes.
Sample output:

Code: Select all

``````origBestE       : 1000.00000
origParam       : [['pawn', 100], ['knight', 300], ['bishop', 300]]
bestK           : 1.130
delta           : [1, -1]
paramCycleLimit : 1000
EnableLogging   : On

Tune >> param cycle: 1
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 1000.00000
Tune >> Calculate E
Tune >> CalculatedE: 0.14514
Tune >> NewBestE: 0.14514
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 301], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14514
Tune >> Calculate E
Tune >> CalculatedE: 0.14517
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14514
Tune >> Calculate E
Tune >> CalculatedE: 0.14462
Tune >> NewBestE: 0.14462
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14560
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14596
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 2
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 102], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14543
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14435
Tune >> NewBestE: 0.14435
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14496
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14599
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14575
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14457
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 3
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14533
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14448
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14555
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14553
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14553
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14419
Tune >> NewBestE: 0.14419
Tune >> CalculatedE is good -------- !!

Tune >> param cycle: 4
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14419
Tune >> Calculate E
Tune >> CalculatedE: 0.14416
Tune >> NewBestE: 0.14416
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14444
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14539
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14510
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14425
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 5
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14557
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14561
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14535
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14404
Tune >> NewBestE: 0.14404
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14599
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14518
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 6
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14426
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14437
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 98], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14475
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14519
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14522
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14477
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 7
Tune >> bestE has not been improved, exiting the tuning now ...

bestE   : 0.14404
bestParam : [['pawn', 99], ['knight', 297], ['bishop', 299]]
Elapsed   : 18s``````
So you get your best params and best error:
bestE : 0.14404
bestParam : [['pawn', 99], ['knight', 297], ['bishop', 299]]

In certain situations you can stop the tuning and record the best found so far. Remember the best error and params. When you resume the tuning, you can use the last best tuning info, so your next run's input, for example, would be:

Code: Select all

``````bestESTart = 0.14404
paramToOptimize = [
    ['pawn', 99],
    ['knight', 297],
    ['bishop', 299]
]``````
At a point where there is a successful reduction of the error, like:

Code: Select all

``````Tune >> param cycle: 4
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14419
Tune >> Calculate E
Tune >> CalculatedE: 0.14416
Tune >> NewBestE: 0.14416
Tune >> CalculatedE is good -------- !!``````
Remember the params tuned:
[['pawn', 100], ['knight', 298], ['bishop', 299]]
You can use these to test your engine in actual matches; this is called sampling. It can happen that those values actually perform well in actual game tests.

It is good to experiment with varied delta arrays, like:

Code: Select all

``delta = [+1, -1, +2, -2, +3, -3]``
Expand the param list if you like.

Code: Select all

``````paramToOptimize = [
    ['pawn', 100],
    ['knight', 300],
    ['bishop', 300],
    ['rook', 500],
    ['queen', 1000]
]``````

Ferdy
Posts: 4309
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

### Code bug fix

Tune.py

Code: Select all

``````"""
Sample Texel tuning in Python

Tested on Python 3.6
"""

import sys
import random
from copy import deepcopy
import time

MAXBESTE = 1000.0

def DoQSearchAndGetE(K):
    """ Just return random for now """
    time.sleep(0.5)  # Sleep for 0.5s for simulation purposes
    return random.randint(14400, 14600) / 100000.0

def SendParamToEngine(param):
    """ Set the param in the engine, pass for now """
    pass

def Tune(paramToOptimize, initBestE, delta, K, iLog, cycleLimit):
    """ paramToOptimize is a list of lists,
        initBestE is a float,
        delta is a list
    """
    goodCnt = 1
    goodCntOld = 0
    lastBestE = initBestE
    lastBestParam = []

    for g in range(cycleLimit):  # g = 0, 1, 2, ... cycleLimit-1
        if iLog:
            print('Tune >> param cycle: %d' % (g+1))

        # Exit tuning if goodCnt has not increased
        if goodCnt <= goodCntOld:
            if iLog:
                print('Tune >> bestE has not been improved, exiting the tuning now ...')
            break
        goodCntOld = goodCnt

        # Update paramToOptimize with lastBestParam
        if len(lastBestParam):
            paramToOptimize = deepcopy(lastBestParam)

        for p in paramToOptimize:
            pName = p[0]
            pValue = p[1]

            for d in delta:
                dValue = pValue + d
                if iLog:
                    print('Tune >> param to optimize: %s' % (pName))
                    print('Tune >> delta: %+d' % (d))

                # Update paramToOptimize with lastBestParam
                if len(lastBestParam):
                    paramToOptimize = deepcopy(lastBestParam)

                # Create a new param set instead of paramToOptimize.
                # This is the set that will be sent to the engine.
                paramToBeTested = []

                for a in paramToOptimize:
                    if a[0] == pName:
                        a[1] = dValue
                    paramToBeTested.append([a[0], a[1]])

                if iLog:
                    print('Tune >> paramSet to try: %s' % (paramToBeTested))
                    print('Tune >> Send this set to engine')

                # Send param values to the engine
                SendParamToEngine(paramToBeTested)

                if iLog:
                    print('Tune >> lastBestE: %0.5f' % (lastBestE))
                    print('Tune >> Calculate E')

                E = DoQSearchAndGetE(K)

                if iLog:
                    print('Tune >> CalculatedE: %0.5f' % (E))

                if E < lastBestE:
                    goodCnt += 1
                    lastBestE = E

                    if iLog:
                        print('Tune >> NewBestE: %0.5f' % (lastBestE))
                        print('Tune >> CalculatedE is good -------- !!\n')

                    lastBestParam = deepcopy(paramToBeTested)

                    # Get out of 'for delta' and try the next p
                    break
                else:
                    if iLog:
                        print('Tune >> CalculatedE is not good ----- ?\n')

        # Log if we have reached the cycle limit
        if g == cycleLimit-1:
            if iLog:
                print('Tune >> param cycle limit has been reached, exiting tuning now ...')

    return lastBestE, lastBestParam

def main(argv):
    bestK = 1.13
    bestESTart = MAXBESTE
    enableLog = True
    paramCycleLimit = 1000

    paramToOptimize = [
        ['pawn', 100],
        ['knight', 300],
        ['bishop', 300]
    ]
    delta = [+1, -1]

    # Show init values
    print('origBestE       : %0.5f' % (bestESTart))
    print('origParam       : %s' % (paramToOptimize))
    print('bestK           : %0.3f' % (bestK))
    print('delta           : %s' % (delta))
    print('paramCycleLimit : %d' % (paramCycleLimit))
    print('EnableLogging   : %s\n' % ('On' if enableLog else 'Off'))

    t1 = time.clock()

    # Run the tuner
    optiE, optiParam = Tune(paramToOptimize, bestESTart, delta, bestK, enableLog, paramCycleLimit)

    t2 = time.clock()

    print('\nbestE   : %0.5f' % (optiE))
    print('bestParam : %s' % (optiParam))
    print('Elapsed   : %ds' % (t2-t1))

if __name__ == "__main__":
    main(sys.argv[1:])``````

Code: Select all

``````# Update paramToOptimize with lastBestParam
if len&#40;lastBestParam&#41;&#58;
paramToOptimize = deepcopy&#40;lastBestParam&#41;``````
The fix: this block is now also applied inside the delta loop:

for d in delta:
    dValue = pValue + d
    if iLog:
        print('Tune >> param to optimize: %s' % (pName))
        print('Tune >> delta: %+d' % (d))

    # Update paramToOptimize with lastBestParam
    if len(lastBestParam):
        paramToOptimize = deepcopy(lastBestParam)

    # Create a new param set instead of paramToOptimize.
    # This is the set that will be sent to the engine.
    paramToBeTested = []

Cheney
Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

### Re: Texel tuning method question

Yes, it is brute force, and I know that would take a long time. However, your explanation of gradient descent was spot on!

I started to use that concept and, after some time, I have some tuned numbers. I am testing with about 3M positions now and have eight parameters I want to play with (all pawn items, like isolated, doubled, backward, and chained, for mg and eg). I know the values I currently have are good, so I let the tuning process start with those. To "walk" the parameters, I'd increase the first by one and test; if E is not lower, I'd decrease the original value by 1. I'd then repeat this for the next parameter, and so on until there are no improvements, recording the improvements along the way. A question on this: can I walk one parameter until it no longer improves and then walk the next one? I figure there would be a point where earlier parameters are no longer optimal. I need to test this.

Some of the parameters stay close to the values I originally hand-tuned, while others go somewhere I did not expect, e.g., the chained pawn in mg gets a negative "bonus".

I tested the new values by having that engine play the hand-tuned engine; the auto-tuned version loses, but only by a few Elo over 10K games.

I then zeroed out all my pawn parameters and started over. The new auto-tuned values are still close to the previous auto-tuned values (some might be off by +/- 1). The interesting thing is I had all three engines play: one with zero values, one with hand-tuned values, and one with auto-tuned values.

The auto-tuned engine loses consistently while my hand-tuned one wins consistently.

Now maybe this just won't work for me, or I am doing something wrong, or I am missing the point. I'm not sure where to go with this. I was thinking of extracting positions from GM games or from games pulled from CCRL and testing those to find out what happens. I am still working on this ...

Thanks again!

Cheney
Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

### Re: Code bug fix

Thank you. I have not coded much in Python, but maybe this will help me get started.

I have coded my tuning within the engine source. Based on your output sample, it looks like I am walking the parameters properly: test param1 + 1, look for a better E, and if there is none, test param1 - 1. I do have a question about this but have posted it in my other post from today.

jdart
Posts: 4036
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

### Re: Texel tuning method question

Typically with gradient descent you tune all parameters simultaneously.

The better methods for this use a variable step size, one that decreases as the method approaches convergence.

This has been an area in which there have been some fairly recent developments. See http://ruder.io/optimizing-gradient-descent/ for an overview.

A popular method is ADAM. That is what I use.

--Jon

AlvaroBegue
Posts: 925
Joined: Tue Mar 09, 2010 2:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

### Re: Texel tuning method question

jdart wrote:Typically with gradient descent you tune all parameters simultaneously.

The better methods for this use a variable step size, one that decreases as the method approaches convergence.

This has been an area in which there have been some fairly recent developments. See http://ruder.io/optimizing-gradient-descent/ for an overview.

A popular method is ADAM. That is what I use.

--Jon
One important observation is that you don't need to evaluate the error on your entire training dataset before you start changing your parameters. Instead, you can just look at a small random subsample of your training dataset (say, 100 positions) and change the parameters a little bit to reduce the error when evaluated in that subset. This is called stochastic gradient descent with mini-batches. There are many learning algorithms (procedures to decide how to change the parameters in each step after you evaluate the gradient). Adam is a reasonable choice.
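
The batching step can be sketched in a few lines. The record format and batch size below are made up for illustration; the real data would be (position, result) records:

```python
import random

def minibatches(data, batch_size=100):
    # Shuffle once per epoch, then yield consecutive slices, so every
    # record is used exactly once per pass over the training set
    idx = list(range(len(data)))
    random.shuffle(idx)
    for start in range(0, len(idx), batch_size):
        yield [data[i] for i in idx[start:start + batch_size]]

# Stand-in for millions of (position, result) records
data = list(range(1050))

# One epoch: the tuner would update the parameters after each batch,
# instead of once per full pass over the training set
batches = list(minibatches(data, batch_size=100))
```

With mini-batches the parameters get thousands of small updates per pass instead of one, which is why stochastic gradient descent converges so much faster on large position sets.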

If you were using batch gradient descent (i.e., evaluating the gradient over the entire training set at each step), there are other options that might be better. In RuyTune I used L-BFGS, but I am not convinced that was a good choice. In any case, I would recommend using mini-batches for most situations.
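
For reference, the Adam update itself is only a few lines. This is a generic sketch with the usual default hyperparameters, and the quadratic objective is a stand-in for the Texel error; it is not code from any engine's tuner:

```python
import math

def adam_step(w, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient (m) and its square (v),
    # bias-corrected, then a per-parameter scaled step
    for i in range(len(w)):
        m[i] = b1 * m[i] + (1 - b1) * grad[i]
        v[i] = b2 * v[i] + (1 - b2) * grad[i] ** 2
        mhat = m[i] / (1 - b1 ** t)
        vhat = v[i] / (1 - b2 ** t)
        w[i] -= lr * mhat / (math.sqrt(vhat) + eps)
    return w, m, v

# Minimize (w0 - 3)^2 + (w1 + 1)^2 as a stand-in objective
w, m, v = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
for t in range(1, 3001):
    grad = [2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)]
    w, m, v = adam_step(w, grad, m, v, t)
```

The per-parameter scaling by sqrt(vhat) is what gives the variable step size mentioned above: parameters with noisy or large gradients automatically take smaller steps.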

Ferdy
Posts: 4309
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

### Re: Texel tuning method question

Cheney wrote:A question on this is if I can walk one parameter until it no longer improves and then walk the next one?
In doing so you probably deprive the other parameters of the chance to reach their optimal values. This is just the same as tuning the parameters one at a time.
Peter mentioned, and I use it in my tuning, sorting the parameters according to the number of successful reductions of the error. Example:

Code: Select all

``````c = 0
param = [
    ['pawn', 100, c],
    ['knight', 300, c],
    ['bishop', 300, c]
]``````
Change pawn to 101, and if it is successful, increment the counter c.
That would become:

Code: Select all

``````param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 300, 0]
]``````
After one param cycle you may get, for example:

Code: Select all

``````param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 301, 1]
]``````
That means the trials with knight value changes do not improve E.

So instead of the pawn, knight, bishop sequence, you change it to pawn, bishop, knight, as in:

Code: Select all

``````param = [
    ['pawn', 101, 1],
    ['bishop', 301, 1],
    ['knight', 300, 0]
]``````
The idea is to test first those params that are most sensitive to the training set you use. This allows those params to find their optimal values as early as possible; the knight can be tested late, since its value already seems close to optimal.
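
The reordering described above can be done directly with a sort on the success counter (a hypothetical list layout matching the examples above):

```python
# [name, value, successCount] as in the examples above
param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 301, 1],
]

# Most frequently improving params first. Python's sort is stable, so
# ties keep their original relative order (pawn stays before bishop).
param.sort(key=lambda p: p[2], reverse=True)
```

Running this between param cycles keeps the most "sensitive" parameters at the front of each cycle.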

You may try a GA (genetic algorithm) too; it is interesting.

http://talkchess.com/forum/viewtopic.ph ... 37&t=57246

Cheney
Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

### Re: Texel tuning method question

I will certainly try this out as well to see if it helps, but for some reason I am just not getting positive results with this. I am thinking I am missing something. I reread all the posts in this thread. Would someone mind shedding some light on my scenario? I am sorry if some of this is repetitive.

What I have read about: some use "end of PV" nodes, some use quiet nodes, there are comments about not running qsearch and just using the eval, and comments about comparing my results against the results of a better engine. All of this confuses me and makes me think my process is wrong.

Based on what I have read on the process on how to test, this is what I know and have done:

First, my qsearch is plain fail-soft: no delta pruning, no TT, etc.

I have engine version 1 and version 2. They played 64K games. Version 2 is a bit better than version 1, as it has the pawn parameters I wish to test and tune.

From these games, I extracted 8+ million positions, excluding opening-book positions and positions after a mate was found. Each FEN was saved with the game result (0 for a black win, 0.5 for a draw, 1 for a white win).

I had the engine compute a K value. The engine loaded each FEN (and its game result), ran qsearch, stored the returned score and the result in a vector, and, once all FENs were searched, I calculated the K with minimal E.
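
Finding the K that minimizes E can be done with a simple scan. The snippet below is a hypothetical sketch: the five (qsearch score, result) pairs are invented, and a real run would use the millions of stored pairs:

```python
# Invented (qsearch score in centipawns, game result) pairs
scored = [(120, 1.0), (-80, 0.0), (15, 0.5), (300, 1.0), (-250, 0.0)]

def sigmoid(score, K):
    # Map a score to an expected result between 0 and 1
    return 1.0 / (1.0 + 10.0 ** (-K * score / 400.0))

def E(K):
    # Mean squared error for a given scaling constant K
    return sum((r - sigmoid(s, K)) ** 2 for s, r in scored) / len(scored)

# Coarse scan over K in [0.50, 3.00]; a real tuner might refine the
# best coarse value further, e.g. with a golden-section search
bestK = min((k / 100.0 for k in range(50, 301)), key=E)
```

The scan only needs the stored (score, result) vector, so no further qsearch calls are required while choosing K.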

So far, at no point did I compare anything with the real scores from the real games that were played or any other scores from other engines.

Next, I used the gradient search ("local search" seems to be another name for it) to tune a few values.

First, with K set (as determined above), I'd qsearch all the positions again with all the eval parameters at their "base" settings. The E determined here becomes the best E.

Second, I'd increase eval-param1 by 1, qsearch all the positions, and, using the new returned scores and the known K, determine a new E and whether it beats the best E.

Third, depending on whether a new best E was found, I'd either decrease eval-param1 or move on to the next eval-param, saving any new best E.

This repeats until all parameters have been tested in a round with no improvement in E. I'd then set the final tuned parameters into the eval function as the new base/default values and play the engine against a previous version, but I do not end up with a winning engine.

That's the process. Am I missing something? Should I compute K from the scores of the real games first? Should I only tune parameters when my engine loses to the older version? Etc.?

Thank you