Texel tuning method question

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

Ferdy
Posts: 3620
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: Texel tuning method question

Post by Ferdy » Thu Jul 20, 2017 6:05 am

Sample basic Texel tuning code in Python.

Tune.py

Code: Select all

"""
Sample Texel tuning in Python

Tested on python 3.6

"""


import sys
import random
from copy import deepcopy
import time


MAXBESTE = 1000.0


def DoQSearchAndGetE(K):
    """ Just return random for now """
    time.sleep(0.5) # Sleep for 0.5s for simulation purposes
    return random.randint(14400, 14600)/100000.0


def SendParamToEngine(param):
    """ Set the param to the engine, pass for now """
    pass
	

def Tune(paramToOptimize, initBestE, delta, K, iLog, cycleLimit):
    """ paramToOptimize is a list of list
        initBestE is float
        delta is a list
    """
    goodCnt = 1
    goodCntOld = 0
    lastBestE = initBestE
    lastBestParam = []
    
    for g in range(cycleLimit): # g = 0, 1, 2, ... cycleLimit-1
        if iLog:
            print('Tune >> param cycle: %d' %(g+1))

        # Exit tuning if goodCnt is not increased
        if goodCnt <= goodCntOld:
            if iLog:
                print('Tune >> bestE has not been improved, exiting the tuning now ...')
            break
        goodCntOld = goodCnt

        if len(lastBestParam):
            paramToOptimize = deepcopy(lastBestParam)

        for p in paramToOptimize:
            pName = p[0]
            pValue = p[1]

            for d in delta:
                dValue = pValue + d
                if iLog:
                    print('Tune >> param to optimize: %s' %(pName))
                    print('Tune >> delta: %+d' %(d))

                # Create a new param set instead of paramToOptimize.
                # This is the set that will be sent to the engine.
                paramToBeTested = []

                for a in paramToOptimize:
                    if a[0] == pName:
                        a[1] = dValue
                    paramToBeTested.append([a[0], a[1]])

                if iLog:
                    print('Tune >> paramSet to try: %s' %(paramToBeTested))
                    print('Tune >> Send this set to engine')

                # Send param values to engine
                SendParamToEngine(paramToBeTested)

                if iLog:
                    print('Tune >> lastBestE: %0.5f' %(lastBestE))
                    print('Tune >> Calculate E')

                E = DoQSearchAndGetE(K)

                if iLog:
                    print('Tune >> CalculatedE: %0.5f' %(E))

                if E < lastBestE:
                    goodCnt += 1
                    lastBestE = E

                    if iLog:
                        print('Tune >> NewBestE: %0.5f' %(lastBestE))
                        print('Tune >> CalculatedE is good -------- !!\n')

                    lastBestParam = deepcopy(paramToBeTested)

                    # Get out of 'for delta' and try the next p
                    break
                else:
                    if iLog:
                        print('Tune >> CalculatedE is not good ----- ?\n')

        # Log if we have reached the cycle limit
        if g == cycleLimit-1:
            if iLog:
                print('Tune >> param cycle limit has been reached, exiting tuning now ...')

    return lastBestE, lastBestParam


def main(argv):

    bestK = 1.13
    bestESTart = MAXBESTE
    enableLog = True
    paramCycleLimit = 1000

    paramToOptimize = [
        ['pawn', 100],
        ['knight', 300],
        ['bishop', 300]
    ]
    delta = [+1, -1]

    # Show init values
    print('origBestE       : %0.5f' %(bestESTart))
    print('origParam       : %s' %(paramToOptimize))
    print('bestK           : %0.3f' %(bestK))
    print('delta           : %s' %(delta))
    print('paramCycleLimit : %d' %(paramCycleLimit))
    print('EnableLogging   : %s\n' %('On' if enableLog else 'Off'))

    t1 = time.perf_counter()  # time.clock() is deprecated since Python 3.3

    # Run the tuner
    optiE, optiParam = Tune(paramToOptimize, bestESTart, delta, bestK, enableLog, paramCycleLimit)

    t2 = time.perf_counter()

    print('\nbestE     : %0.5f' %(optiE))
    print('bestParam : %s' %(optiParam))
    print('Elapsed   : %ds' %(t2-t1))


if __name__ == "__main__":
    main(sys.argv[1:])
Calculation of E is done by a random number for simulation purposes.
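In a real tuner, DoQSearchAndGetE would run qsearch over the training positions and return the mean squared error between a sigmoid of each score and the game result. A minimal sketch of that error, assuming positions are stored as (fen, result) pairs and a qsearch callable is available (both are illustrative assumptions, not part of the code above):

```python
def texel_error(positions, K, qsearch):
    """Mean squared error between the sigmoid of the qsearch score
    and the game result (0, 0.5 or 1 from White's point of view).

    positions: list of (fen, result) pairs
    qsearch:   callable returning a centipawn score for a FEN
    """
    total = 0.0
    for fen, result in positions:
        score = qsearch(fen)  # centipawns, White's point of view
        # Usual logistic mapping from centipawns to expected score
        sigmoid = 1.0 / (1.0 + 10.0 ** (-K * score / 400.0))
        total += (result - sigmoid) ** 2
    return total / len(positions)
```

The constant 400 and base 10 follow the common logistic mapping from centipawn scores to expected game outcomes.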
Sample output:

Code: Select all

origBestE       : 1000.00000
origParam       : [['pawn', 100], ['knight', 300], ['bishop', 300]]
bestK           : 1.130
delta           : [1, -1]
paramCycleLimit : 1000
EnableLogging   : On

Tune >> param cycle: 1
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 1000.00000
Tune >> Calculate E
Tune >> CalculatedE: 0.14514
Tune >> NewBestE: 0.14514
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 301], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14514
Tune >> Calculate E
Tune >> CalculatedE: 0.14517
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14514
Tune >> Calculate E
Tune >> CalculatedE: 0.14462
Tune >> NewBestE: 0.14462
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14560
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14596
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 2
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 102], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14543
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14462
Tune >> Calculate E
Tune >> CalculatedE: 0.14435
Tune >> NewBestE: 0.14435
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14496
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14599
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14575
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14457
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 3
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14533
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 299], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14448
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 300], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14555
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14553
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 301]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14553
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14435
Tune >> Calculate E
Tune >> CalculatedE: 0.14419
Tune >> NewBestE: 0.14419
Tune >> CalculatedE is good -------- !!

Tune >> param cycle: 4
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14419
Tune >> Calculate E
Tune >> CalculatedE: 0.14416
Tune >> NewBestE: 0.14416
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14444
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14539
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14510
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14425
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 5
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 101], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14557
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14561
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 299], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14535
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14416
Tune >> Calculate E
Tune >> CalculatedE: 0.14404
Tune >> NewBestE: 0.14404
Tune >> CalculatedE is good -------- !!

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14599
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 99], ['knight', 297], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14518
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 6
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14426
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: pawn
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 297], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14437
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 98], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14475
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: knight
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14519
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 300]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14522
Tune >> CalculatedE is not good ----- ?

Tune >> param to optimize: bishop
Tune >> delta: -1
Tune >> paramSet to try: [['pawn', 98], ['knight', 296], ['bishop', 298]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14404
Tune >> Calculate E
Tune >> CalculatedE: 0.14477
Tune >> CalculatedE is not good ----- ?

Tune >> param cycle: 7
Tune >> bestE has not been improved, exiting the tuning now ...

bestE     : 0.14404
bestParam : [['pawn', 99], ['knight', 297], ['bishop', 299]]
Elapsed   : 18s
So you get your best params and best error:
bestE : 0.14404
bestParam : [['pawn', 99], ['knight', 297], ['bishop', 299]]

In certain situations you can stop the tuning and record the best results so far.
Remember the best error and params.
When you resume the tuning, you can use the last best tuning info.
So your next input run, for example, would be:

Code: Select all

bestESTart = 0.14404
paramToOptimize = [
        ['pawn', 99],
        ['knight', 297],
        ['bishop', 299]
]
At a point where there is a successful reduction of the error, like:

Code: Select all

Tune >> param cycle: 4
Tune >> param to optimize: pawn
Tune >> delta: +1
Tune >> paramSet to try: [['pawn', 100], ['knight', 298], ['bishop', 299]]
Tune >> Send this set to engine
Tune >> lastBestE: 0.14419
Tune >> Calculate E
Tune >> CalculatedE: 0.14416
Tune >> NewBestE: 0.14416
Tune >> CalculatedE is good -------- !!
Remember the param tuned:
[['pawn', 100], ['knight', 298], ['bishop', 299]]
You can use that to test your engine in actual matches; this is called sampling. It can happen that those values actually perform well in real game tests.

It is good to experiment with varied delta arrays, like:

Code: Select all

delta = [+1, -1, +2, -2, +3, -3]
Expand the param list if you like.

Code: Select all

paramToOptimize = [
        ['pawn', 100],
        ['knight', 300],
        ['bishop', 300],
        ['rook', 500],
        ['queen', 1000]
]

Ferdy
Posts: 3620
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Code bug fix

Post by Ferdy » Thu Jul 20, 2017 6:29 am

Tune.py

Code: Select all

"""
Sample Texel tuning in Python

Tested on python 3.6

"""


import sys
import random
from copy import deepcopy
import time


MAXBESTE = 1000.0


def DoQSearchAndGetE(K):
    """ Just return random for now """
    time.sleep(0.5) # Sleep for 0.5s for simulation purposes
    return random.randint(14400, 14600)/100000.0


def SendParamToEngine(param):
    """ Set the param to the engine, pass for now """
    pass


def Tune(paramToOptimize, initBestE, delta, K, iLog, cycleLimit):
    """ paramToOptimize is a list of list
        initBestE is float
        delta is a list
    """
    goodCnt = 1
    goodCntOld = 0
    lastBestE = initBestE
    lastBestParam = []

    for g in range(cycleLimit): # g = 0, 1, 2, ... cycleLimit-1
        if iLog:
            print('Tune >> param cycle: %d' %(g+1))

        # Exit tuning if goodCnt is not increased
        if goodCnt <= goodCntOld:
            if iLog:
                print('Tune >> bestE has not been improved, exiting the tuning now ...')
            break
        goodCntOld = goodCnt

        # Update paramToOptimize with lastBestParam
        if len(lastBestParam):
            paramToOptimize = deepcopy(lastBestParam)

        for p in paramToOptimize:
            pName = p[0]
            pValue = p[1]

            for d in delta:
                dValue = pValue + d
                if iLog:
                    print('Tune >> param to optimize: %s' %(pName))
                    print('Tune >> delta: %+d' %(d))

                # Update paramToOptimize with lastBestParam
                if len(lastBestParam):
                    paramToOptimize = deepcopy(lastBestParam)

                # Create a new param set instead of paramToOptimize.
                # This is the set that will be sent to the engine.
                paramToBeTested = []

                for a in paramToOptimize:
                    if a[0] == pName:
                        a[1] = dValue
                    paramToBeTested.append([a[0], a[1]])

                if iLog:
                    print('Tune >> paramSet to try: %s' %(paramToBeTested))
                    print('Tune >> Send this set to engine')

                # Send param values to engine
                SendParamToEngine(paramToBeTested)

                if iLog:
                    print('Tune >> lastBestE: %0.5f' %(lastBestE))
                    print('Tune >> Calculate E')

                E = DoQSearchAndGetE(K)

                if iLog:
                    print('Tune >> CalculatedE: %0.5f' %(E))

                if E < lastBestE:
                    goodCnt += 1
                    lastBestE = E

                    if iLog:
                        print('Tune >> NewBestE: %0.5f' %(lastBestE))
                        print('Tune >> CalculatedE is good -------- !!\n')

                    lastBestParam = deepcopy(paramToBeTested)

                    # Get out of 'for delta' and try the next p
                    break
                else:
                    if iLog:
                        print('Tune >> CalculatedE is not good ----- ?\n')

        # Log if we have reached the cycle limit
        if g == cycleLimit-1:
            if iLog:
                print('Tune >> param cycle limit has been reached, exiting tuning now ...')

    return lastBestE, lastBestParam


def main(argv):

    bestK = 1.13
    bestESTart = MAXBESTE
    enableLog = True
    paramCycleLimit = 1000

    paramToOptimize = [
        ['pawn', 100],
        ['knight', 300],
        ['bishop', 300]
    ]
    delta = [+1, -1]

    # Show init values
    print('origBestE       : %0.5f' %(bestESTart))
    print('origParam       : %s' %(paramToOptimize))
    print('bestK           : %0.3f' %(bestK))
    print('delta           : %s' %(delta))
    print('paramCycleLimit : %d' %(paramCycleLimit))
    print('EnableLogging   : %s\n' %('On' if enableLog else 'Off'))

    t1 = time.perf_counter()  # time.clock() is deprecated since Python 3.3

    # Run the tuner
    optiE, optiParam = Tune(paramToOptimize, bestESTart, delta, bestK, enableLog, paramCycleLimit)

    t2 = time.perf_counter()

    print('\nbestE     : %0.5f' %(optiE))
    print('bestParam : %s' %(optiParam))
    print('Elapsed   : %ds' %(t2-t1))


if __name__ == "__main__":
    main(sys.argv[1:])
Added:

Code: Select all

# Update paramToOptimize with lastBestParam
if len(lastBestParam):
    paramToOptimize = deepcopy(lastBestParam)
in the inner 'for d in delta' loop:

Code: Select all

for d in delta:
    dValue = pValue + d
    if iLog:
        print('Tune >> param to optimize: %s' %(pName))
        print('Tune >> delta: %+d' %(d))

    # Update paramToOptimize with lastBestParam
    if len(lastBestParam):
        paramToOptimize = deepcopy(lastBestParam)

    # Create a new param set instead of paramToOptimize.
    # This is the set that will be sent to the engine.
    paramToBeTested = []

Cheney
Posts: 83
Joined: Thu Sep 27, 2012 12:24 am

Re: Texel tuning method question

Post by Cheney » Sat Jul 22, 2017 4:23 pm

Yes, a brute force approach, and I know that would take a long time :). However, your explanation of the gradient descent was spot on!

I started to use that concept and, after some time, I have some tuned numbers. I am basically testing with about 3M positions now and have eight parameters I want to play with (all pawn items, like isolated, doubled, backward, and chained, for mg and eg). I know the values I currently have are good, so I let the tuning process start with those. To "walk" the parameters, I'd increase the first by one and test; if E is not lower, I'd decrease the original value by 1. I'd then repeat this for the next parameter, and keep going until there are no improvements, recording the improvements along the way. A question on this: can I walk one parameter until it no longer improves and then walk the next one? I figure there would be a point where the original parameters are no longer optimal. I need to test this :)

For some of the parameters, they stay close to the values I originally hand tuned while others go somewhere I did not expect :) - e.g., chained pawn in mg is a negative "bonus".

I tested the new values by having that engine play the engine that was hand tuned - the auto-tuned engine loses, but only by a few Elo over 10K games.

I then zeroed out all my pawn parameters and started over. The new auto tuned values are still close to the previous auto tuned values (one might be off by +/- 1). The interesting thing is I had all three engines play - one with 0 values, one with hand tuned values, and one with auto tuned values.

The auto-tuned loses consistently while my hand tuned wins consistently.

Now maybe this just won't work for me, or I am doing something wrong or am missing the point. I'm not sure where to go with this. I was thinking of extracting positions from GM games or from games pulled from CCRL and testing to find out what happens. I am still working on this ...

Thanks again!

Cheney
Posts: 83
Joined: Thu Sep 27, 2012 12:24 am

Re: Code bug fix

Post by Cheney » Sat Jul 22, 2017 4:28 pm

Thank you :). I have not coded much with python but maybe this will help me start.

I have coded my tuning within the engine source. Based on your output sample, it looks like I am walking the parameters properly... test param1 + 1, look for better E, if not then test param - 1. I do have a question about this but have posted it in my other post from today.

jdart
Posts: 3585
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

Re: Texel tuning method question

Post by jdart » Sun Jul 23, 2017 9:18 pm

Typically with gradient descent you tune all parameters simultaneously.

The better methods for this use a variable step size, one that decreases as the method approaches convergence.

This has been an area in which there have been some fairly recent developments. See http://ruder.io/optimizing-gradient-descent/ for an overview.

A popular method is ADAM. That is what I use.

--Jon
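For readers who want to try it, one Adam update step can be sketched as follows. This is a generic illustration of the algorithm named above, not code from any particular engine; the gradient would come from differentiating the tuning error with respect to the eval parameters:

```python
import math

def adam_step(params, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. params, grad, m, v are equal-length lists of
    floats; t is the 1-based step count. m and v are updated in place;
    the updated parameter list is returned."""
    new_params = []
    for i, g in enumerate(grad):
        m[i] = b1 * m[i] + (1 - b1) * g       # first-moment (mean) estimate
        v[i] = b2 * v[i] + (1 - b2) * g * g   # second-moment estimate
        m_hat = m[i] / (1 - b1 ** t)          # bias correction
        v_hat = v[i] / (1 - b2 ** t)
        new_params.append(params[i] - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_params
```

The per-parameter second-moment estimate is what gives the variable, shrinking effective step size mentioned above.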

AlvaroBegue
Posts: 895
Joined: Tue Mar 09, 2010 2:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Texel tuning method question

Post by AlvaroBegue » Sun Jul 23, 2017 9:26 pm

jdart wrote:Typically with gradient descent you tune all parameters simultaneously.

The better methods for this use a variable step size, one that decreases as the method approaches convergence.

This has been an area in which there have been some fairly recent developments. See http://ruder.io/optimizing-gradient-descent/ for an overview.

A popular method is ADAM. That is what I use.

--Jon
One important observation is that you don't need to evaluate the error on your entire training dataset before you start changing your parameters. Instead, you can just look at a small random subsample of your training dataset (say, 100 positions) and change the parameters a little bit to reduce the error when evaluated in that subset. This is called stochastic gradient descent with mini-batches. There are many learning algorithms (procedures to decide how to change the parameters in each step after you evaluate the gradient). Adam is a reasonable choice.

If you were using batch gradient descent (i.e., evaluating the gradient over the entire training set at each step), there are other options that might be better. In RuyTune I used L-BFGS, but I am not convinced that was a good choice. In any case, I would recommend using mini-batches for most situations.
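The mini-batch idea described above can be sketched with plain SGD and numerical gradients; the error function, data layout, and step sizes here are illustrative assumptions:

```python
import random

def sgd_minibatch(params, positions, error_fn, lr=0.1, batch_size=100,
                  steps=1000, h=1.0):
    """Stochastic gradient descent with mini-batches.

    error_fn(params, batch) -> mean squared error on that batch.
    Gradients are estimated by central finite differences with step h,
    which suits integer-valued eval parameters.
    """
    params = list(params)
    for _ in range(steps):
        # Small random subsample instead of the whole training set
        batch = random.sample(positions, min(batch_size, len(positions)))
        grad = []
        for i in range(len(params)):
            up = params[:]; up[i] += h
            dn = params[:]; dn[i] -= h
            grad.append((error_fn(up, batch) - error_fn(dn, batch)) / (2 * h))
        # Move each parameter a little against its gradient
        params = [p - lr * g for p, g in zip(params, grad)]
    return params
```

A fancier optimizer (Adam, Adagrad, ...) would only change the last line, reusing the same mini-batch gradient.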

Ferdy
Posts: 3620
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: Texel tuning method question

Post by Ferdy » Mon Jul 24, 2017 3:47 am

Cheney wrote:A question on this is if I can walk one parameter until it no longer improves and then walk the next one?
In doing so you probably deprive the other parameters of the chance to reach their optimal values. This is just the same as tuning the parameters one by one :).
Peter mentioned an idea that I use in tuning: sort the parameters according to their number of successful error reductions. Example:

Code: Select all

c = 0
param = [
    ['pawn', 100, c],
    ['knight', 300, c],
    ['bishop', 300, c]
]
Change pawn to 101, and if it is successful, increment the counter c.
That would become,

Code: Select all

param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 300, 0]
]
After one param cycle you may get, for example:

Code: Select all

param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 301, 1]
]
That means the trials with knight value changes did not improve E.

So instead of the pawn, knight, bishop sequence, you change it to pawn, bishop, knight, as in:

Code: Select all

param = [
    ['pawn', 101, 1],
    ['bishop', 301, 1],
    ['knight', 300, 0]
]
The idea is to test first those params that are sensitive to the training sets you use. This allows them to find their optimal values as early as possible; the knight can be tested later, as its value already seems close to optimal.
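The reordering described above can be automated by sorting on the success counter (the third element of each entry); a small sketch:

```python
param = [
    ['pawn', 101, 1],
    ['knight', 300, 0],
    ['bishop', 301, 1],
]

# Sort by success count, highest first. Python's sort is stable, so
# params with equal counts keep their original relative order.
param.sort(key=lambda p: p[2], reverse=True)
print(param)  # pawn and bishop come first, knight last
```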


You may try a GA (genetic algorithm) too; it is interesting.

http://talkchess.com/forum/viewtopic.ph ... 37&t=57246

Cheney
Posts: 83
Joined: Thu Sep 27, 2012 12:24 am

Re: Texel tuning method question

Post by Cheney » Wed Jul 26, 2017 11:15 pm

I will certainly try this out as well to see if this helps but for some reason, I am just not getting positive results with this. I am thinking I am missing something. I reread all the posts to this thread. Would someone mind shedding some light on my scenario? I am sorry if some of this is repetitive.

What I have read is that some use "end of PV" nodes, some use quiet nodes, there are comments about not running qsearch and just using the eval, and comments about comparing my results against the results of a better engine. All of this confuses me :) and makes me think my process is wrong.

Based on what I have read on the process on how to test, this is what I know and have done:

First, My qsearch is a plain fail soft - no delta pruning, no TT, etc.

I have engine version 1 and version 2. They played 64k games. Version 2 is a bit better than version 1 as it has the pawn parameters I wish to test and tune.

From these games, I extracted 8+ million positions, excluding opening book positions and positions where mate was found. Each FEN was saved with the game result (0 for a black win, 0.5 for a draw, 1 for a white win).

I had the engine compute a K value. All the engine did was load each FEN (and the game result), run qsearch, get the returned score, store it and the result into a vector, and once all fens were searched, I calculated a K with minimal E.
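The K computation described here can be sketched as a simple scan over candidate K values; the (score, result) pairing and the scan bounds are assumptions about how the data is stored:

```python
def best_k(scores_results, k_lo=0.5, k_hi=2.0, step=0.01):
    """Scan K over [k_lo, k_hi] and return the value minimizing the
    mean squared error between sigmoid(score) and the game result.

    scores_results: list of (centipawn_score, result) pairs.
    """
    def mse(k):
        total = 0.0
        for score, result in scores_results:
            sig = 1.0 / (1.0 + 10.0 ** (-k * score / 400.0))
            total += (result - sig) ** 2
        return total / len(scores_results)

    best = (float('inf'), k_lo)
    k = k_lo
    while k <= k_hi + 1e-9:
        best = min(best, (mse(k), k))
        k += step
    return best[1]
```

Since E is smooth in K, a golden-section or bisection search would converge faster, but a coarse scan is easy to verify.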

So far, at no point did I compare anything with the real scores from the real games that were played or any other scores from other engines.

Next, I used the gradient search (local search seems to be another name used) to tune a few values.

First, with K set (as discovered in a previous paragraph), I'd qsearch all the positions again with all the eval parameters at their "base" settings. The E determined here is now the best E.

Second, I'd have eval-param1 increased by 1, qsearch all the positions, and using all the new returned scores and the known K, I could determine a new E and if it is better than the best E.

Third, based on if a new best E was found or not, I'd either decrease eval-param1 or move onto the next eval-param while saving the new best E.

This would repeat until all parameters were tested in a round and there was no improvement on E. I'd then set the final tuned parameters in the eval function as the new base/default values and play the engine against a previous version, but I do not end up with a winning engine.

That's the process. Am I missing something? Should I compute K from the scores from the real games first? Should I only tune parameters when my engine loses to the older version? Etc.?

Thank you :)

jdart
Posts: 3585
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

Re: Texel tuning method question

Post by jdart » Thu Jul 27, 2017 1:55 am

Cheney wrote:qsearch is a plain fail soft - no delta pruning, no TT, etc.
Well, delta pruning is probably good for a sizeable ELO gain, right there. So I would do that.

There is no need to tune parameters one at a time, and you don't have to limit yourself to tuning a few values. A reasonable optimization algorithm can tune several hundred simultaneously, no problem. Look at AdaGrad or Adam; they are not really hard to code, and GitHub has some example code.
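For reference, a single Adam update over a whole parameter vector might look like the sketch below. This is a generic textbook-style sketch, not jdart's code; the gradient is assumed to be supplied externally (e.g., an analytic or finite-difference gradient of E with respect to each eval parameter).

```python
import math

def adam_step(params, grads, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for all parameters at once.

    m and v are the running first/second moment estimates (mutated in
    place); t is the 1-based step count used for bias correction.
    """
    out = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = b1 * m[i] + (1 - b1) * g        # first-moment estimate
        v[i] = b2 * v[i] + (1 - b2) * g * g    # second-moment estimate
        mhat = m[i] / (1 - b1 ** t)            # bias-corrected moments
        vhat = v[i] / (1 - b2 ** t)
        out.append(p - lr * mhat / (math.sqrt(vhat) + eps))
    return out
```

Because every parameter is updated in the same step, one pass over the position set per iteration is enough, instead of one pass per parameter per delta as in the one-at-a-time local search.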

The process, I believe, should be relatively insensitive to K. K just determines how scores map to predicted outcomes. I use 0.75, so the predicted outcome is:

1.0/(1.0+exp(-0.75*val))

where val is the score.
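jdart's mapping above is just the logistic function with a fixed slope. A minimal sketch (the unit of `val` is whatever the engine's score unit is; K = 0.75 is jdart's stated constant):

```python
import math

K = 0.75  # jdart's constant

def predicted_outcome(val):
    """Logistic mapping from engine score to expected game result in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-K * val))
```

A score of 0 maps to 0.5 (an even game), and the mapping is symmetric: a score of +x for one side predicts the complement of -x for the other.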

What is your validation procedure? Once you have done a tuning run, you should have a method, self-play or game play against an opponent gauntlet, to validate that changes do or don't improve strength.

--Jon

Ferdy
Posts: 3620
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: Texel tuning method question

Post by Ferdy » Thu Jul 27, 2017 2:30 am

Cheney wrote:I will certainly try this out as well to see if this helps but for some reason, I am just not getting positive results with this. I am thinking I am missing something. I reread all the posts to this thread. Would someone mind shedding some light on my scenario? I am sorry if some of this is repetitive.
If you cannot get a positive result, that could mean your parameter values are already optimal, or there are bugs in the eval (I spent more time reviewing my eval code before running the tuning), or your training sets are not good. An extreme case: all wins go to the side with a passer and you are tuning a passer bonus, so every time such a position is encountered, increasing the passer bonus always looks good. It should be balanced by positions that have a passer but where the result is a draw or a loss.
Cheney wrote:What I have read about is some use "end of PV" nodes, use quiet nodes, comments about not running qsearch and just using the eval, and comments about comparing my results against results of a better engine. All of this confuses me :) and makes me think my process is wrong.
The Texel method uses the qsearch score. Another method just uses the eval score; that is the method that obviously requires quiet training sets.
Cheney wrote:Based on what I have read on the process on how to test, this is what I know and have done:

First, My qsearch is a plain fail soft - no delta pruning, no TT, etc.

I have engine version 1 and version 2. They played 64k games. Version 2 is a bit better than version 1 as it has the pawn parameters I wish to test and tune.

From these games, I extracted 8+ million positions, excluded opening book positions and positions when mate was found. The fen was saved with the game result (0 for black won that game, 0.5 for draw, 1 for white won).

I had the engine compute a K value. All the engine did was load each FEN (and the game result), run qsearch, get the returned score, store it and the result into a vector, and once all fens were searched, I calculated a K with minimal E.

So far, at no point did I compare anything with the real scores from the real games that were played or any other scores from other engines.

Next, I used the gradient search (local search seems to be another name used) to tune a few values.

First, with K set (as discovered in a previous paragraph), I'd qsearch all the positions again with all the eval parameters at their "base" settings. The E determined here is now the best E.

Second, I'd have eval-param1 increased by 1, qsearch all the positions, and using all the new returned scores and the known K, I could determine a new E and if it is better than the best E.

Third, based on if a new best E was found or not, I'd either decrease eval-param1 or move onto the next eval-param while saving the new best E.
If there is a new best E, save the param values, move on to the next param, and use all the best params in the succeeding tests. If best E is not reduced, change the delta of the param and test again. If all deltas have been tried, move on to the next param, of course.
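The loop just described, trying each delta per param, keeping only improvements, and reusing the best params in later cycles, might be sketched like this. The names are illustrative, and `evaluate_E` is a hypothetical stand-in for "qsearch all positions with these params and compute E":

```python
def local_search(params, deltas, evaluate_E, cycles=30):
    """One-at-a-time local search over a dict of eval parameters.

    Try each delta on each param; keep a change only when it lowers E.
    Stop when a full cycle produces no improvement.
    """
    best_params = dict(params)
    best_e = evaluate_E(best_params)
    for _ in range(cycles):
        improved = False
        for name in list(best_params):
            for d in deltas:
                trial = dict(best_params)
                trial[name] += d
                e = evaluate_E(trial)
                if e < best_e:          # keep only strict improvements
                    best_params, best_e = trial, e
                    improved = True
                    break               # next param; reuse best so far
        if not improved:                # nothing improved this cycle
            break
    return best_params, best_e
```

Each cycle moves every parameter by at most one delta step, so the number of cycles bounds how far a parameter can travel from its starting value.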
Cheney wrote:Should I only tune parameters when my engine loses to the older version?
Tune parameters when you have changes to eval and search.

Sample development scenario:

Code: Select all

1. Create base engine
   a. Create base-a engine, based on the base engine
   b. Test base vs base-a; this tests the reliability of your test environment
      and the randomness of the engine.
   c. Make sure that the result is even
   d. If the result is even, go to (2)
2. Create base-1 engine based on the base engine; add bonus x for knight outpost.
   Value x is your first estimate.
3. Test base vs base-1
4. If the result is not really good, but a small elo reduction or equal:
   a. Create base-1a, based on base-1, and tune it.
   b. Tune all/whatever params with your auto-tuner
   c. Apply the good params to base-1a
   d. Test base vs base-1a
5. Then the next eval modification ...

To test that your auto-tuner is working, set your pawn to 30 and knight to 100. Then tune just these 2 params and see if the tuner can push your pawn value close to 100 and your knight value to 300, for example. This also checks the goodness of the training sets used. After tuning, test it in actual games.
