Hash size, hash fullness, strength

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

xr_a_y
Posts: 1619
Joined: Sat Nov 25, 2017 1:28 pm
Location: France

Hash size, hash fullness, strength

Post by xr_a_y » Sat Feb 29, 2020 6:40 pm

I'm running a little experiment with hash size in Minic.
I ran a 1min+0.3 tourney (still running) with the same Minic version using hash sizes of 256, 512, 2048 and 4096 MB.
It seems strength is roughly proportional to hash size (which seems fine to me).

Code: Select all

Rank Name                          Elo     +/-   Games   Score   Draws
   1 minic_1.48_uci4096             19      27     241   52.7%   61.4%
   2 minic_1.48_uci2048              6      27     241   50.8%   61.0%
   3 minic_1.48_uci512               4      29     241   50.6%   57.3%
   4 minic_1.48_uci256             -29      31     241   45.9%   51.0%

482 of 6000 games finished.
If I look at the "mean hash fullness" at the end of games, I see:
256 => 50%
512 => 45%
2048 => 40%
4096 => 25%

So although the table is not full, some strength is gained from a bigger TT.

Minic currently uses a 3-bucket TT: the first bucket is always-replace, the others are replace-by-depth (no aging currently).

Can I safely conclude that my TT scheme is bad?

hgm
Posts: 26434
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Hash size, hash fullness, strength

Post by hgm » Sun Mar 01, 2020 9:24 am

It is a bit suspect that enlarging the TT size has an effect if the TT was already large enough to hold every search tree of the entire game, because this cannot be explained by the quality of the replacement scheme. With 4096 MB of hash you should have something like 256M entries, and even at 1 Mnps you would expect to fill only 60M of those in 1 min, even if all nodes you ever searched were different.

In my experience the time-to-depth of searches only starts to go up when I shrink the TT size to less than 1/10 of the number of reported nodes. And then it goes up (with a good replacement scheme) only as the inverse 12th root of the size, i.e. the time doubles when the TT gets 4096 times smaller. As speed (in %) roughly corresponds with Elo, this means you can expect about 8 Elo from doubling the table size when the table is heavily overloaded.

To see an effect of the replacement scheme in 1-min games you should set the TT size to 1 MB or smaller...


Re: Hash size, hash fullness, strength

Post by xr_a_y » Sun Mar 01, 2020 9:54 am

Thanks for the figures.

I must have gotten something wrong, because I tried to go from this:

Code: Select all

void setEntry(ThreadContext & context, Hash h, Move m, ScoreType s, ScoreType eval, Bound b, DepthType d){
    Entry e = {h,m,s,eval,b,d};
    e.h ^= e._d;
    assert(e.h > 0);
    if ( DynamicConfig::disableTT ) return;
    ++context.stats.counters[Stats::sid_ttInsert];
    const size_t index = h&(ttSize-1);
    table[index].e[0] = e; // always replace (favour leaf)
    Entry * toUpdate = &(table[index].e[1]);
    for (unsigned int i = 1 ; i < Bucket::nbBucket ; ++i){ // other replace by depth (and generation ?)
        Entry & curEntry = table[index].e[i];
        if( curEntry.h == 0 ) { break; }; // fill empty entry
        if( curEntry.h == e.h && curEntry.d < e.d ) { toUpdate = &curEntry; break; } // favour high depth or exact bound entry with same hash
        else if( curEntry.d < toUpdate->d /*|| curEntry.generation < toUpdate->generation*/ ) toUpdate = &curEntry; // not same hash, replace the oldest or lowest depth
    }
    *toUpdate = e;
}
to this:

Code: Select all

void setEntry(ThreadContext & context, Hash h, Move m, ScoreType s, ScoreType eval, Bound b, DepthType d){
    Entry e = {h,m,s,eval,b,d};
    e.h ^= e._d;
    assert(e.h > 0);
    if ( DynamicConfig::disableTT ) return;
    ++context.stats.counters[Stats::sid_ttInsert];
    const size_t index = h&(ttSize-1);
    table[index].e[0] = e; // always replace (favour leaf)
    Entry * toUpdate = &(table[index].e[1]);
    for (unsigned int i = 1 ; i < Bucket::nbBucket ; ++i){ // other replace by depth (and generation ?)
        Entry & curEntry = table[index].e[i];
        if( curEntry.h == 0 ) { toUpdate = &curEntry; break; }; // fill empty entry
        if( curEntry.h == e.h && curEntry.d < e.d ) { toUpdate = &curEntry; break; } // favour high depth or exact bound entry with same hash
        else if( curEntry.d < toUpdate->d /*|| curEntry.generation < toUpdate->generation*/ ) toUpdate = &curEntry; // not same hash, replace the oldest or lowest depth
    }
    *toUpdate = e;
}
which seems less wrong to me. The first version was updating bucket i-1 when bucket i had never been used; the corrected version uses bucket i when it has never been used before. Yet this patch is losing 20 Elo at 1min+0.3 TC :shock:

Am I missing something? My eyes may not be seeing something obvious.
