The cost of transposition table instrumentation

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

sje
Posts: 4675
Joined: Mon Mar 13, 2006 6:43 pm

The cost of transposition table instrumentation

Post by sje » Sun Aug 23, 2015 7:40 am

Some comments on the cost of transposition table instrumentation to enable statistics generation:

Symbolic has several transposition table classes, each of them having the same base class which contains member variables common to all derived classes.

Code: Select all

class TranBase
{
public:
  TranBase(const TtId table, const ui log2count);
  TranBase(const TtId table, const double frac);

#if (EnableTranStats)
  void IncrMatch(void) {ctspinlock.Lock(); ctmatch++; ctspinlock.Unlock();}
  void IncrProbe(void) {ctspinlock.Lock(); ctprobe++; ctspinlock.Unlock();}
  void IncrStash(void) {ctspinlock.Lock(); ctstash++; ctspinlock.Unlock();}
  void IncrUsage(void) {ctspinlock.Lock(); ctusage++; ctspinlock.Unlock();}
#else
  void IncrMatch(void) {}
  void IncrProbe(void) {}
  void IncrStash(void) {}
  void IncrUsage(void) {}
#endif

  void LsbReporter(const Hash& hash0, const Hash& hash1);

  std::string EncodeStats(void) const;
  std::string Encode(void) const;

protected:
  static const std::string basenames[TtIdLen];
  static const size_t      itemsizes[TtIdLen];

  void SetUpFromLog2(const TtId table, const ui log2count);
  void SetUpFromFrac(const TtId table, const double frac);

  ui CalcSlice(const usize offset) const {return ((ui) (offset >> sliceshift)) & slicemask;}

  void FastReset(void);

  TtId        ttid;
  std::string name;

  ui        log2entrycount;  // Log2 of entry count
  size_t    entrysize;       // Size of a single entry
  size_t    tablesize;       // Size of the entire storage region
  usize     entrycount;      // Number of entries
  usize     addrmask;        // Address generation mask
  ui        sliceshift;      // Slice shift bit count
  usize     slicemask;       // Slice index generation mask

  ui64      ctmatch;         // Count of probe matches
  ui64      ctprobe;         // Count of probe calls
  ui64      ctstash;         // Count of stash calls
  ui64      ctusage;         // Count of used entries
  SpinLock  ctspinlock;      // Count access spinlock

  ui        lsbhwm;          // LSB match high water mark
  SpinLock  lsbspinlock;     // Spinlock for high watermark access

  void      *storeptr;       // Pointer to allocated storage region

  SpinLock SLSliceVec[SLSliceLen];  // Regional access spinlocks
};
Note the four 64-bit counters (ctmatch, ctprobe, ctstash, and ctusage) along with their spinlock ctspinlock, needed because of multithreaded access.

When a table is created, its constructor zeroes these members. They stay that way unless the compile-time symbol EnableTranStats is set, in which case the counter increment routines are called where appropriate. The member method EncodeStats() renders the counters in text format when called.

These are handy numbers to have. But their generation does eat some time. How much time? An empirical answer follows.

The task used for the experiment is the calculation of perft(9) of the initial array, which is 2,439,530,234,167. This was done by running four hyperthreads on a dual-core 64-bit CPU, with a transposition table of 268,435,456 (2^28) entries with 128-bit signatures.

First, without instrumentation:

Code: Select all

[] pctran 9
TT: PETBase: ItemCount: 268,435,456 (2^28)   ItemSize: 24 B   TableSize: 6 GiB
Count: 2,439,530,234,167   Pt: 32:20.992   Wt: 8:27.289   U: 3.8262   4.80895 GHz   207.946 ps
Total: two trillion four hundred thirty-nine billion five hundred thirty million two hundred thirty-four thousand one hundred sixty-seven
Second, with instrumentation:

Code: Select all

[] pctran 9
TT: PETBase: ItemCount: 268,435,456 (2^28)   ItemSize: 24 B   TableSize: 6 GiB
TT: PETBase: items: 268,435,456   probe: 280,394,218   match: 169,624,952   stash: 111,793,594   usage: 99,677,863   load: 0.371329
Count: 2,439,530,234,167   Pt: 32:46.044   Wt: 8:33.844   U: 3.82615   4.74761 GHz   210.632 ps
Total: two trillion four hundred thirty-nine billion five hundred thirty million two hundred thirty-four thousand one hundred sixty-seven
The result is that the cost of instrumentation in the above is about an additional 1.29% of total processing time (processor time 32:46.044 vs. 32:20.992, i.e. 1966.044 s / 1940.992 s ≈ 1.0129).

So now you know.

bob
Posts: 20549
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: The cost of transposition table instrumentation

Post by bob » Mon Aug 24, 2015 12:32 am

sje wrote:The result is that the cost of instrumentation in the above is about an additional 1.29% of total processing time.
I don't spinlock on statistical counters, nor on history counters, nor on countermoves, etc. The probability of an interleaved update is very low, and the consequences are nil (who cares if such a counter is off by one or not). I'd bet most of that 1.29% is in the read-for-ownership cache traffic for acquiring the spin lock. Without the lock I bet you won't see any difference whatsoever.

sje
Posts: 4675
Joined: Mon Mar 13, 2006 6:43 pm

Re: The cost of transposition table instrumentation

Post by sje » Mon Aug 24, 2015 2:44 am

bob wrote:
sje wrote:The result is that the cost of instrumentation in the above is about an additional 1.29% of total processing time.
I don't spinlock on statistical counters, nor on history counters, nor on countermoves, etc. The probability of an interleaved update is very low, and the consequences are nil (who cares if such a counter is off by one or not). I'd bet most of that 1.29% is in the read-for-ownership cache traffic for acquiring the spin lock. Without the lock I bet you won't see any difference whatsoever.
The probability of a race is low, but it increases with the thread count and the frequency of access. I like to see accurate counters anyway. Also, how much is a 1.29% speed cost? Maybe two or three Elo points.

On some architectures, spinlock access time for an uncontested lock is very, very small. Note that spinlocks would not be necessary in the above if only 64 bit increments were guaranteed to be atomic. But they aren't, at least on the three different 32 bit CPU kinds I use as targets.

In an earlier version of Symbolic, there were no table counters. Instead, I had the table base class treat every entry with a zero signature as unused. At the end of a search, a table's usage was calculated by simply scanning all its entries and returning the ratio of the used count to the total count. That worked, but was rather slow for large tables.
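With a C++11 toolchain (an assumption; not every target compiler qualifies), the spinlock could be dropped entirely by making the counters atomics. A sketch with hypothetical names, not Symbolic's actual code:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical replacement for the spinlocked counters: with C++11,
// std::atomic<uint64_t> gives atomic increments even on 32-bit targets
// (the library falls back to an internal lock only when the hardware
// lacks 64-bit atomics; is_lock_free() reports which case applies).
struct TranCounters
{
  std::atomic<std::uint64_t> ctprobe{0};
  std::atomic<std::uint64_t> ctmatch{0};
  std::atomic<std::uint64_t> ctstash{0};
  std::atomic<std::uint64_t> ctusage{0};

  // Relaxed ordering suffices for statistics: only the increment itself
  // needs to be atomic, not any ordering with respect to other data.
  void IncrProbe(void) {ctprobe.fetch_add(1, std::memory_order_relaxed);}
  void IncrMatch(void) {ctmatch.fetch_add(1, std::memory_order_relaxed);}
  void IncrStash(void) {ctstash.fetch_add(1, std::memory_order_relaxed);}
  void IncrUsage(void) {ctusage.fetch_add(1, std::memory_order_relaxed);}
};
```

This keeps the counts exact while removing the lock acquire/release pair from every increment.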

bob
Posts: 20549
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: The cost of transposition table instrumentation

Post by bob » Mon Aug 24, 2015 2:50 am

sje wrote:
bob wrote:
sje wrote:The result is that the cost of instrumentation in the above is about an additional 1.29% of total processing time.
I don't spinlock on statistical counters, nor on history counters, nor on countermoves, etc. The probability of an interleaved update is very low, and the consequences are nil (who cares if such a counter is off by one or not). I'd bet most of that 1.29% is in the read-for-ownership cache traffic for acquiring the spin lock. Without the lock I bet you won't see any difference whatsoever.
The probability of race is low, but increases with the thread count and frequency of access. I like to see accurate counters anyway. Also, how much is a 1.29% speed cost? Maybe two or three Elo points.

On some architectures, spinlock access time for an uncontested lock is very, very small. Note that spinlocks would not be necessary in the above if only 64 bit increments were guaranteed to be atomic. But they aren't, at least on the three different 32 bit CPU kinds I use as targets.

In an earlier version of Symbolic, there were no table counters. Instead, I had the table base class treat every entry with a zero signature as being unused. At the end of a search, a table's usage would be calculated by simply scanning all its entries and return the ratio of the used count divided by the total count. That worked, but was rather slow for large tables.
Problem is, it is 1.29% today. What about on 32 or 64 cores? This is more exponential than linear. As far as atomicity goes, 64 bit machines provide it, but that doesn't prevent interleaved updates; it only means the entire 8 bytes is stored in one operation. Those kinds of counters are not going to be memory accesses anyway, they are going to be handled by inter-cache forwarding. And read-for-ownership is very expensive compared to the usual load/store...

sje
Posts: 4675
Joined: Mon Mar 13, 2006 6:43 pm

Re: The cost of transposition table instrumentation

Post by sje » Mon Aug 24, 2015 5:12 am

bob wrote:Problem is, it is 1.29% today. What about on 32 or 64 cores? This is more exponential than linear. As far as atomicity goes, 64 bit machines provide such, but that doesn't prevent interleaved update still. Only means the entire 8 bytes is stored in one operation. Those kinds of counters are not going to be memory access anyway, they are going to be handled by inter-cache forwarding. And read-for-ownership is very expensive compared to the usual load/store...
The additional cost of spinlock access for the counters can be reduced to zero by keeping a set of 2^N counter groups, one for each of the 2^N (N = 8 here) table slices, which already have their own spinlocks. All that is then needed is to sum these groups when printing the statistics report.
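A minimal sketch of that sliced-counter scheme (the names here are hypothetical, not Symbolic's actual code):

```cpp
#include <cstdint>

// One counter group per table slice, bumped while the slice spinlock is
// already held by Probe()/Stash(), so the counters add no extra locking.
const unsigned SliceLen = 256;  // 2^N slice count; N = 8 here

struct SliceCounters
{
  std::uint64_t ctprobe;
  std::uint64_t ctmatch;
  std::uint64_t ctstash;
  std::uint64_t ctusage;
};

SliceCounters slicecounters[SliceLen];  // static storage: zeroed at startup

// Caller already holds the spinlock for this slice, so a plain ++ is safe.
inline void IncrProbeSlice(const unsigned slice) {slicecounters[slice].ctprobe++;}

// Summed only when printing the statistics report.
std::uint64_t TotalProbes(void)
{
  std::uint64_t sum = 0;
  for (unsigned slice = 0; slice < SliceLen; slice++)
    sum += slicecounters[slice].ctprobe;
  return sum;
}
```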

Some routines for the perft() transposition table:

Code: Select all

PETBase::PETBase(const double frac): TranBase(TtIdPE, frac)
{
  DIPtr->LogCtor(name);
  storage = new PETItem[entrycount];
  storeptr = (void *) storage;
  Reset();
  DIPtr->LogMsg("  " + Encode());
}

PETBase::~PETBase(void)
{
  delete [] storage;
  DIPtr->LogDtor(name);
}

void PETBase::Reset(void)
{
#if (UseFastReset)
  FastReset();
#else
  ZOL64(index, entrycount)
    storage[index].Reset();
#endif
}

bool PETBase::Probe(const Hash& pathhash, NodeCount& count)
{
  const usize offset = (pathhash.GetQwrd0() & addrmask);
  const ui slice = CalcSlice(offset);
  const PETItemPtr probeptr0 = storage + offset;
  const PETItemPtr probeptr1 = storage + (offset ^ 1);
  bool found;

  IncrProbe();
  SLSliceVec[slice].Lock();
  if (probeptr0->hash == pathhash)
  {
    count = probeptr0->count;
    found = true;
  }
  else
  {
    if (EnableLsbReporter)
      LsbReporter(probeptr0->hash, pathhash);
    if (probeptr1->hash == pathhash)
    {
      count = probeptr1->count;
      found = true;
    }
    else
    {
      if (EnableLsbReporter)
        LsbReporter(probeptr1->hash, pathhash);
      count = 0;
      found = false;
    };
  };
  SLSliceVec[slice].Unlock();
  if (found)
    IncrMatch();
  return found;
}

void PETBase::Stash(const Hash& addrhash, const NodeCount count)
{
  const usize offset = (addrhash.GetQwrd0() & addrmask);
  const ui slice = CalcSlice(offset);
  const PETItemPtr probeptr0 = storage + offset;
  const PETItemPtr probeptr1 = storage + (offset ^ 1);
  PETItemPtr entryptr;

  IncrStash();
  SLSliceVec[slice].Lock();
  if (probeptr0->count < probeptr1->count)
    entryptr = probeptr0;
  else
    entryptr = probeptr1;
  if (EnableTranStats && entryptr->hash.IsReset())
    IncrUsage();
  entryptr->hash = addrhash;
  entryptr->count = count;
  SLSliceVec[slice].Unlock();
}

AlvaroBegue
Posts: 920
Joined: Tue Mar 09, 2010 2:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: The cost of transposition table instrumentation

Post by AlvaroBegue » Mon Aug 24, 2015 11:05 am

Not an expert in multi-threaded code, but... Can you store per-thread counters and add them just before you show them to the user? (Make sure the counters are separated enough that they live in separate cache lines, or what I am suggesting might not help at all.)

sje
Posts: 4675
Joined: Mon Mar 13, 2006 6:43 pm

Re: The cost of transposition table instrumentation

Post by sje » Mon Aug 24, 2015 11:33 am

AlvaroBegue wrote:Not an expert in multi-threaded code, but... Can you store per-thread counters and add them just before you show them to the user? (Make sure the counters are separated enough that they live in separate cache lines, or what I am suggesting might not help at all.)
Search for "C++ thread local storage" and you'll get an answer. And the answer depends on what version of C++ and what additional support is available (like Boost).

The underlying problem is that there is no direct linear mapping between a thread ID and an integer that's usable as an index to select among an array of objects. A mapping will always involve some set-up and an indirection for each access.
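That said, on a toolchain with C++11 support, the thread_local keyword has the compiler and runtime perform that mapping. A hypothetical sketch, not Symbolic code (simplified: a thread that exits would leave a dangling registry pointer):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// thread_local gives each thread its own counter block with no locking
// on the increment path; a registry lets the reporter sum the blocks.
struct ThreadStats
{
  std::uint64_t ctprobe = 0;
  ThreadStats(void);
};

std::mutex registrymutex;
std::vector<ThreadStats *> registry;

ThreadStats::ThreadStats(void)
{
  std::lock_guard<std::mutex> guard(registrymutex);
  registry.push_back(this);  // runs once per thread, at first use
}

thread_local ThreadStats mystats;  // one instance per thread

inline void IncrProbe(void) {mystats.ctprobe++;}  // no lock, no atomic

std::uint64_t TotalProbes(void)
{
  std::lock_guard<std::mutex> guard(registrymutex);
  std::uint64_t sum = 0;
  for (const ThreadStats *statsptr : registry)
    sum += statsptr->ctprobe;
  return sum;
}
```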

AlvaroBegue
Posts: 920
Joined: Tue Mar 09, 2010 2:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: The cost of transposition table instrumentation

Post by AlvaroBegue » Mon Aug 24, 2015 12:25 pm

sje wrote:
AlvaroBegue wrote:Not an expert in multi-threaded code, but... Can you store per-thread counters and add them just before you show them to the user? (Make sure the counters are separated enough that they live in separate cache lines, or what I am suggesting might not help at all.)
Search for "C++ thread local storage" and you'll get an answer. And the answer depends on what version of C++ and what additional support is available (like Boost).

The underlying problem is that there is no direct linear mapping between a thread ID and an integer that's usable as an index to select among an array of objects. A mapping will always involve some set-up and an indirection for each access.
I don't see why that's the case. You can easily access the transposition table through an object that is thread specific and collect your statistics there.

Joost Buijs
Posts: 986
Joined: Thu Jul 16, 2009 8:47 am
Location: Almere, The Netherlands

Re: The cost of transposition table instrumentation

Post by Joost Buijs » Mon Aug 24, 2015 12:44 pm

AlvaroBegue wrote:Not an expert in multi-threaded code, but... Can you store per-thread counters and add them just before you show them to the user? (Make sure the counters are separated enough that they live in separate cache lines, or what I am suggesting might not help at all.)
This is exactly what I'm doing in my engine, for each thread I have a struct which contains everything that belongs to that thread, including node counters etc.
When I make sure with an alignment pragma that these structs are separated by at least one cache line, I don't see any performance degradation at all.
When I want to show the counters to the user I just add the separate thread counters together, this works beautifully.
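A minimal sketch of such a layout (the names and the 64-byte line size are assumptions; check the target CPU):

```cpp
#include <cstddef>
#include <cstdint>

// alignas pads each thread's struct to a whole cache line, so no two
// threads' counters share one and no false-sharing traffic is generated.
const std::size_t CacheLineSize = 64;  // typical on x86; an assumption
const unsigned MaxThreads = 64;

struct alignas(CacheLineSize) PerThreadCounters
{
  std::uint64_t nodes;
  std::uint64_t ctprobe;
  std::uint64_t ctmatch;
};

PerThreadCounters threadcounters[MaxThreads];  // zeroed at startup

// Each thread updates only threadcounters[its own index] during the
// search; the totals are summed when shown to the user.
std::uint64_t TotalNodes(void)
{
  std::uint64_t sum = 0;
  for (unsigned tid = 0; tid < MaxThreads; tid++)
    sum += threadcounters[tid].nodes;
  return sum;
}
```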

mar
Posts: 2001
Joined: Fri Nov 26, 2010 1:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: The cost of transposition table instrumentation

Post by mar » Mon Aug 24, 2015 3:21 pm

AlvaroBegue wrote:I don't see why that's the case. You can easily access the transposition table through an object that is thread specific and collect your statistics there.
Yes, absolutely. Note, though, that TLS is a scarce resource and gets allocated for EACH thread within the process.
What you described is a common way to do it (per thread stats and then collect them once work is done - you don't need TLS for this).
EDIT: quoting MS docs:
"The constant TLS_MINIMUM_AVAILABLE defines the minimum number of TLS indexes available in each process. This minimum is guaranteed to be at least 64 for all systems. The maximum number of indexes per process is 1,088."
