A transposition table (TT) basically acts as a software-managed cache that saves us from repeating the much slower search. In particular, with the common implementation as an array of N-entry buckets, the TT mimics an N-way set-associative cache. The average cost of handling a position is then (see any lecture notes on caches)

Code: Select all

`T_avg = T_lookup + (1 - hit_rate) * T_search`

Increasing the overall TT size reduces the number of capacity misses and thereby increases the hit rate, but will eventually also increase T_lookup (more TLB misses). CPU caches deal with this through a 2-level hierarchy: an L1 cache which is small and has a very fast T_lookup (because of its relatively low set associativity), backed by a larger but slower L2 cache with higher set associativity (eg 4-way or 8-way) to reduce conflict misses.

So has anyone tried to do something similar with the TT? Eg a direct-mapped TT1 (with always-replace) of a few MB, backed by a much larger secondary TT2 in the usual 4-entry-bucket implementation. The TT1 might eg make qsearch TT lookups more palatable.

Obviously, the various cost components are depth dependent, so on a TT1 miss one could make the TT2 lookup conditional on the remaining depth (eg skip it during qsearch).
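A minimal sketch of such a two-level probe could look like this (the entry layout, table sizes, depth threshold and replacement policies are my assumptions, kept small just to make the idea concrete):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: entry layout, sizes and replacement policies are
 * assumptions, not a tested design. Sizes kept small for illustration. */
typedef struct {
    uint64_t key;       /* full hash key for verification */
    int16_t  score;
    int8_t   depth;
    uint8_t  flags;
} Entry;

#define TT1_BITS 16                 /* direct-mapped "L1", ~1 MB */
#define TT1_SIZE (1u << TT1_BITS)
#define TT2_BITS 18                 /* larger "L2" of 4-entry buckets */
#define TT2_SIZE (1u << TT2_BITS)

static Entry tt1[TT1_SIZE];         /* always-replace */
static Entry tt2[TT2_SIZE][4];      /* 4-way set-associative */

/* Probe TT1 first; fall back to the slower TT2 probe only when enough
 * depth remains to justify it, so qsearch (depth <= 0) stays TT1-only. */
Entry *tt_probe(uint64_t key, int remaining_depth) {
    Entry *e1 = &tt1[key & (TT1_SIZE - 1)];
    if (e1->key == key)
        return e1;
    if (remaining_depth <= 0)       /* qsearch: skip the TT2 probe */
        return NULL;
    Entry *bucket = tt2[key & (TT2_SIZE - 1)];
    for (int i = 0; i < 4; i++)
        if (bucket[i].key == key)
            return &bucket[i];
    return NULL;
}

void tt_store(uint64_t key, int16_t score, int8_t depth, uint8_t flags) {
    Entry e = { key, score, depth, flags };
    tt1[key & (TT1_SIZE - 1)] = e;  /* TT1: always replace */
    Entry *bucket = tt2[key & (TT2_SIZE - 1)];
    Entry *victim = &bucket[0];     /* TT2: evict the shallowest entry */
    for (int i = 1; i < 4; i++)
        if (bucket[i].depth < victim->depth)
            victim = &bucket[i];
    if (depth >= victim->depth)
        *victim = e;
}
```

Storing into both levels keeps the TT1 purely as a fast front cache; an entry evicted from the TT1 by an always-replace collision can still be recovered from the TT2 on a deep-enough probe.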