For illustration, consider the KNN vs. KN database, where White has two knights and Black has one. Positions are indexed by a hash table: first by the position of the black knight, then the two white knights, and then the white king. Finally, for any position of the black king that results in a win or loss, the table stores the position of the black king, whether the position wins or loses, and the **ideal next move** (not the number of moves to mate). There are at most 16 squares for the black king, assuming we restrict it to a predetermined quadrant, so its position takes 4 bits; the win/loss flag takes 1 bit; and since no position here permits more than 40 possible moves (and only rare positions in any endgame allow more than 64), the move index fits in 6 bits. So the terminating node of this hash table holds only 4 + 1 + 6 = 11 bits of data.
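To make the encoding concrete, here is a minimal sketch of packing one terminating-node record into 11 bits. The exact bit layout (king square in the low bits, then the win/loss flag, then the move index) is my own illustrative assumption, not a format anyone has specified:

```python
def pack_record(bk_square: int, wins: bool, move_index: int) -> int:
    """Pack one terminating-node record into 11 bits.

    Assumed layout (illustrative only):
      bits 0-3  : black-king square within its quadrant (0-15)
      bit  4    : 1 = win, 0 = loss
      bits 5-10 : index of the ideal next move in the move list (0-63)
    """
    assert 0 <= bk_square < 16
    assert 0 <= move_index < 64
    return bk_square | (int(wins) << 4) | (move_index << 5)


def unpack_record(rec: int) -> tuple:
    """Inverse of pack_record: recover (square, wins, move_index)."""
    return rec & 0xF, bool((rec >> 4) & 1), rec >> 5
```

A record built this way always fits in 11 bits (i.e. is below 2048), so records can be bit-packed back to back in the leaf data.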

Based on a past discussion in this forum, there are roughly 40,000 win/loss positions with this piece set, so the terminating-leaf data in this table requires about 440,000 bits = 55 KB. The hash table adds some memory overhead, but I think it can be kept small (and I suspect most of the sub-tables can be eliminated simply because they are empty). The equivalent Nalimov tablebase requires about 160 KB. The trade-off is that you don't know in advance how many moves the mate will require, but when that is needed, it is quick to follow the stored moves through lookups in the subsequent databases and count the moves to checkmate, so all of the information in a Nalimov tablebase can quickly be recovered from this smaller file.
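The size arithmetic checks out; the 40,000 figure is the forum estimate quoted above, and the 160 KB Nalimov figure is taken on faith for comparison:

```python
positions = 40_000        # approximate win/loss positions (forum estimate)
bits_per_record = 11      # 4 (king square) + 1 (win/loss) + 6 (move index)

total_bits = positions * bits_per_record
total_bytes = total_bits // 8

print(total_bits)         # 440000 bits
print(total_bytes)        # 55000 bytes, i.e. 55 KB vs ~160 KB for Nalimov
```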

How does this strike you all as a middle ground between bitbases and tablebases?