But I think we can do better than this, using pext (for compression) and pdep (for decompression).
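For anyone who hasn't played with these two BMI2 instructions: pext gathers the bits selected by a mask down into the low end of the result (packing), and pdep scatters low-order bits back out to the mask positions (unpacking). A toy illustration in C++, assuming a BMI2-capable CPU and the <immintrin.h> intrinsics:
Code:
// Not part of the scheme itself, just showing what the two instructions do.
// Needs a BMI2 CPU (Haswell+/Zen); compile with e.g. -mbmi2 (gcc/clang).
#include <cassert>
#include <cstdint>
#include <immintrin.h>

int main() {
    const uint64_t mask = 0b10110100;  // think "occupied squares"
    const uint64_t data = 0b10100000;  // a subset of mask, e.g. "pawns"

    // pext: gather the bits of data at the positions selected by mask
    // into the low-order bits -> only popcount(mask) bits are meaningful.
    const uint64_t packed = _pext_u64(data, mask);
    assert(packed == 0b1100);

    // pdep: the inverse, scatter the low-order bits back to the mask positions.
    const uint64_t restored = _pdep_u64(packed, mask);
    assert(restored == data);
    return 0;
}
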
Compression algorithm
Code:
itemname                    | bits            | definition
----------------------------|-----------------|-----------
occupied                    | 64              |
pext(pawns, occupied)       | cnt(occupied)   |
pext(knights, remaining)    | cnt(remaining)  | remaining = occupied & ~pawns
pext(bishops, remaining)    | cnt(remaining)  | remaining &= ~knights
pext(rooks, remaining)      | cnt(remaining)  | remaining &= ~bishops
pext(queens, remaining)     | cnt(remaining)  | remaining &= ~rooks
kings = remaining           | 0               | remaining &= ~queens
turn                        | 1               |
ep                          | 1               | 1 if an en passant square exists
pext(epSquare, candidates)  | cnt(candidates) | candidates = ((white & pawns & rank4) >> 8) | ((black & pawns & rank5) << 8)
pext(castleRooks, rooks)    | cnt(rooks)      | castleRooks = rooks with associated castling rights (any color)
rule50                      | 7               |
fullMove                    | 10              | useless: conveys no position information
It's difficult to put tight upper bounds on these cnt() terms, and any such bounds will tend to be way higher than the averages seen on real samples. Only implementing it and compressing large random FEN collections will tell the practical compression ratio.
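As a rough starting point for that experiment, here is what the compression side could look like in C++. This is only a sketch under my own assumptions: Position, BitWriter and the rank masks are made-up helpers, the board mapping is a1 = bit 0, and I also pack a white-pieces bitboard right after occupied, which the table doesn't list but which decompression needs to tell the two colors apart.
Code:
// Sketch only: Position, BitWriter and the rank masks are hypothetical
// helpers, not from any existing library. Compile with -mbmi2 (gcc/clang).
#include <cstdint>
#include <initializer_list>
#include <vector>
#include <immintrin.h>

struct Position {                 // assumed bitboard layout, a1 = bit 0
    uint64_t white, black;        // pieces by colour
    uint64_t pawns, knights, bishops, rooks, queens, kings;  // by type, both colours
    uint64_t epSquare;            // 0, or the single en-passant target square
    uint64_t castleRooks;         // rooks that still carry a castling right
    int turn;                     // 0 = white, 1 = black
    int rule50, fullMove;
};

struct BitWriter {                // appends the low n bits of v, LSB first
    std::vector<uint8_t> bytes;
    int used = 0;                 // bits used in the last byte
    void put(uint64_t v, int n) {
        for (int i = 0; i < n; ++i) {
            if (used == 0) bytes.push_back(0);
            if ((v >> i) & 1) bytes.back() |= uint8_t(1u << used);
            used = (used + 1) & 7;
        }
    }
};

constexpr uint64_t RANK4 = 0x00000000FF000000ULL;   // a4..h4
constexpr uint64_t RANK5 = 0x000000FF00000000ULL;   // a5..h5

std::vector<uint8_t> compress(const Position& p) {
    BitWriter out;
    const uint64_t occupied = p.white | p.black;

    out.put(occupied, 64);
    // Colour stream: not in the table above, added so the decoder can
    // attribute each occupied square to white or black.
    out.put(_pext_u64(p.white, occupied), __builtin_popcountll(occupied));

    // Piece types, each packed against the squares not yet accounted for.
    out.put(_pext_u64(p.pawns, occupied), __builtin_popcountll(occupied));
    uint64_t remaining = occupied & ~p.pawns;
    for (uint64_t bb : {p.knights, p.bishops, p.rooks, p.queens}) {
        out.put(_pext_u64(bb, remaining), __builtin_popcountll(remaining));
        remaining &= ~bb;
    }
    // kings == remaining at this point: 0 bits needed.

    out.put(p.turn, 1);

    // En passant: 1 flag bit, then (only if set) the square packed
    // against the candidate squares derivable from the board.
    const uint64_t candidates = ((p.white & p.pawns & RANK4) >> 8)
                              | ((p.black & p.pawns & RANK5) << 8);
    out.put(p.epSquare != 0, 1);
    if (p.epSquare)
        out.put(_pext_u64(p.epSquare, candidates), __builtin_popcountll(candidates));

    // Castling rights, as a subset of the rooks on the board.
    out.put(_pext_u64(p.castleRooks, p.rooks), __builtin_popcountll(p.rooks));

    out.put(p.rule50, 7);
    out.put(p.fullMove, 10);
    return out.bytes;
}
Nothing here is tuned; it just walks the table row by row, which should be enough to measure those cnt() terms on real collections.
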
Decompression algorithm
Left as an exercise for the reader.
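For anyone skipping the exercise: the inverse is the same walk with pdep, reading each stream against a mask the decoder can rebuild from what it has already decoded. Same assumptions and made-up helpers as the compression sketch above.
Code:
// The pdep side, mirroring the compression sketch (same assumed layout,
// a1 = bit 0; BitReader is again a hypothetical helper). -mbmi2 (gcc/clang).
#include <cstdint>
#include <initializer_list>
#include <vector>
#include <immintrin.h>

struct Position {  // same hypothetical layout as in the compression sketch
    uint64_t white, black, pawns, knights, bishops, rooks, queens, kings;
    uint64_t epSquare, castleRooks;
    int turn, rule50, fullMove;
};

struct BitReader {           // reads back n bits, LSB first
    const std::vector<uint8_t>& bytes;
    size_t pos = 0;          // current bit position
    uint64_t get(int n) {
        uint64_t v = 0;
        for (int i = 0; i < n; ++i, ++pos)
            v |= uint64_t((bytes[pos >> 3] >> (pos & 7)) & 1) << i;
        return v;
    }
};

constexpr uint64_t RANK4 = 0x00000000FF000000ULL, RANK5 = 0x000000FF00000000ULL;

Position decompress(const std::vector<uint8_t>& data) {
    BitReader in{data};
    Position p{};

    const uint64_t occupied = in.get(64);
    p.white = _pdep_u64(in.get(__builtin_popcountll(occupied)), occupied);
    p.black = occupied & ~p.white;

    p.pawns = _pdep_u64(in.get(__builtin_popcountll(occupied)), occupied);
    uint64_t remaining = occupied & ~p.pawns;
    for (uint64_t* bb : {&p.knights, &p.bishops, &p.rooks, &p.queens}) {
        *bb = _pdep_u64(in.get(__builtin_popcountll(remaining)), remaining);
        remaining &= ~*bb;
    }
    p.kings = remaining;     // whatever is left over

    p.turn = int(in.get(1));

    const uint64_t candidates = ((p.white & p.pawns & RANK4) >> 8)
                              | ((p.black & p.pawns & RANK5) << 8);
    if (in.get(1))
        p.epSquare = _pdep_u64(in.get(__builtin_popcountll(candidates)), candidates);

    p.castleRooks = _pdep_u64(in.get(__builtin_popcountll(p.rooks)), p.rooks);

    p.rule50 = int(in.get(7));
    p.fullMove = int(in.get(10));
    return p;
}
The thing that makes this decodable at all is that every mask fed to pdep (occupied, remaining, candidates, rooks) can be rebuilt by the decoder from bits it has already read.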

Is this useful? Absolutely not. But it's fun!