If we assume that fixed-size arrays containing simple numbers and indexed by numbers have an efficient implementation using 8 bytes per element, you could simply define the hash table as one big array, with the even elements containing the signatures and the following odd elements containing everything else.
You would then have to pack and unpack that odd element yourself, but only after you have established you have a hit. With bit-masking operations you can only access the lowest 32 bits of it, but this would be enough for move, depth and flags. The score you can retrieve by dividing the original FP number by 2^24; this would leave the other bits as a small fraction, but who cares?
Code: Select all
if (brd_HashTable[index] == hashKey) {              // signature matches: hash hit
    var flags = brd_HashTable[index+1] & -1;        // the bitwise AND truncates the FP number to its low 32 bits
    var move  = flags & 0x7FFF;                     // bits 0-14
    var depth = (flags >> 15) & 0x7F;               // bits 15-21
    var score = brd_HashTable[index+1] / 0x1000000; // bits 24 and up; the low bits remain as a small fraction
}
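For completeness, the store side could look something like the sketch below. The table name, the `hashStore` helper and its argument list are my own illustration, assuming the same field layout as the probe above and a non-negative score (a negative score would also work for move/depth extraction, but the division would then come out one below the stored value).

```javascript
// Hypothetical store side of the scheme above; names and layout are
// illustrative assumptions, matching the probe code in the post.
var brd_HashTable = new Float64Array(2 * 1024);    // even: signature, odd: packed data

function hashStore(index, hashKey, move, depth, score) {   // index must be even
    brd_HashTable[index] = hashKey;                        // signature in the even element
    // move in bits 0-14, depth in bits 15-21, score in bits 24 and up
    brd_HashTable[index + 1] = (move & 0x7FFF) | ((depth & 0x7F) << 15);
    brd_HashTable[index + 1] += score * 0x1000000;
}
```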
From the description of V8 it seems indeed to work differently. Variables are by default 31-bit integers, not 64-bit floating point. Only if they are something different (e.g. because they would overflow 31 bits, or because they are strings or objects) is an object created to hold them. Presumably the original int then acts as a pointer to that object, and perhaps indicates the 'hidden class' to which the object belongs. (I assume that the hidden class for objects that just contain a single 64-bit FP number is somehow implied, rather than that it needs to be explicitly indicated in some other storage location, because it is so common. This is at least how I would do it.)
That would still mean that a general 64-bit number would need 12 bytes: a 4-byte index into the table of actual 64-bit values, plus the 8-byte value. The consequence is that we would be far better off using three 31-bit array elements of a V8 array (12 bytes) than using two 64-bit numbers there (24 bytes). On a conventional interpreter this would be 24 bytes vs 16 bytes, however.
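The three-element variant could be sketched like this. The concrete split (one element for a 30-bit signature, one for packed move/depth/flags, the score on its own) and all names are my own assumptions, chosen so that every stored value stays within V8's small-integer range and no boxed doubles get allocated:

```javascript
// Illustrative three-elements-per-entry layout; field widths assumed:
//   [3i]   30-bit signature
//   [3i+1] move (bits 0-14) | depth (bits 15-21) | bound flags (bits 22-23)
//   [3i+2] score, stored directly (easily fits a 31-bit integer)
var smiTable = [];

function smiStore(i, sig, move, depth, flags, score) {
    smiTable[3*i]     = sig;
    smiTable[3*i + 1] = move | (depth << 15) | (flags << 22);
    smiTable[3*i + 2] = score;
}

function smiProbe(i, sig) {
    if (smiTable[3*i] !== sig) return null;      // signature mismatch: miss
    var d = smiTable[3*i + 1];
    return { move:  d & 0x7FFF,
             depth: (d >> 15) & 0x7F,
             flags: (d >> 22) & 0x3,
             score: smiTable[3*i + 2] };
}
```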