kbhearn wrote:
Code: Select all
inline Value mg_value(Score s) {
    int16_t rv;
    reinterpret_cast<uint16_t&>(rv) = s; // automatically truncated in conversion to uint16_t
    return Value(rv); // darn enum
}
I'm afraid any solution using reinterpret_cast will necessarily rely on implementation-defined behavior:
Unlike static_cast, but like const_cast, the reinterpret_cast expression does not compile to any CPU instructions. It is purely a compiler directive which instructs the compiler to treat the sequence of bits (object representation) of expression as if it had the type new_type.
So it does what the union trick is supposed to do. On a weird machine where int16_t is stored big-endian and uint16_t is stored little-endian, this mg_value() function will give surprising results.
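For what it's worth, the only fully portable way to reinterpret those bits is to copy them, e.g. with std::memcpy (or std::bit_cast since C++20); compilers turn the copy into a plain load. A sketch, with the Score/Value enums stubbed out as plain int-backed stand-ins (an assumption, not the engine's actual definitions):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Stand-ins for the engine's types (assumption: plain int-backed enums).
enum Value : int { VALUE_ZERO = 0 };
enum Score : int { SCORE_ZERO = 0 };

inline Value mg_value(Score s) {
    uint16_t lo = uint16_t(unsigned(s)); // low 16 bits of the packed score
    int16_t rv;
    std::memcpy(&rv, &lo, sizeof rv);    // well-defined bit reinterpretation
    return Value(rv);
}
```

Since int16_t is required to be two's complement, the memcpy version has no endianness surprise: both halves of the copy refer to the same object representation.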
I'm now wondering whether your sign-bit approach does not suffer from the same problem.
Code: Select all
inline Value mg_value(Score s) {
    static const uint32_t mask = 0xFFFFU;
    static const int sign = 0x8000;
    return Value(((int)(s & mask) ^ sign) - sign);
}
I guess it does work correctly. Since s and mask are both uint32_t, s & mask is certain to give the expected result. The result is then cast to int, which is well defined because the result value is in the range of int. xor-ing with sign works fine, since sign is an int. Subtracting sign is again OK.
So the manual sign extension approach seems fully correct. On the weird machine I mentioned, there may be changes in byte ordering when going from uint32_t to int, but that does not pose any problems. The casts will then compile to byte swap instructions.
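The identity can be checked in isolation: for any x whose value fits in 16 bits, ((x ^ 0x8000) - 0x8000) equals the value those bits represent as a signed 16-bit integer. A minimal standalone sketch:

```cpp
#include <cassert>
#include <cstdint>

// Sign-extend the low 16 bits of x using only well-defined int arithmetic.
inline int sign_extend16(uint32_t x) {
    int v = int(x & 0xFFFFu); // in [0, 0xFFFF], always representable in int
    return (v ^ 0x8000) - 0x8000;
}
```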
Your approach subtracts 0x8000 twice in case (s & mask) >= 0x8000, then casts. If (s & mask) < 0x8000, it adds 0x8000 and subtracts it again, then casts. So it is equivalent to:
Code: Select all
inline Value mg_value(Score score) {
    uint16_t u = score;
    return Value(u < 0x8000 ? u : u - 0x10000);
}
So it is not even necessary to cast to int16_t.
Code: Select all
inline Value eg_value(Score score) {
    uint16_t u = (score + 0x8000) >> 16;
    return Value(u < 0x8000 ? u : u - 0x10000);
}
Another attempt:
Code: Select all
inline Value mg_value(Score s) {
    int v = (uint16_t)s;
    return Value(v < 0x8000 ? v : v - 0x10000);
}
or
Code: Select all
inline Value mg_value(Score s) {
    int v = s & 0xffff;
    return Value(v < 0x8000 ? v : v - 0x10000);
}
This way we don't even need uint16_t / int16_t to be defined, just as in your original solution.
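Putting it together, the final variants can be exercised with a quick round trip. The make_score helper and the eg-high/mg-low layout are again my assumption:

```cpp
#include <cassert>
#include <cstdint>

enum Value : int { VALUE_ZERO = 0 };
enum Score : int { SCORE_ZERO = 0 };

// Assumed packing: eg in the high 16 bits, mg in the low 16 bits.
inline Score make_score(int mg, int eg) {
    return Score(eg * 0x10000 + mg);
}

inline Value mg_value(Score s) {
    int v = s & 0xffff; // low half, in [0, 0xFFFF]
    return Value(v < 0x8000 ? v : v - 0x10000);
}

inline Value eg_value(Score s) {
    int v = unsigned(int(s) + 0x8000) >> 16; // high half, borrow corrected
    return Value(v < 0x8000 ? v : v - 0x10000);
}
```

Every (mg, eg) pair in the signed 16-bit range should survive the pack/unpack round trip, including the extreme corners.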