My experience with Linux/GCC

Discussion of chess software programming and technical issues.

hgm
Posts: 23477
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: My experience with Linux/GCC

Post by hgm » Wed Mar 23, 2011 6:20 pm

Indeed, that was the idea. Just write

Code:

s.i = s1.i + s2.i;
The sign bit of one would contaminate the other, of course, and without making an assumption about the endianness of the hardware you don't know which one is contaminated. But perhaps you don't care. I usually store some flag bits in the low bits of score tables (like null-move enable flags in the material table). They provide a bit of noise to the score, but who cares?
UncombedCoconut wrote:I once tried replacing Stockfish's Score type with a struct that used two 16-bit fields, and got identical benchmark results. The result surprised me at the time. (I had a pre-conceived notion that the compiler would generate inefficient code for loading and storing bitfields.)
A good optimizer should recognize when your bit-field is an addressable unit: (x>>8 & 0xFF) is just the second-lowest byte, and can be fetched with a MOVB or MOVSX instruction. So the point is not to make it more efficient, but to make it more portable, and perhaps even optimally efficient without such a high optimization level. When you define 4 byte fields within the integer, even the most stupid compiler would use MOVB when you access one of them, and interpret the sign bit properly.

Daniel Shawul
Posts: 3724
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia

Re: My experience with Linux/GCC

Post by Daniel Shawul » Wed Mar 23, 2011 6:33 pm

I use exactly that, except the union is anonymous, so that I don't have to write
move.s.mg_score... Some Linux compilers complain about it though.
Anyway, how much do you gain by doing bit twiddling?

UncombedCoconut
Posts: 319
Joined: Fri Dec 18, 2009 10:40 am
Location: Naperville, IL

Re: My experience with Linux/GCC

Post by UncombedCoconut » Wed Mar 23, 2011 6:37 pm

rvida wrote:GCC with -O2 setting optimizes very aggressively. In most cases where it makes a potentially 'unsafe' assumption it gives at least a warning. I had to rewrite some code pieces due to pointer-aliasing but that was pretty straightforward.
My favorite example of GCC's aggression involves enum types: it would assume their range was limited by the declared values. Combined with the "value range propagation" optimization and some technically incorrect code in older Stockfishes, this led to it happily optimizing eg_value(score) to zero! You can imagine how well those builds played. ;)

SF was clearly not the only program affected, as the GCC devs noted in their changelog for version 4.6:
G++ no longer optimizes using the assumption that a value of enumeration type will fall within the range specified by the standard, since that assumption is easily violated with a conversion from integer type (c++/43680). The old behavior can be restored with -fstrict-enums.

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 7:17 pm

Re: My experience with Linux/GCC

Post by mcostalba » Wed Mar 23, 2011 8:04 pm

hgm wrote: They provide a bit of noise to the score, but who cares?
I think we care ;-)

We really want our poor man's functionality checksum, the node count on a fixed set of positions, to be the same for all binaries built from the same sources, be they 32- or 64-bit, Intel or PowerPC (read: big-endian Mac).

This is very important both for debugging and for safe development.

hgm
Posts: 23477
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: My experience with Linux/GCC

Post by hgm » Wed Mar 23, 2011 8:08 pm

Well, surely the endianness must be known at compile time. You could make an #ifdef BIGENDIAN which declares the two short fields in the reverse order.

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 7:17 pm

Re: My experience with Linux/GCC

Post by mcostalba » Wed Mar 23, 2011 8:09 pm

UncombedCoconut wrote: SF was clearly not the only program affected, as the GCC devs noted in their changelog for version 4.6:
Perhaps a bit off-topic, but since I see you have the new 4.6, I would like to ask whether you see any warnings when compiling SF 2.0.1.

Thanks
Marco

Evert
Posts: 2923
Joined: Fri Jan 21, 2011 11:42 pm
Location: NL

Re: My experience with Linux/GCC

Post by Evert » Wed Mar 23, 2011 8:11 pm

mcostalba wrote: We really want our poor man's functionality checksum, the node count on a fixed set of positions, to be the same for all binaries built from the same sources, be they 32- or 64-bit, Intel or PowerPC (read: big-endian Mac).

This is very important both for debugging and for safe development.
None of that should be affected by using a union to wrap the two 16-bit values in a 32-bit integer though.

mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 7:17 pm

Re: My experience with Linux/GCC

Post by mcostalba » Wed Mar 23, 2011 8:14 pm

hgm wrote:Well, surely the endianness must be known at compile time. You could make an #ifdef BIGENDIAN which declares the two short fields in the reverse order.
The problem is not only the #ifdef BIGENDIAN, well, yes, that is a bit of a problem too ;-) but the biggest part is modifying the Makefile to handle the BIGENDIAN flag correctly across all cases and compilers, and that is not so trivial! Apart from being extremely ugly and error prone.

P.S.: Answering also the next post: the problem is that you don't know in advance whether the sign bit ends up in the mg_value or the eg_value part, so the result of an evaluation could differ between big- and little-endian builds.

hgm
Posts: 23477
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: My experience with Linux/GCC

Post by hgm » Wed Mar 23, 2011 8:36 pm

You mean there is no standard macro for this? :shock: That sure would be a black mark on the compiler writers. What could be more natural than wanting to know the endianness of the machine? I never really tried any other compiler than gcc for x86, but I always assumed that symbols like WIN32 would be universally defined.

Btw, my engines use packed integers too. E.g. in the PST of HaQiKi D the 16 upper bits are the true PST values, and the two lowest bytes are attack points on the white and black Palace, respectively. But to prevent problems with overflow of the sign bit, I use excess encoding for the low bytes, like Bob suggests.

wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 3:03 pm
Location: British Columbia, Canada

Re: My experience with Linux/GCC

Post by wgarvin » Wed Mar 23, 2011 8:49 pm

hgm wrote:You mean there is no standard macro for this? :shock: That sure would be a black mark on the compiler writers. What could be more natural than wanting to know the endianness of the machine? I never really tried any other compiler than gcc for x86, but I always assumed that symbols like WIN32 would be universally defined.

Btw, my engines use packed integers too. E.g. in the PST of HaQiKi D the 16 upper bits are the true PST values, and the two lowest bytes are attack points on the white and black Palace, respectively. But to prevent problems with overflow of the sign bit, I use excess encoding for the low bytes, like Bob suggests.
There is no single standard macro, but there are various pre-defined macros that can be used to detect or infer the compiler and the endianness.

Here's a header file (posh.h) that can detect a lot of different compilers/platforms.
