Houdini wrote: code that works well in VS doesn't seem to work in gcc, for reasons I still don't quite understand
GCC has become much pickier about exploiting undefined behaviour for optimisation. That means it assumes the code never exhibits undefined behaviour; if it does, all bets are off as to what the compiled binary will do.
The usual suspects here are e.g. null pointer checks placed after the dereference. Even if the dereference happens only in one rare code path, GCC may just eliminate the whole null pointer check.
Another frequent source is pointer aliasing. GCC has been assuming no pointer aliasing by default since version 4.9 when optimising with -O2 or higher. If the code crashes but stops doing so when using -fno-strict-aliasing, then you have an aliasing issue. I recommend testing with strict pointer aliasing enabled, but unless benchmarks prove that there is a significant speed gain, I use -fno-strict-aliasing for release builds.
Another thing: strictly speaking, signed integer overflow is undefined behaviour in C, because historically the compiler was free to choose any representation: two's complement, one's complement, or sign plus absolute value. These days every machine uses two's complement, but overflow is still undefined by the standard. Testing a build with -fno-strict-overflow (or -fwrapv) may be useful.
Sometimes, GCC can catch such things at compile time when using:
-Wall -Wmaybe-uninitialized -Wstrict-aliasing -Wlogical-op -Wno-cast-align
Last, GCC offers a bunch of sanitiser options:
-fsanitize=address
-fsanitize=bounds
-fsanitize=object-size
-fsanitize=alignment
-fsanitize=null
-fsanitize=undefined
-fsanitize=shift
-fsanitize=signed-integer-overflow
-fsanitize=integer-divide-by-zero
Use these flags for both compiling and linking, then run the binary. Error reports go to stderr. Works only under Linux, not with MinGW or Cygwin.
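A typical build line looks like this (file name demo.c is just a placeholder; -g makes the stack traces in the reports readable):

```shell
# Compile and link with sanitizers enabled; several can be combined.
gcc -g -O1 -fsanitize=address,undefined demo.c -o demo
./demo        # any detected error is reported with a stack trace
```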
Oh, and from my experience, the MS C runtime can be forgiving about things like closing an open file twice, but I remember crashes when doing that with GCC and glibc. The same may apply to freeing memory twice.