These are only samples; I use a single piece of code for both colors, but with a lot of uses like var[color].......
I am facing the choice of which kind of code to write for all my eval functions (and maybe others as well). It looks to me that the second has the advantage of fewer code lines but more processor time, doesn't it? Do you have any ideas or advice on which model to use and why?
Thanks.
The disadvantage of separate code is bloat in the generated machine code; the advantage of separate code is the possibility of avoiding if (color==WHITE) completely.
Both of your variants are similar and have no real advantage at all -- you duplicate both the color test and the color-specific code. The test if (color==WHITE) is hidden in your evalSquare(color, sq) and in many other places.
enum { White = 0, Black = 1 };
//...
int someValue[2][BOARD_SIZE];
//...
inline int someEvalFunction(int color, int sq) {
    //...
    return someValue[color][sq];
}
This way I avoid both the "if (color == White)" code and the duplication of large pieces of code. The extra cost is the extra dimension of the arrays.
This was an early, major design decision for my engine. I know there may be other important reasons to use different values for the constants White and Black, which may have other advantages.
These are only samples; I use a single piece of code for both colors, but with a lot of uses like var[color].......
I am facing the choice of which kind of code to write for all my eval functions (and maybe others as well). It looks to me that the second has the advantage of fewer code lines but more processor time, doesn't it? Do you have any ideas or advice on which model to use and why?
Thanks.
I have done it both ways. The speed difference is not very significant, but for me the second approach (less code) was actually a bit faster in Crafty. I made this conversion a couple of months back, primarily to simplify development and debugging. Having the same code duplicated (with different constants) for black and white leaves lots of places for errors to creep in, and I thought it was about time to get rid of that. You will always have to have sets of constants for black/white since the squares are different, but xx[color] makes that pretty clean...
This code will be *exactly* as fast as the two duplicated functions where you manually replaced color with W or B. Your function evalSquare must be declared inline. Duplicating code is so old school.
Kempelen wrote:
This code will be *exactly* as fast as the two duplicated functions where you manually replaced color with W or B. Your function evalSquare must be declared inline. Duplicating code is so old school.
HJ.
What I was referring to is not whether one function is faster than two, but the code with var[x] versus var_for_color. My code was not complete in the first post. Let's look at it again:
if (color == white) {
    evalWhite += some_code ..... using the black pieces
    evalWhite += some_code ..... using the white pieces
    // 17 more operations of this kind
} else {
    evalBlack += some_code ..... using the white pieces
    evalBlack += some_code ..... using the black pieces
    // 17 more operations of this kind
}
eval[color] += somecode[xturno]
eval[color] += somecode[turno]
// 17 more operations of this kind
In the first example, for a piece, only one condition is evaluated, followed by 19 "normal" operations that don't use multidimensional arrays.
In the second one, no condition is executed, but there are 19 operations of the var[color] type, which means that at execution time two extra add operations have to be done per line (one for each array index).
It appears to me that example 2 is more time-consuming.....
It would be the same if only one operation were done inside the if statement, but that is not the case.
I think Prof. Hyatt got better results with the second type because the way and order in which he evaluates positional characteristics differ from the example I am referring to.....
Kempelen wrote:
What I was referring to is not whether one function is faster than two, but the code with var[x] versus var_for_color. My code was not complete in the first post. Let's look at it again:
if (color == white) {
    evalWhite += some_code ..... using the black pieces
    evalWhite += some_code ..... using the white pieces
    // 17 more operations of this kind
} else {
    evalBlack += some_code ..... using the white pieces
    evalBlack += some_code ..... using the black pieces
    // 17 more operations of this kind
}
eval[color] += somecode[xturno]
eval[color] += somecode[turno]
// 17 more operations of this kind
In the first example, for a piece, only one condition is evaluated, followed by 19 "normal" operations that don't use multidimensional arrays.
In the second one, no condition is executed, but there are 19 operations of the var[color] type, which means that at execution time two extra add operations have to be done per line (one for each array index).
It appears to me that example 2 is more time-consuming.....
It would be the same if only one operation were done inside the if statement, but that is not the case.
I think Prof. Hyatt got better results with the second type because the way and order in which he evaluates positional characteristics differ from the example I am referring to.....
I vote for option 2 for better maintainability -- let the compiler do the whole optimization work by instantiating separate sets of routines for white and black, either through inlining with a constant actual parameter or explicitly through a template parameter.
eval[color] += somecode[xturno]
eval[color] += somecode[turno]
// 17 more operations of this kind
In the first example, for a piece, only one condition is evaluated, followed by 19 "normal" operations that don't use multidimensional arrays.
In the second one, no condition is executed, but there are 19 operations of the var[color] type, which means that at execution time two extra add operations have to be done per line (one for each array index).
It appears to me that example 2 is more time-consuming.....
No. Look at the code I posted.
Your function is inlined, so there is no call to a function; every occurrence of 'color' is replaced by a constant, the compiler will optimize the code, and there will be no run-time overhead.
Wow..... I modified my code yesterday. As my eval function is not that big, removing the separate code for white and black and making one generic routine for both colors made the eval function about 40% less efficient. I measured the difference with the same 10 positions at the same fixed depth!!!
Wow..... I modified my code yesterday. As my eval function is not that big, removing the separate code for white and black and making one generic routine for both colors made the eval function about 40% less efficient. I measured the difference with the same 10 positions at the same fixed depth!!!
I did this in Crafty version 22.0 and carefully compared it to version 21.x (whatever was current at the time) to make sure the scores matched exactly. When I was done, version 22.0 was very slightly faster, most likely as a result of a significantly reduced cache footprint, which offset the slightly slower array accesses...
You can probably compare Crafty 22.0 and the last 21.x to see what I mean...
BTW, you are not using "local data", correct? That would be a performance killer. You need to make this kind of data either global or static so that it is not constantly re-initialized each time the procedure is called...