Don wrote:
Highendman wrote:
I'm looking forward to taking a new exciting legit engine for a spin.
In your tests, what's the ELO increase vs. Doch?
Another question: do you explicitly try 'give it a playing style' or is it more about tuning eval/search, and whatever gets the highest elo you're happy with?
We try to make the engine play as much in a human style as possible, but currently strength trumps style. When it's a close call we try to make it evaluate more like a human grandmaster would.
If you want the engine to play in as human a style as possible, it may be a good idea to change the evaluation to evaluate branches and not only leaves (I think this is a good idea in many games, not only in chess).
The idea is that if I calculate some line and see that the position becomes better for me, then that should increase the evaluation; not everything should be static knowledge.
I will give an example.
When you calculate a line you can evaluate every node in the branch.
If in some line you calculate something like
1.a4 evaluation 0.20 for black
1...Rb8 evaluation 0.20 for black
2.a5 evaluation 0.10 for black
2...Ra8 evaluation 0.09 for black
3.b4 evaluation 0.00
Then the final evaluation should be better than 0.00 because white improved the position.
Practically, you have an array of static evaluations from White's point of view:
eval[0] = -0.20 (evaluation of the root)
eval[1] = -0.20 (evaluation of the position 1 ply after the root)
eval[2] = -0.10 (evaluation of the position 2 plies after the root)
eval[3] = -0.09 (evaluation of the position 3 plies after the root)
eval[4] =  0.00 (evaluation of the position 4 plies after the root)
I think you should have a function that translates this array into a number that is bigger than 0.00 for White.
Humans do not think in terms of exact numbers, but they can judge that some idea is good because their calculation shows they can improve the position.
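To make the idea concrete, here is a minimal sketch of one possible such function. The name trend_adjusted_eval, the blending rule, and the trend_weight parameter are my own assumptions for illustration, not anything any engine actually does: it takes the array of static evaluations along the line and adds a small bonus to the leaf score proportional to how much the position improved between root and leaf.

```python
def trend_adjusted_eval(evals, trend_weight=0.25):
    """Combine static evaluations along a searched line into one score.

    evals[0] is the static eval at the root, evals[-1] at the leaf,
    all from White's point of view. Instead of returning only the leaf
    value, add a bonus proportional to the improvement along the line,
    so a line that trends upward scores above its bare leaf value.
    trend_weight is a hypothetical tuning constant.
    """
    leaf = evals[-1]
    trend = evals[-1] - evals[0]        # total improvement over the line
    return leaf + trend_weight * trend  # assumed blending rule

# The example line above (1.a4 ... 3.b4), White's point of view:
line = [-0.20, -0.20, -0.10, -0.09, 0.00]
print(round(trend_adjusted_eval(line), 3))  # 0.05, i.e. better than 0.00
```

With these numbers the leaf is 0.00 but the line gained 0.20 for White, so the combined score comes out positive, which matches the intuition that White improved the position. A real implementation would have to decide how (or whether) this interacts with alpha-beta bounds, which this sketch ignores.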
Uri