cdani wrote:
Leto wrote:
I'm not, I'm enjoying every moment. I don't see these Elo improvements as dumb; search improvements are very important. Finding the correct move a minute sooner can be the difference between winning and drawing, and sometimes losing.
Search improvements are of course not the only thing they've done, Stockfish for example recently improved its king safety, and for Stockfish 7 they're planning on improving its syzygy implementation which should increase its endgame accuracy.
Sure. But this is like extending an existing technology, not creating a new one. Tim is advocating for the latter.
Trying to improve an engine through evaluation alone is hard work.
Note that every move produced by an engine is based on an estimate of the probability of winning the game, implemented via the evaluation function. The evaluation can only measure the static elements of a given position, and it is (and always will be) imperfect.
We have different techniques to tune our evaluation functions, but there will always be holes and missing knowledge, and so the horizon effect is always present.
This is why working on search can lead to faster Elo improvements: the hidden and missing factors are compensated for by searching one ply deeper.
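A toy illustration of the point: a fixed-depth minimax over a hand-built tree, where the static evaluation at the horizon looks good but one extra ply reveals the refutation. The tree, move names, and scores are all invented for this sketch; a real engine would of course search an actual position.

```python
# Toy minimax on a hand-built tree showing the horizon effect.
# Each node is (static_eval_for_white, children); all values are invented.
def minimax(node, depth, maximizing):
    score, children = node
    if depth == 0 or not children:
        return score  # horizon reached: trust the static evaluation
    values = [minimax(c, depth - 1, not maximizing) for c in children.values()]
    return max(values) if maximizing else min(values)

# White to move. Grabbing the pawn looks good statically (+1),
# but one ply deeper Black traps the rook (-4).
root = (0, {
    "grab_pawn": (1, {"trap_rook": (-4, {})}),
    "quiet_move": (0, {"reply": (0, {})}),
})

def best_move(root, depth):
    _, children = root
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

print(best_move(root, 1))  # grab_pawn  (fooled at the horizon)
print(best_move(root, 2))  # quiet_move (one more ply sees the refutation)
```

No piece of static knowledge in this toy eval would fix the mistake as reliably as the extra ply does, which is the essence of the argument above.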
So the question is: which knowledge should we implement?
The state-of-the-art evaluations of today's top engines cover the general aspects of the game, and with a lot of testing they have reached a good balance between risk and reward, leading to super-GM play.
When we add a new piece of knowledge to the evaluation function, we must make sure the extra computing time pays off. So again: which knowledge should we implement?
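The "pays off" trade-off can be sketched with a back-of-the-envelope model: nodes searched scale with time budget over evaluation cost, and effective depth grows roughly with the logarithm of the node count. All numbers here are illustrative assumptions, not measurements from any engine.

```python
import math

# Rough model: nodes searched = time budget / cost per node, and
# effective depth ~ log(nodes) / log(effective branching factor).
# The branching factor of 35 and all costs are illustrative only.
def effective_depth(time_budget_us, eval_cost_us, branching=35.0):
    nodes = time_budget_us / eval_cost_us
    return math.log(nodes) / math.log(branching)

base = effective_depth(1_000_000, 1.0)  # 1 s budget, 1 us per eval
slow = effective_depth(1_000_000, 2.0)  # eval made twice as expensive
# Doubling eval cost loses log(2)/log(35) ~ 0.19 ply of depth,
# so the added knowledge must be worth at least that much.
print(round(base - slow, 2))  # 0.19
```

Note the depth loss depends only on the cost ratio, not the time budget, which is why an expensive new evaluation term must earn its keep at every time control.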
The only way is trial and error. Testing and more testing... unless we can build a technology that discovers new patterns and new knowledge automatically from existing games. Such a technology could act as a grandmaster for us, saying things like "hey, you must improve the R vs PPP endgame".
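A minimal sketch of what such a "grandmaster advisor" could look like: aggregate game results by endgame material signature and flag the categories where the engine scores poorly. The input data, signature labels, and thresholds are all invented for illustration; a real tool would extract signatures from actual game records.

```python
from collections import defaultdict

# Hypothetical input: (material_signature, result) pairs, where result is
# the score from our engine's point of view (1 win, 0.5 draw, 0 loss).
games = [
    ("KRvKPPP", 0.0), ("KRvKPPP", 0.5), ("KRvKPPP", 0.0),
    ("KQvKR", 1.0), ("KQvKR", 1.0), ("KQvKR", 0.5),
    ("KRPvKR", 0.5), ("KRPvKR", 1.0),
]

def weak_endgames(games, threshold=0.4, min_games=3):
    """Return {signature: average score} for categories scored below threshold."""
    buckets = defaultdict(list)
    for sig, result in games:
        buckets[sig].append(result)
    flagged = {}
    for sig, results in buckets.items():
        if len(results) >= min_games:
            score = sum(results) / len(results)
            if score < threshold:
                flagged[sig] = score
    return flagged

print(weak_endgames(games))  # flags KRvKPPP (scored 0.5/3)
```

With this toy data the tool would say exactly what the post imagines: "you must improve the R vs PPP endgame". The hard part, of course, is turning such a flag into actual evaluation knowledge.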
We are still struggling with that.
Regards.