Re: An incomplete discussion !!!!!

Posted: Mon Dec 22, 2014 1:23 am
by carldaman
Sylwy wrote:
1.-I suspect for a free The Baron 3.xx together with a free Diep (the newest class) ! :lol:

SilvianR :wink:
Any reason for such optimism, Herr Ruxy ? :D
It would be awesome! 8-)

Cheers,
CL

Re: Chessprogams with the most chessknowing

Posted: Mon Dec 22, 2014 4:02 am
by jdart
I think that is very perceptive. It especially applies to the current generation of programs, which use heavy amounts of speculative pruning. If something escapes them, they appear blind. If the pruning works as designed, they appear brilliant.

But I am not sure I agree with the last paragraph. A program with more knowledge and some idea about plans is still a selective program (if it is not, it is probably very weak). So it has the same kind of hit-or-miss behavior as the fast searcher, just maybe less of it.

--Jon

Re: An incomplete discussion !!!!!

Posted: Mon Dec 22, 2014 6:54 am
by Sylwy
carldaman wrote:
Sylwy wrote:
1.-I suspect for a free The Baron 3.xx together with a free Diep (the newest class) ! :lol:

SilvianR :wink:
Any reason for such optimism, Herr Ruxy ? :D

Cheers,
CL
Absolutely no one ! :lol:

SilvianR :wink:

Re: Chessprogams with the most chessknowing

Posted: Mon Dec 22, 2014 2:03 pm
by Stan Arts
Indeed,
I quite like how well he describes one engine outsearching the other, and how that can often translate into perceived knowledge, or the lack of it, in the eyes of the observer.

One simply has to be careful, then, about making statements regarding this or that program's knowledge based on gameplay.

For example, a lot of big claims have been made in the past, but they are quickly forgotten once computer chess moves on. (E.g. when Rybka was first released, it was speculated to have a hugely knowledgeable evaluation previously unseen.)

Re: Chessprogams with the most chessknowing

Posted: Sat Feb 18, 2017 11:44 am
by Vinvin
pkumar wrote:
1 6k1/8/6PP/3B1K2/8/2b5/8/8 b - - 0 1
2 8/8/r5kP/6P1/1R3K2/8/8/8 w - - 0 1
3 7k/R7/7P/6K1/8/8/2b5/8 w - - 0 1
4 8/8/5k2/8/8/4qBB1/6K1/8 w - - 0 1
5 8/8/8/3K4/8/4Q3/2p5/1k6 w - - 0 1
6 8/8/4nn2/4k3/8/Q4K2/8/8 w - - 0 1
7 8/k7/p7/Pr6/K1Q5/8/8/8 w - - 0 1
8 k7/p4R2/P7/1K6/8/6b1/8/8 w - - 0 1
Nice draw positions for fooling engines! Are there some more?
Sure :-)

[d]8/8/8/8/2b1k3/3R4/3RK3/8 w - - 0 1

[d]8/3k4/8/8/P2B4/P2K4/P7/8 w - - 0 1
And more here: https://en.wikipedia.org/wiki/Fortress_(chess)
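One thing that makes these fortresses such good engine traps is how lopsided the material often is. Here is a minimal stdlib Python sketch (function name and the usual 1/3/3/5/9 piece values are my own choices, not from the thread) that totals the material in a FEN string:

```python
# Count material per side from a FEN string's piece-placement field.
# Uses the conventional pawn=1, minor=3, rook=5, queen=9 values.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_count(fen: str) -> dict:
    """Return total material for each side, kings counted as 0."""
    placement = fen.split()[0]          # first FEN field: piece placement
    totals = {"white": 0, "black": 0}
    for ch in placement:
        if ch.isalpha():                # letters are pieces, digits are gaps
            side = "white" if ch.isupper() else "black"
            totals[side] += PIECE_VALUES[ch.lower()]
    return totals

# One of the fortress positions above: bishop + three pawns vs. lone king,
# yet the game is a draw.
print(material_count("8/3k4/8/8/P2B4/P2K4/P7/8 w - - 0 1"))  # → {'white': 6, 'black': 0}
```

Running it over the whole list quickly shows which side is "winning" on material in each drawn position.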

Re: Chessprogams with the most chessknowing

Posted: Sat Feb 18, 2017 8:56 pm
by leavenfish
Resurrecting this old thread...just to see if anyone has any new takes on this in 2017.

Static evaluation seems like an idea that would work best not in 'game play', but, say, when looking at some point (say move 13) in one's opening repertoire and trying to evaluate which of, say, 3 branches to make your main line.

Re: Chessprogams with the most chessknowing

Posted: Sun Feb 19, 2017 12:13 am
by Cardoso
Komodo is said to have one of the best evals.
But Vincent Diepeveen also claimed his program Diep had a very good eval; I remember he even challenged the original Komodo programmer to a one-ply match, which didn't happen.
I remember that in one of the old Fritz releases Frans Morsch claimed Fritz had the most knowledgeable evaluation function of all chess engines at the time. When asked what he did to maintain speed, he answered "there are pretty smart data structures". I don't know what that means; probably something like not needing to recompute all eval terms every time the eval is called.
Anyway, an engine is the sum of eval + search, and only that sum can produce a program that can actually play chess at a high level.
I used to consider the eval probably the most important part of an engine.
Turns out I was wrong: time proved (at least to me) that engines that prune like hell and have light evals can be really strong.
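As a hypothetical illustration of the "smart data structures" guess above (keeping eval terms incrementally updated rather than recomputing them from scratch), here is a minimal Python sketch of incremental material bookkeeping; all names and values are my own, not from Fritz or any real engine:

```python
# Incremental material balance: instead of scanning the board every time
# eval() is called, the balance is adjusted when a capture is made or
# unmade. Centipawn values; positive favours White.
PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}

class IncrementalMaterial:
    def __init__(self):
        self.score = 0

    def make_capture(self, captured: str, by_white: bool):
        # Capturing shifts the balance toward the capturing side.
        delta = PIECE_VALUES[captured.upper()]
        self.score += delta if by_white else -delta

    def unmake_capture(self, captured: str, by_white: bool):
        # Exact inverse of make_capture, used when the search backtracks.
        delta = PIECE_VALUES[captured.upper()]
        self.score -= delta if by_white else -delta

m = IncrementalMaterial()
m.make_capture("N", by_white=True)    # White wins a knight: +300
m.make_capture("R", by_white=False)   # Black wins a rook:   -200 total
m.unmake_capture("R", by_white=False) # search backtracks:   +300 again
print(m.score)  # → 300
```

A real engine would do the same for piece-square terms, pawn-structure hashes, and so on; material is just the simplest term to keep incremental.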

best regards,
Alvaro

Re: Chessprogams with the most chessknowing

Posted: Sun Feb 19, 2017 12:45 am
by mjlef
Measuring this is pretty hard. Larry and I have discussed this a lot. It is not very hard to make two programs (with full source code, of course) search alike, so we can play them against each other to try to measure the evaluation quality. But values that work at shallow depths do not always also work in deeper searches. One example is king safety. The strongest programs whose source code I have seen (or written) have very high values for, say, the ability to check the opponent's king. The values often look crazy high. This works in deep searches but seems bad at shallow searches. So the effect is that if a program is tuned for a shallow search, it might look like it has a better eval than one better suited to deep searches.

But anyway, we love trying to measure these things. I can confirm that Komodo's eval is "bigger" (has more terms and does more things) than Stockfish's. I hope it is better, but it is very hard to prove, or even measure.

Re: Chessprogams with the most chessknowing

Posted: Sun Feb 19, 2017 5:14 am
by MikeB
Cardoso wrote:....
Anyway, an engine is the sum of eval + search, and only that sum can produce a program that can actually play chess at a high level.
I used to consider the eval probably the most important part of an engine.
Turns out I was wrong: time proved (at least to me) that engines that prune like hell and have light evals can be really strong.

best regards,
Alvaro
+1 good comment
Search is key, eval is secondary; it's probably the 80/20 rule. Stockfish is where it is because of search.cpp, not because of evaluate.cpp. I have played around with both a lot and I speak from my experience.

Re: Chessprogams with the most chessknowing

Posted: Sun Feb 19, 2017 6:50 am
by Uri Blass
mjlef wrote:
Measuring this is pretty hard. Larry and I have discussed this a lot. It is not very hard to make two programs (with full source code, of course) search alike, so we can play them against each other to try to measure the evaluation quality. But values that work at shallow depths do not always also work in deeper searches. One example is king safety. The strongest programs whose source code I have seen (or written) have very high values for, say, the ability to check the opponent's king. The values often look crazy high. This works in deep searches but seems bad at shallow searches. So the effect is that if a program is tuned for a shallow search, it might look like it has a better eval than one better suited to deep searches.

But anyway, we love trying to measure these things. I can confirm that Komodo's eval is "bigger" (has more terms and does more things) than Stockfish's. I hope it is better, but it is very hard to prove, or even measure.
I think that the way to decide which evaluation is better should be an evaluation contest based on fixed search rules, testing both evaluations with the same number of nodes.

The question is how to define the fixed search rules.

Evaluation should be able to compare positions at different depths (otherwise a bonus for the side to move is going to give nothing), so obviously alpha-beta with no extensions and no pruning is not relevant here.

I suggest alpha-beta with a random reduction:
at every node, you reduce 1 ply with probability 50%.

I suggest no qsearch, because I think a good evaluation should also be good at evaluating positions with many captures, without qsearch.

I also suggest a rule that the engine has to search at least 1,000,000 positions per second on some known hardware from every position (you can decide on a different number, but the idea is not to allow doing too much work in the evaluation, because by definition doing much work is the job of the search).

The target is to prevent the engine from searching many lines in the qsearch and claiming that this heavy qsearch is part of the evaluation function.
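The randomly-reduced alpha-beta proposed above can be sketched in a few lines. This is a toy illustration, not a real engine: the "position" is a hand-made nested dict whose leaves are scores for the side to move, and the stand-in static eval for unexpanded interior nodes simply returns 0 (no qsearch, as proposed).

```python
import random

# Negamax alpha-beta with Uri Blass's random reduction: at every interior
# node, with probability 50%, one extra ply is subtracted from the
# remaining depth. Leaves are plain ints (score for the side to move);
# interior nodes are dicts mapping move names to child nodes.
def search(node, depth, alpha, beta):
    if not isinstance(node, dict):
        return node                    # leaf: exact score, side to move
    if depth <= 0:
        return 0                       # trivial stand-in static eval, no qsearch
    reduction = 1 if random.random() < 0.5 else 0  # the 50% random reduction
    best = -10**9
    for child in node.values():
        score = -search(child, depth - 1 - reduction, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                      # beta cutoff
    return best

# Tiny two-ply game tree; with depth comfortably larger than the tree
# height, the random reductions never truncate the search, so the result
# is the plain negamax value.
tree = {"a": {"ax": 3, "ay": 5}, "b": {"bx": 4, "by": 1}}
print(search(tree, 10, -10**9, 10**9))  # → 3
```

To actually compare two evals under these rules, you would plug each one in as the `depth <= 0` stand-in, fix the node budget, and play the two configurations against each other.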