
Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 7:42 pm
by Chan Rasjid
Conclusion: Everything they do fits together very well. That's why Stockfish is that strong.
This answer is not helpful.

Michael Sherwin asked only a simple question - "What is the secret?" - and those who know are not willing to tell it in a simple manner.

Rasjid

Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 8:10 pm
by mcostalba
Chan Rasjid wrote:
Conclusion: Everything they do fits together very well. That's why Stockfish is that strong.
This answer is not helpful.

Michael Sherwin asked only a simple question - "What is the secret?" - and those who know are not willing to tell it in a simple manner.

Rasjid
Nobody knows.


I thought this was already clear, but indeed it seems it was not clear enough... at least not for everybody.

Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 8:38 pm
by Gerd Isenberg
mcostalba wrote:
"What is the secret ?"
Nobody knows.
and from the same thread:
mcostalba wrote:Stockfish is open because it is a book on chess engine development, but instead of being presented as a collection of papers or as "how to" documentation, it is presented in the form of actual source code, which, in my personal opinion, is the best way to present / teach software related stuff.
Don't your two statements sound a bit contradictory?
;-)

Cheers,
Gerd

Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 8:43 pm
by lkaufman
Mangar wrote:Hi,

I'd like to go back to the original question. The first thing I didn't understand was the strength of Rybka Beta. Today I believe that an evaluation function that is heavily optimized for a non-LMR engine is not optimal for an LMR engine. Thus developing a new eval with LMR already in the search could be the "trick" of Rybka 1.0 Beta. Maybe Stockfish has the same "trick".

As for the search improvements in Stockfish, I think there is no single idea that improved the strength. It is the combination of pruning methods. Example:

LMR in the last 3-4 plies brings some Elo. But if you do it, your search gets a little too unstable (often re-searching LMR moves at plies 5-7 with results < alpha). As far as I have seen, Stockfish does huge cutoffs in the last plies (futility, VBP, razoring, static null move) without getting the instability of LMR at the last plies.

Greetings Volker
I agree with your LMR eval theory. It relates to another topic discussed in a different thread: why Stockfish, Crafty and other recent programs use such huge scores for dynamic features. Maybe it is precisely because of LMR that these things work.
Could you elaborate on "VBP"? I'm not familiar with that term.
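
To make Volker's point about the heavy cutoffs in the last few plies more concrete, here is a minimal sketch of static null move and futility pruning inside a bare negamax. Every name, margin and stub below (FUTILITY_MARGIN, staticEval, quiescenceSearch) is invented for illustration and is not taken from Stockfish or any other engine.

#include <algorithm>

// Illustrative margins, indexed by remaining depth (1-3 plies from the leaves).
constexpr int FUTILITY_MARGIN[4] = { 0, 120, 250, 400 };

// Stubs standing in for a real engine's evaluation and quiescence search.
int staticEval()                          { return 0; }
int quiescenceSearch(int alpha, int beta) { return std::clamp(staticEval(), alpha, beta); }

int search(int alpha, int beta, int depth) {
    if (depth <= 0)
        return quiescenceSearch(alpha, beta);

    const int eval = staticEval();

    // Static null move (reverse futility): the position is so far above beta
    // that even a pessimistic margin cannot bring it back - cut the whole node.
    if (depth <= 3 && eval - FUTILITY_MARGIN[depth] >= beta)
        return eval;

    // Futility: at shallow depth, if an optimistic margin still cannot reach
    // alpha, quiet moves will simply be skipped inside the move loop.
    const bool futile = (depth <= 3 && eval + FUTILITY_MARGIN[depth] <= alpha);

    int best = -32000;
    // Move loop (omitted): skip a move when futile and the move is quiet,
    // otherwise recurse with -search(-beta, -alpha, depth - 1).
    (void)futile;
    return std::max(best, alpha);
}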

Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 9:47 pm
by Chan Rasjid
Stockfish is open because it is a book on chess engine development, but instead of being presented as a collection of papers or as "how to" documentation, it is presented in the form of actual source code, which, in my personal opinion, is the best way to present / teach software related stuff.
The source code book of Stockfish is the worst way to teach the "secrets of top chess engines". After reading through all the volumes of the Encyclopedia Britannica, what we get in the end is the end.

The Ippolit authors know how to write a top chess engine, but they code it in an almost inscrutable style of C. There is nothing evil about it. They are just playing a game - "if you want to know the secrets, you won't have it easy".

Why is Komodo a top engine? Don writes a new chess program and it easily makes it to the top because he knows the "secrets". It is the same with Naum and some others.

Bob Hyatt said that there is a gap between Crafty and Stockfish but "not an insurmountable gap". I say the gap is insurmountable. The gap exists because Crafty is NOT Stockfish, and as long as Crafty is "not" Stockfish it may never close the gap. Why?

I'll ask Bob Hyatt: "Is the evaluation of Crafty significantly different from that of Stockfish?" If he says it is a clone of Stockfish's evaluator, then I give up - I don't know where the gap comes from. But if he says "yes, the differences are many and substantial", I will then say "There you are, you have answered yourself where the gap is". A top engine's evaluation is not simple, and all its many different aspects must fit together as in "...Everything should fit together perfectly". If Crafty wants an Elo 3000 evaluation that is at the same time substantially different from Stockfish's, then it must be "substantially different in a smarter manner".
It is very difficult to beat the evaluation of Stockfish, as these top engines have pushed chess programming to its limit, starting from Rybka. BB mentioned in a post somewhere that (probably) what Vasik contributed to computer chess is scientific testing - but he must know fairly clearly what to test.

If Michael Sherwin were to ask "Are you sure you got it right - that evaluation is this important?", my answer would be another question: "If you reverse the sign of the evaluation of Stockfish and play it against TSCP, which is the stronger program?"

Rasjid.

Re: Questions for the Stockfish team

Posted: Mon Jul 19, 2010 11:38 pm
by bhlangonijr
Chan Rasjid wrote:
Stockfish is open because it is a book on chess engine development, but instead of being presented as a collection of papers or as "how to" documentation, it is presented in the form of actual source code, which, in my personal opinion, is the best way to present / teach software related stuff.
The source code book of Stockfish is the worst way to teach the "secrets of top chess engines". After reading through all the volumes of the Encyclopedia Britannica, what we get in the end is the end.

Rasjid.
Stockfish is nicely written, with clean and commented C++ code. And this great source code, freely available to everyone on this planet, contains all the "secrets" one might want to know about Stockfish.

A few years ago I could hardly have believed that the second best engine in the world would be an open-source one, and now people are accusing Stockfish team members of not being helpful??

By the way, I strongly believe there is no free lunch.. :)

Re: Questions for the Stockfish team

Posted: Tue Jul 20, 2010 7:36 am
by mcostalba
Gerd Isenberg wrote:
mcostalba wrote:
"What is the secret ?"
Nobody knows.
and from the same thread:
mcostalba wrote:Stockfish is open because it is a book on chess engine development, but instead of being presented as a collection of papers or as "how to" documentation, it is presented in the form of actual source code, which, in my personal opinion, is the best way to present / teach software related stuff.
Don't your two statements sound a bit contradictory?
;-)

Cheers,
Gerd
No :-)

The question was: what is the secret that makes SF so much stronger than Crafty? This is impossible to answer because you would have to do a comparative study feature by feature, removing them one after another, testing, checking the difference and proceeding to the next one. Nobody has done something like this and I think nobody is interested in spending weeks on this useless exercise.

Bob tried to remove LMR from SF, measuring about 60 ELO, but this is a different thing because LMR is also present in Crafty (although in a much more traditional form). It is also possible that comparing feature by feature is a broken methodology, that the sum of two features (say LMR and null search) is bigger than the single contributions, and this makes things even more difficult.

Currently there isn't _even_ agreement on how much SF is stronger than Crafty, because Bob's personal list and the public ones differ by about 100 ELO !!!! in this regard. And both parties strongly believe that theirs is the best one.

So I believe the original question is impossible to answer in a serious way. Of course anybody is free to guess, that is cheap and takes just a few minutes, but that is not what the original question was looking for, I think.
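
To illustrate why the feature-by-feature comparison Marco describes is such a big job, here is a hypothetical sketch of what an ablation harness would look like. The feature flags, the names and the measureEloDrop stub are all assumptions for illustration and do not come from Stockfish's or Crafty's code; a real run would need tens of thousands of games per configuration to get meaningful error bars.

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// One flag per search feature, so each can be switched off in isolation.
struct SearchFeatures {
    bool lmr      = true;   // late move reductions
    bool nullMove = true;   // null-move pruning
    bool futility = true;   // futility pruning near the leaves
    bool razoring = true;   // razoring at shallow depths
};

// Stub: a real harness would play thousands of games between the modified
// and the unmodified engine and convert the score into an Elo difference.
double measureEloDrop(const SearchFeatures&) { return 0.0; }

int main() {
    SearchFeatures noLmr;      noLmr.lmr           = false;
    SearchFeatures noNull;     noNull.nullMove     = false;
    SearchFeatures noFutility; noFutility.futility = false;
    SearchFeatures noRazoring; noRazoring.razoring = false;

    const std::vector<std::pair<std::string, SearchFeatures>> runs = {
        { "without LMR",       noLmr      },
        { "without null move", noNull     },
        { "without futility",  noFutility },
        { "without razoring",  noRazoring },
    };

    // Note: the measured contributions need not add up, because features
    // interact (e.g. LMR and null move), which is exactly Marco's point.
    for (const auto& run : runs)
        std::cout << run.first << ": " << measureEloDrop(run.second) << " Elo\n";
}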

Re: Questions for the Stockfish team

Posted: Tue Jul 20, 2010 10:33 am
by Mangar
Hi Larry,

maybe I used an uncommon abbreviation. I meant Value Based Pruning; it's a mix of late move reduction and futility/razoring pruning, combined with a "history"-like table that stores the maximal values gained per piece and position.

if (eval + historic_gain(piece, position) + some_value(depth, moveno) < alpha)
{
    skip_move();  // even an optimistic gain estimate cannot bring the score up to alpha
}

Greetings Volker
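
Since "Value Based Pruning" seems to be Volker's own term, the following sketch spells out how the condition above could be implemented. The table layout, the margin formula and every identifier (historicGain, pruneMargin, canSkipMove) are assumptions made for illustration, not code from his engine or any other.

#include <algorithm>

constexpr int PIECE_NB  = 12;   // 6 piece types x 2 colours
constexpr int SQUARE_NB = 64;

// Largest eval gain observed so far for moving this piece to this square.
int historicGain[PIECE_NB][SQUARE_NB] = {};

// Stands in for Volker's some_value(depth, moveno); here it only grows with
// remaining depth, a real version might also use the move number.
int pruneMargin(int depth, int /*moveNumber*/) {
    return 50 * depth;               // purely illustrative value
}

// A move can be skipped when even its best historically observed gain plus
// the margin cannot lift the static eval up to alpha.
bool canSkipMove(int staticEval, int alpha,
                 int piece, int toSquare, int depth, int moveNumber) {
    return staticEval + historicGain[piece][toSquare]
                      + pruneMargin(depth, moveNumber) < alpha;
}

// After a move has actually been searched, remember the biggest swing it
// produced, so that future pruning stays optimistic rather than unsound.
void updateHistoricGain(int piece, int toSquare, int evalBefore, int evalAfter) {
    historicGain[piece][toSquare] =
        std::max(historicGain[piece][toSquare], evalAfter - evalBefore);
}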

Re: Questions for the Stockfish team

Posted: Tue Jul 20, 2010 10:47 am
by Mangar
Hi,

sorry the answer is not helpful for you. IMHO there is no "secret". Take all known eval and search features, implement a chess engine without bugs, and tune the features so that they fit together well. Then you get an engine with the strength of Stockfish.
It's hard work; it needs millions of test games to tune, and maybe a little luck that you don't get stuck in a local maximum too early.

When Fruit came out, the trick behind its early strength was late move reduction. This was easy to find out and implement in other chess engines. (Even if it was very hard for me to accept that it could work.)
In my tests I wasn't able to find a single trick that improves the strength of our chess engine.

Thus you are trying to find something that isn't there - kind of frustrating, I know it well. I tried hard to find a trick.

Greetings Volker
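
For readers who have not implemented it, the idea behind the late move reduction "trick" Volker credits Fruit with is short: moves ordered late in the list are first searched to a reduced depth, and only re-searched at full depth when the reduced search unexpectedly returns a score above alpha. The sketch below is a generic illustration under assumed names and thresholds, not Fruit's or Stockfish's actual code; the re-search step is also where the instability Volker mentioned earlier comes from.

#include <vector>

struct Move { bool isCapture = false; bool givesCheck = false; };

// Stubs standing in for the rest of the engine.
int  search(int /*alpha*/, int /*beta*/, int /*depth*/) { return 0; }
void makeMove(const Move&)   {}
void unmakeMove(const Move&) {}

int searchMoves(const std::vector<Move>& moves, int alpha, int beta, int depth) {
    int best      = -32000;
    int moveCount = 0;

    for (const Move& m : moves) {
        ++moveCount;
        makeMove(m);

        int score;
        // Reduce quiet moves that are ordered late in the move list.
        if (depth >= 3 && moveCount > 3 && !m.isCapture && !m.givesCheck) {
            score = -search(-alpha - 1, -alpha, depth - 2);   // reduced by one extra ply
            // The reduced search failed high: re-search at full depth. These
            // re-searches are the source of the instability mentioned above.
            if (score > alpha)
                score = -search(-beta, -alpha, depth - 1);
        } else {
            score = -search(-beta, -alpha, depth - 1);
        }

        unmakeMove(m);

        if (score > best) best  = score;
        if (best > alpha) alpha = best;
        if (alpha >= beta) break;                             // beta cutoff
    }
    return best;
}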

Re: Questions for the Stockfish team

Posted: Tue Jul 20, 2010 11:55 am
by Uri Blass
Chan Rasjid wrote:
Stockfish is open because it is a book on chess engine development, but instead of being presented as a collection of papers or as "how to" documentation, it is presented in the form of actual source code, which, in my personal opinion, is the best way to present / teach software related stuff.
The source code book of Stockfish is the worst way to teach the "secrets of top chess engines". After reading through all the volumes of the Encyclopedia Britannica, what we get in the end is the end.

The Ippolit authors know how to write a top chess engine, but they code it in an almost inscrutable style of C. There is nothing evil about it. They are just playing a game - "if you want to know the secrets, you won't have it easy".

Why is Komodo a top engine? Don writes a new chess program and it easily makes it to the top because he knows the "secrets". It is the same with Naum and some others.

Bob Hyatt said that there is a gap between Crafty and Stockfish but "not an insurmountable gap". I say the gap is insurmountable. The gap exists because Crafty is NOT Stockfish, and as long as Crafty is "not" Stockfish it may never close the gap. Why?

I'll ask Bob Hyatt: "Is the evaluation of Crafty significantly different from that of Stockfish?" If he says it is a clone of Stockfish's evaluator, then I give up - I don't know where the gap comes from. But if he says "yes, the differences are many and substantial", I will then say "There you are, you have answered yourself where the gap is". A top engine's evaluation is not simple, and all its many different aspects must fit together as in "...Everything should fit together perfectly". If Crafty wants an Elo 3000 evaluation that is at the same time substantially different from Stockfish's, then it must be "substantially different in a smarter manner".
It is very difficult to beat the evaluation of Stockfish, as these top engines have pushed chess programming to its limit, starting from Rybka. BB mentioned in a post somewhere that (probably) what Vasik contributed to computer chess is scientific testing - but he must know fairly clearly what to test.

If Michael Sherwin were to ask "Are you sure you got it right - that evaluation is this important?", my answer would be another question: "If you reverse the sign of the evaluation of Stockfish and play it against TSCP, which is the stronger program?"

Rasjid.
It is clear that an intentionally bad evaluation can cause significant damage,
but the question is not what the value of the evaluation is relative to an intentionally bad evaluation, but what its value is relative to a simple evaluation.

I did not try it with Stockfish, but I found in the past that Strelka with only a piece-square-table evaluation can easily beat Joker, and Joker is significantly stronger than TSCP, so I believe that search is important and it is not obvious that the relative advantage of Stockfish is in the evaluation.

I think that it is possible to test it: you need to change the evaluation function of Crafty to give the same results as Stockfish's and test the new program against Crafty.

Uri
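
Uri's reference to a piece-square-table-only evaluation can be made concrete with the minimal sketch below. The board representation (piece type per square, -1 for empty), the material values and the centralisation bonus are all invented for illustration; Strelka's real tables are of course different.

#include <cstdlib>

enum Piece { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, PIECE_TYPES };

const int materialValue[PIECE_TYPES] = { 100, 320, 330, 500, 900, 0 };

// One 64-entry table per piece type; here just a small centralisation bonus.
int pst[PIECE_TYPES][64];

void initPst() {
    for (int p = 0; p < PIECE_TYPES; ++p)
        for (int sq = 0; sq < 64; ++sq) {
            const int file = sq & 7, rank = sq >> 3;
            // 0 in the corners, up to 3 in the centre.
            const int centrality = 3 - (std::abs(2 * file - 7) + std::abs(2 * rank - 7)) / 4;
            pst[p][sq] = (p == KING ? 0 : 5 * centrality);
        }
}

// Score from white's point of view. Each array holds the piece type on that
// square or -1 if it is empty; black piece squares are mirrored with sq ^ 56.
int evaluate(const int whitePieces[64], const int blackPieces[64]) {
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        if (whitePieces[sq] >= 0)
            score += materialValue[whitePieces[sq]] + pst[whitePieces[sq]][sq];
        if (blackPieces[sq] >= 0)
            score -= materialValue[blackPieces[sq]] + pst[blackPieces[sq]][sq ^ 56];
    }
    return score;
}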