Sven Schüle wrote:

Graham Banks wrote:

Adam Hair wrote: ........ so it seems to me the fairest thing would be to test every engine above 2600 Elo (CCRL). Unfortunately, that is not proving to be possible at the moment.
Well your analysis has certainly changed my opinion to a degree.
In light of the fact that many engines might have heavily borrowed ideas from strong open source engines, I'm now more amenable to the idea of testing Houdini as the Ippo family representative, and also to continuing to test Naraku.
It would seem pretty difficult now to tar one or two engines without having lingering doubts about others.
Must have a think about all this.
I must admit that this analysis is indeed very interesting, and I highly appreciate the effort. But I don't see how it changes anything substantial in the "engine origins" discussion (and, as I understand Adam's statements, that was not his main intention either). The key point of this analysis is that certain engines behave similarly to each other in their move selection. There are several possible explanations for this similarity. A common one is that these engines are partially based on the same ideas and algorithms.
Some people may think that selecting the same moves as another engine means that code must have been copied from that engine. But "borrowing ideas" should not be mixed up with "borrowing implementation (code)", and it is important to understand that selecting the same moves as another program *may* imply reuse of algorithms but *does not* imply reuse of code. (And, for those not familiar with formal logic, it is also important to understand the meaning of "to imply".)
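To make the notion of "move selection similarity" concrete: an analysis like Adam's presumably compares, over a fixed set of test positions, how often two engines pick the same best move. Here is a minimal sketch of that idea - the engine names and move lists are entirely hypothetical, and this is my illustration of the concept, not Adam's actual methodology:

```python
from itertools import combinations

def agreement(moves_a, moves_b):
    """Fraction of test positions on which two engines chose the same move."""
    assert len(moves_a) == len(moves_b), "engines must be probed on the same positions"
    same = sum(1 for a, b in zip(moves_a, moves_b) if a == b)
    return same / len(moves_a)

# Hypothetical best-move choices over the same 8 test positions.
choices = {
    "EngineA": ["e4", "Nf3", "d4", "Bb5", "O-O", "Re1", "c3", "d3"],
    "EngineB": ["e4", "Nf3", "d4", "Bb5", "O-O", "Re1", "c3", "h3"],
    "EngineC": ["d4", "c4", "Nc3", "e3", "Bd3", "Nf3", "O-O", "b3"],
}

# Pairwise agreement: A and B match on 7 of 8 positions, C matches neither.
for (name_a, m_a), (name_b, m_b) in combinations(choices.items(), 2):
    print(f"{name_a} vs {name_b}: {agreement(m_a, m_b):.0%} agreement")
```

A high agreement percentage suggests shared ideas and algorithms, nothing more - which is exactly why, as argued above, such numbers cannot prove that code was copied.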
The former, "borrowing ideas", is fully o.k., even to a high extent, although one could argue that two engines sharing the same ideas but implementing them differently is somewhat less "interesting" than two engines with different ideas.
The latter, "borrowing implementation", is o.k. as long as copyright and license are not violated.
Suppose your rule of thumb were that only one representative from each "family" should be included in testing, and that an engine B is defined to belong to the "family" of another (original) engine A if it is based on at least a certain percentage X of ideas and algorithms taken from A. Then you would face these two problems:
- you would have to define the value of X, which is practically impossible;
- you would have to exclude a lot of engines from testing, including all Ippo* engines, Houdini, Fire, Strelka, Rybka, but - according to the data presented here - perhaps also Onno, Loop, Critter, Thinker, Delfi, Naraku, TwistedLogic, SmarThink, Hamsters, Pupsi, Daydreamer, Philou, Cyrano, ... - which obviously goes in the wrong direction.
Another problem: for consistency, you would also have to exclude those open source engines that are legally derived from another engine in full GPL compliance, like Toga. But for what reason? Legally sharing implementation does of course imply sharing ideas and algorithms. This shows that the notion of "borrowing ideas" does not qualify as a guideline for deciding whether or not to test an engine.
In my opinion, a better rule of thumb would be to continue concentrating on the cases where a copyright/license violation is proven - difficult enough as that is, but at least it looks more consistent to me.
This rule does not help much with the Ippolit case itself, though, since there is no clear agreement yet on whether heavily reusing code obtained by reverse engineering a commercial program constitutes a copyright violation, nor on whether such heavy code reuse actually happened in the case of Ippolit. In my opinion both questions get a "yes", but certainly others disagree.
As a second rule, I would still insist on not testing engines by anonymous authors.
Sven