Similarity tests

Discussion of anything and everything relating to chess playing software and machines.

nabildanial
Posts: 104
Joined: Thu Jun 05, 2014 3:29 am
Location: Malaysia

Re: Similarity tests

Post by nabildanial » Wed Oct 01, 2014 1:59 pm

While I second your statement, I think somebody needs to update the dendrogram tool; it looks more accurate and reliable to me. The similarity tool has its own flaws, so depending on it as if it were the only way to detect clones is absurd.

Adam Hair
Posts: 3201
Joined: Wed May 06, 2009 8:31 pm
Location: Fuquay-Varina, North Carolina

Re: Similarity tests

Post by Adam Hair » Wed Oct 01, 2014 2:26 pm

Exactly, Gabor!

Code: Select all

As a Test To Detect Clones and Derivatives
 
This tool, in conjunction with other tests, can be used to detect possible clones and derived engines. To be used most effectively, some preliminary work and some observations are needed. A database of unique engine pairs is needed to conduct comparisons.
 
A result of 60% moves matched for a pair is meaningless without other pairwise results to compare the number to. Ideally, enough engines are used to construct the database so that the distribution of the matched move percentages is approximately normal.
 
Also, it should be noted that the amount of time an engine has to think about each position has some effect on the moves chosen, and ultimately on the matched move percentages. One can choose to use a default time period, with some adverse effect on the accuracy of the test. Or one may choose an engine to calibrate the other engines' times against.
 
Once an engine is chosen and its thinking time (X) is set, then the other times can be determined by this formula: Y = X*(2^((Elo Diff)/120)).
 
This formula works well for newer engines, but most likely less well for older engines. In any case, it does give more accurate matched move percentages than using a default time.
 
An additional observation is that a minimum of 5 standard deviations should be used to judge that a pair's percentage is beyond the norm. If 1000 unique engines (where unique means unrelated engines with unique authors) are considered to be an upper limit, then there could possibly be 999*1000/2 = 499,500 pairs of unique engines.
 
4 standard deviations represents an event that occurs 1 time out of approximately 31,600, or approximately 16 times in 499,500.
 
5 standard deviations represents 1 time in approximately 3,488,556.
 
While not a guarantee of avoiding a false positive, the threshold of 5 standard deviations greatly reduces the chance of it occurring.
 
The drawback of setting the false positive threshold so high is that more false negatives will occur (two similar engines would be deemed non-similar). However, there are several things to consider.
 
The use of statistical methods assumes that the authors have access to a common pool of ideas, but that there are no interactions between authors/engines. In reality, authors/engines do interact.
  
There are permissible methods by which one author can make his engine more similar to another engine. We have no standard for when some author goes too far. Thus, we have no way to determine an exact threshold.
  
The need to avoid false accusation is greater than the need to determine authors who break the rules slightly. In other words, it is better to let lesser offenders slip through than to make accusations against innocent authors.
  
This tool should not be used solely for determining derivatives and clones. Other methods should be used in conjunction with this tool. Ultimately, any accusation of cloning requires an examination of the code of the accused author. 
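The timing formula in the quoted text, Y = X*(2^((Elo Diff)/120)), can be sketched in Python. This is a minimal illustration of the quoted formula only; the base time and Elo difference below are made-up example values:

```python
def calibrated_time(base_time_s: float, elo_diff: float) -> float:
    """Thinking time for the weaker engine, per the quoted formula
    Y = X * 2^(EloDiff / 120): every 120 Elo of deficit doubles the time."""
    return base_time_s * 2 ** (elo_diff / 120.0)

# Hypothetical example: the reference engine thinks 1.0 s per position;
# an engine rated 240 Elo lower gets four times as long.
print(calibrated_time(1.0, 240))  # 4.0
```

So an engine 240 Elo weaker than the calibration engine is given 2^2 = 4 times the thinking time, which is what equalizes the match-rate comparison across strength differences.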
 
http://www.top-5000.nl/clone.htm
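The standard-deviation odds in the quoted text can be checked with the one-sided normal tail probability P(Z > k) = erfc(k/√2)/2. A quick sketch, assuming (as the quoted text implies) an approximately normal distribution of matched-move percentages:

```python
import math

def one_in(k: float) -> float:
    """Odds '1 in N' for a one-sided normal tail beyond k standard deviations."""
    tail = math.erfc(k / math.sqrt(2)) / 2
    return 1 / tail

print(round(one_in(4)))  # about 31,600, i.e. roughly 16 expected hits in 499,500 pairs
print(round(one_in(5)))  # about 3.49 million, i.e. well under 1 expected hit
```

With 499,500 unique pairs, a 4-sigma threshold would still be expected to flag about 16 innocent pairs by chance, while a 5-sigma threshold brings the expected number of chance hits well below one, which is the argument for the stricter cutoff.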

User avatar
Laskos
Posts: 9408
Joined: Wed Jul 26, 2006 8:21 pm
Full name: Kai Laskos

Re: Similarity tests

Post by Laskos » Wed Oct 01, 2014 3:27 pm

SzG wrote:Reading recent posts I have got the impression that the similarity tool is regarded as a reliable tool for deciding if an engine is original or not. As far as I can remember, at its birth it was stated expressly that on its own it is not suitable for that purpose.
This is just a reminder to the community not to commit the error of judging everything by this tool alone.
It could give false negatives, as Ed showed, but I have not seen any false positive. So:

1/ If it passes Sim test, that may mean nothing.
2/ If it doesn't pass the Sim test, that means it's a clone or a derivative.

IWB
Posts: 1539
Joined: Thu Mar 09, 2006 1:02 pm

Re: Similarity tests

Post by IWB » Wed Oct 01, 2014 3:37 pm

Laskos wrote:
SzG wrote:Reading recent posts I have got the impression that the similarity tool is regarded as a reliable tool for deciding if an engine is original or not. As far as I can remember, at its birth it was stated expressly that on its own it is not suitable for that purpose.
This is just a reminder to the community not to commit the error of judging everything by this tool alone.
It could give false negatives, as Ed showed, but I have not seen any false positive. So:

1/ If it passes Sim test, that may mean nothing.
2/ If it doesn't pass the Sim test, that means it's a clone or a derivative.
I believe that you haven't seen a false positive, but that doesn't mean there are none. (Because you haven't seen a black swan doesn't mean there are none.)

I think if it passes and you suspect something, you have to take a closer look. If it doesn't pass, you have to take a closer look, even if you don't suspect anything ...

Bye
Ingo

Uri Blass
Posts: 8553
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: Similarity tests

Post by Uri Blass » Wed Oct 01, 2014 3:37 pm

Laskos wrote:
SzG wrote:Reading recent posts I have got the impression that the similarity tool is regarded as a reliable tool for deciding if an engine is original or not. As far as I can remember, at its birth it was stated expressly that on its own it is not suitable for that purpose.
This is just a reminder to the community not to commit the error of judging everything by this tool alone.
It could give false negatives, as Ed showed, but I have not seen any false positive. So:

1/ If it passes Sim test, that may mean nothing.
2/ If it doesn't pass the Sim test, that means it's a clone or a derivative.
Of course you do not see any false positive because if you see positive you assume that it is a true positive.

There is no way to refute the claim that there are no false positives if you assume that every positive is a clone or a derivative.

How do you prove that B is not a derivative of A?

User avatar
Laskos
Posts: 9408
Joined: Wed Jul 26, 2006 8:21 pm
Full name: Kai Laskos

Re: Similarity tests

Post by Laskos » Wed Oct 01, 2014 3:53 pm

Uri Blass wrote:
Laskos wrote:
SzG wrote:Reading recent posts I have got the impression that the similarity tool is regarded as a reliable tool for deciding if an engine is original or not. As far as I can remember, at its birth it was stated expressly that on its own it is not suitable for that purpose.
This is just a reminder to the community not to commit the error of judging everything by this tool alone.
It could give false negatives, as Ed showed, but I have not seen any false positive. So:

1/ If it passes Sim test, that may mean nothing.
2/ If it doesn't pass the Sim test, that means it's a clone or a derivative.
Of course you do not see any false positive because if you see positive you assume that it is a true positive.

There is no way to refute the claim that there are no false positives if you assume that every positive is a clone or a derivative.

How do you prove that B is not a derivative of A?
By other circumstantial evidence. All the positives were positives of open source Fruit after Fruit, positives of open source Strelka after Strelka, positives of open source Ippo after Ippo, positives of open source SF after SF.

There is no benefit of the doubt in these cases. Show me a single closed source engine which, being a different engine prior to the open source one, is a positive with that later open source engine.

User avatar
Laskos
Posts: 9408
Joined: Wed Jul 26, 2006 8:21 pm
Full name: Kai Laskos

Re: Similarity tests

Post by Laskos » Wed Oct 01, 2014 4:42 pm

IWB wrote:
Laskos wrote:
SzG wrote:Reading recent posts I have got the impression that the similarity tool is regarded as a reliable tool for deciding if an engine is original or not. As far as I can remember, at its birth it was stated expressly that on its own it is not suitable for that purpose.
This is just a reminder to the community not to commit the error of judging everything by this tool alone.
It could give false negatives, as Ed showed, but I have not seen any false positive. So:

1/ If it passes Sim test, that may mean nothing.
2/ If it doesn't pass the Sim test, that means it's a clone or a derivative.
I believe that you haven't seen a false positive, but that doesn't mean there are none. (Because you haven't seen a black swan doesn't mean there are none.)

I think if it passes and you suspect something, you have to take a closer look. If it doesn't pass, you have to take a closer look, even if you don't suspect anything ...

Bye
Ingo
With swans it's a bit different. All black swans I saw were dyed former white swans. No naturally black swans were observed. If I see a black swan again, the reasonable assumption is that it's a former white swan.

IWB
Posts: 1539
Joined: Thu Mar 09, 2006 1:02 pm

Re: Similarity tests

Post by IWB » Wed Oct 01, 2014 5:02 pm

Laskos wrote: With swans it's a bit different. All black swans I saw were dyed former white swans. No naturally black swans were observed. If I see a black swan again, the reasonable assumption is that it's a former white swan.
http://en.wikipedia.org/wiki/Black_swan

That is the problem with assumptions!


Bye
Ingo

Jouni
Posts: 1976
Joined: Wed Mar 08, 2006 7:15 pm

Re: Similarity tests

Post by Jouni » Wed Oct 01, 2014 5:05 pm

I have a feeling that all programs which score significantly better than Stockfish in tactical tests are Ippolit based! I don't remember any exception so far. Also, Equinox is probably based heavily on Ippo.
Jouni

Frank Quisinsky
Posts: 4852
Joined: Wed Nov 18, 2009 6:16 pm
Location: Trier, Germany
Contact:

Re: Similarity tests

Post by Frank Quisinsky » Wed Oct 01, 2014 5:10 pm

Hi Gabor,

First of all ... whenever I read your name I think of the good old Winboard times. You are a Winboard icon!

Yes, I am thinking the same ...
The Critter programmer gave a logical statement on TalkChess:

http://talkchess.com/forum/viewtopic.ph ... 71&t=39577

None of this is easy, but it is always interesting when the programmer of such a "critical engine" doesn't give us exact information. If so ... in 95% of cases it is a derivative or clone engine (in my experience). Most of it can be seen in the styles of the engines.

The tool is good (nice to have), but everything should be seen in combination with other facts.

Best
Frank
I like computer chess!
