A long time ago, I expanded a little on a chart provided by Adam (some numbers differ slightly). According to my post, and taking the figure of 58.61% similarity with Rybka 2.3.2a (the highest similarity rate for Murka 3):
It is up to you to interpret those numbers. The fact is that the Elo gain from Murka 2 to Murka 3 is some hundreds of Elo (still untested) over roughly 19 months (the approximate span between the Murka 2 and Murka 3 release dates). That gives (Elo gain)/week ~ [Elo(Murka 3) - Elo(Murka 2)]/83, possibly greater than 5, which is a high number at that stage of development (Murka 2 was not a random move generator...). Anyway, I am not totally sure about its status.
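To make the arithmetic concrete, here it is as a small Python snippet; the 450 Elo gain is a made-up placeholder, since the actual gain is still untested:

```python
# Elo-per-week estimate between two releases. The Elo gain below is a
# hypothetical placeholder; only the 19-month span comes from the post.
months = 19
weeks = months * 52 / 12          # ~82.3, rounded to 83 above

elo_gain = 450                    # placeholder for "some hundreds of Elo"
gain_per_week = elo_gain / 83
print(f"{weeks:.1f} weeks between releases")
print(f"~{gain_per_week:.1f} Elo/week")  # exceeds 5 once the gain tops ~415
```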
Not obviously a clone IMO, although a few search parameters are the same as Stockfish's. The search is not that similar to Stockfish's, nor is it similar to the Ippolit series. The eval looks original to me, but I have not studied it much.
Really, IMO it does not make sense to run a similarity tester on a program for which the source is available. Looking at the source is a much more reliable guide to what it is similar to.
jdart wrote:
Not obviously a clone IMO, although a few search parameters are the same as Stockfish's. The search is not that similar to Stockfish's, nor is it similar to the Ippolit series. The eval looks original to me, but I have not studied it much.
Really, IMO it does not make sense to run a similarity tester on a program for which the source is available. Looking at the source is a much more reliable guide to what it is similar to.
--Jon
Looking at the source code should be more reliable than using the similarity test. Though the similarity test seemingly has not produced any false positives, that does not mean one could not occur. Matching source code to source code does not have that flaw. Anybody who is familiar with the code of the major open source engines could do a more definitive job of determining whether a particular open source engine is a derivative or not.
Having said that, the similarity test does identify which engine to compare the code to. In this case, if someone is truly worried that Murka 3 is a derivative, then they should compare it to Strelka's code.
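As a crude illustration of that source-to-source comparison (no substitute for actually reading the code), here is a sketch using Python's difflib; the file paths are hypothetical:

```python
# Rough line-level similarity between two source files. A real review
# would read the code; this only flags files worth a closer look.
import difflib
from pathlib import Path

def file_similarity(path_a: str, path_b: str) -> float:
    """Return a crude 0..1 similarity ratio between two source files."""
    a = Path(path_a).read_text(errors="ignore").splitlines()
    b = Path(path_b).read_text(errors="ignore").splitlines()
    return difflib.SequenceMatcher(None, a, b).ratio()

# Hypothetical paths: compare against the engine the similarity
# test pointed at (here, Strelka).
print(file_similarity("murka3/eval.c", "strelka/eval.c"))
```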
Nobody has mentioned that Belka is close and from the same author.
It is extremely tedious to cross-check a program against all the available open source programs, not to mention that nowadays people peek at closed ones too, which makes the task nearly impossible. So having the ability to narrow the field, and compare one program against just one or a couple of candidates, speeds up the process tremendously.
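That narrowing step is easy to picture: given each engine's chosen moves on a fixed position set, rank the candidates most worth a manual code review. A minimal sketch, with invented move data:

```python
# Rank candidate engines by how often they pick the same move as the
# target on a shared position set. The move lists below are invented.

def match_rate(a: list[str], b: list[str]) -> float:
    """Fraction of positions where two engines chose the same move."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

moves = {  # engine name -> chosen move per test position (hypothetical)
    "Murka 3":   ["e2e4", "g1f3", "d2d4", "f1c4"],
    "Strelka":   ["e2e4", "g1f3", "d2d4", "b1c3"],
    "Stockfish": ["d2d4", "g1f3", "c2c4", "b1c3"],
}

target = moves["Murka 3"]
for name, mv in sorted(moves.items()):
    if name != "Murka 3":
        print(f"{name}: {match_rate(target, mv):.0%}")
```

Only the top scorer (or the top couple) then gets the tedious source-level comparison.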
Maybe this is a topic for another thread but...
Doesn't this type of similarity test only detect similarities in the evaluation and not the search?
And as engines get better, wouldn't the similarity of moves also increase? For example, if two engines can play the Philidor endgame, they are more likely to play the same moves in the appropriate positions, even though they might not share anything in common.
Or am I missing something?
Steve Maughan wrote: Maybe this is a topic for another thread but...
Doesn't this type of similarity test only detect similarities in the evaluation and not the search?
Mostly in the evaluation, but not exclusively.
And as engines get better, wouldn't the similarity of moves also increase? For example, if two engines can play the Philidor endgame, they are more likely to play the same moves in the appropriate positions, even though they might not share anything in common.
Or am I missing something?
Positions are chosen so as not to have a single best move. No: as engines get stronger, they will not accidentally converge toward higher similarity.
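For anyone curious what the core measurement looks like, here is a minimal sketch using the python-chess library (my choice for illustration; the actual similarity tester is a standalone tool). The engine binaries and position file are placeholders:

```python
# Measure the move-match rate of two UCI engines over a suite of test
# positions chosen to have no single best move. Paths are placeholders.
import chess
import chess.engine

def best_moves(engine_path: str, fens: list[str]) -> list[str]:
    """Collect one engine's chosen move for every test position."""
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        return [
            engine.play(chess.Board(fen), chess.engine.Limit(time=0.1)).move.uci()
            for fen in fens
        ]
    finally:
        engine.quit()

with open("positions.fen") as f:       # suite with no forced moves
    fens = [line.strip() for line in f if line.strip()]

a = best_moves("./murka3", fens)       # placeholder engine binaries
b = best_moves("./strelka", fens)
matches = sum(x == y for x, y in zip(a, b))
print(f"move match rate: {matches / len(fens):.1%}")
```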