Laskos wrote: ↑Fri Nov 23, 2018 8:24 pm
To check the truthfulness of 2300+ Elo difference, even 3 games can be more than enough for high confidence of its falsification.
Except that there's no 2000 elo difference, the LC0 self-play elo graph is just accumulated error at best.
I wonder why people play 4, 10, 20 games between engines of similar strength and draw conclusions based on that.
Engine devs play (tens of) thousands per patch. There's no shortcut unless you have an oracle.
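For a sense of scale, the number of games needed to resolve a small Elo difference can be sketched with the usual normal approximation. This is a rough illustration, not anyone's exact testing methodology; the draw rate is an assumed parameter:

```python
import math

def elo95(n_games, draw_rate=0.5):
    """Approximate 95% Elo error bar for a match between near-equal engines.

    Per-game score variance with draws: a draw scores 0.5 with prob d,
    win/loss score 1 or 0 with prob (1-d)/2 each -> variance (1-d)/4.
    Near a 50% score, the Elo curve has slope 1600/ln(10) Elo per unit score.
    """
    var = (1.0 - draw_rate) / 4.0            # per-game score variance
    se_score = math.sqrt(var / n_games)      # standard error of the mean score
    elo_per_score = 1600.0 / math.log(10)    # ~695 Elo per unit of score at 50%
    return 1.96 * elo_per_score * se_score   # 95% confidence half-width

# Error bar shrinks only with the square root of the game count:
bar_4 = elo95(4)          # a 4-game mini-match: roughly +/- 240 Elo
bar_10k = elo95(10_000)   # 10k games: a few Elo
```

With 4 games the error bar is on the order of hundreds of Elo, which is why tiny matches say almost nothing about engines of similar strength.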
No, I misread that post, I thought that he was comparing two nets of the same 30xxx run, and 3 games can be enough to show that their self-Elo is a bogus number, almost arbitrary.
Normally you would be right.
But there is a problem: if, out of 4 games, an engine wins 3 and draws one, it is obvious that there is a big difference between the two engines.
Engines of similar strength produce many draws, which is not what happened here.
If I play 4 games between Stockfish and Arasan and the first one wins three of them, do you think that is not reliable as an approximate test?
No, I don't think that's reliable - this can happen quite easily (again, speaking of engines of similar strength, not hundreds of Elo gap)
It took me 10 seconds to find 4 games where two versions of my engine scored 3 1/2 - 1/2 in self-play, but after 10k it was a wash and within error bars.
I see this all the time in tournaments with engines of similar strength, in one tournament you score 40% or less, in another 60% against the same opponent etc.
The problem is the small number of samples, that's all.
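The point about lopsided mini-matches between near-equal engines is easy to reproduce with a quick simulation. A sketch, assuming two exactly equal engines, independent games, and a 50% draw rate (the draw rate is an assumption for illustration):

```python
import random

def lopsided_rate(n_matches=100_000, games=4, draw_rate=0.5, seed=1):
    """Fraction of 4-game mini-matches between two EQUAL engines that
    end 3.5-0.5 or worse for one side."""
    rng = random.Random(seed)
    p_win = (1.0 - draw_rate) / 2.0   # each side wins a decisive game equally often
    lopsided = 0
    for _ in range(n_matches):
        score = 0.0                   # engine A's score
        for _ in range(games):
            r = rng.random()
            if r < draw_rate:
                score += 0.5          # draw
            elif r < draw_rate + p_win:
                score += 1.0          # A wins
            # else: A loses, +0
        if score >= 3.5 or score <= 0.5:
            lopsided += 1
    return lopsided / n_matches

rate = lopsided_rate()
```

With a 50% draw rate this comes out around 7%, i.e. roughly one 4-game mini-match in fourteen between perfectly equal engines ends 3.5-0.5 one way or the other.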
mar wrote: ↑Fri Nov 23, 2018 10:19 pm
Your self-play TC was 1 min per game or similar; his games were 30 sec per move. Due to the draw rate, you might have higher error margins with 20 super-fast self-play games than with 4 games at 30 sec/move.
In his case, if the nets were really of equal strength, the draw probability could easily be 80%. So the probability of one engine winning 3 out of 4 games between equal engines would be well under 1%.
So 4 games could indeed be more than sufficient to show with near-certainty that engine A is stronger than engine B.
Ofc one would need to have some knowledge of statistics, which doesn't seem to be your case...
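The 80%-draw argument can be checked exactly with a small binomial calculation (a sketch; the 80% draw rate is the assumption from the post above). Each engine then wins a given game with probability (1 - 0.8)/2 = 0.1, and "at least 3 wins out of 4 for a specific engine" comes out around 0.4% - a fraction of a percent either way, so the conclusion stands:

```python
from math import comb

def prob_at_least_3_wins(games=4, draw_rate=0.8):
    """P that one specific engine of two EQUAL engines wins >= 3 of
    `games`, assuming i.i.d. games with the given draw rate."""
    p_win = (1.0 - draw_rate) / 2.0   # 0.1 per game for each side at 80% draws
    p_other = 1.0 - p_win             # the game is a draw or a loss instead
    return sum(comb(games, k) * p_win**k * p_other**(games - k)
               for k in range(3, games + 1))

p = prob_at_least_3_wins()   # C(4,3)*0.1^3*0.9 + 0.1^4 = 0.0037
```

So under those assumptions a 3-1 decisive margin in only 4 games is already an event of roughly 1-in-270 for equal engines.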
custom openings that may exaggerate Elo differences (due to the unbalanced nature of the openings)..
Hmm, I read this forum post just before posting this. According to his test with decent hardware (which would closely reflect performance in TCEC or CCCC), the estimate is -200 Elo below the latest SF, whereas the best 11248 net is known to be below -100 Elo (speed ratio 1:1000). Also, on a slow GPU or at very short time control you are mostly testing the strength of the policy head, because MCTS and the value net don't get a good chance to correct the mistakes made by the policy head. https://ibb.co/eq90FV
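The policy-vs-value point can be illustrated with the AlphaZero-style PUCT selection rule: at low visit counts the prior (policy) term dominates move selection, and only with many visits does the value estimate Q take over. A minimal sketch; the constant and the example numbers are assumptions for illustration, not Lc0's actual parameters:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """AlphaZero-style PUCT score: value estimate plus an exploration
    term weighted by the policy prior and shrinking with visits."""
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

# Early in the search (few visits): the high-prior move outranks a
# move with a clearly better value estimate.
early_high_prior = puct_score(q=0.0, prior=0.60, parent_visits=10, child_visits=2)
early_low_prior  = puct_score(q=0.4, prior=0.05, parent_visits=10, child_visits=2)

# Late in the search (many visits): the value estimate dominates and
# the low-prior but better move wins the comparison.
late_high_prior = puct_score(q=0.0, prior=0.60, parent_visits=100_000, child_visits=50_000)
late_low_prior  = puct_score(q=0.4, prior=0.05, parent_visits=100_000, child_visits=50_000)
```

This is why a search limited to a handful of nodes per move is largely a test of the policy head alone.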
20 Mnodes/move close to TCEC performance for SF dev???
Didn't know TCEC used a TC of 10''+0.1''.
People with a wider knowledge of statistics may have heard of the German Tank Problem, which is not exactly the same, but is similar in that you estimate a total number of things from only four observed things (as in this case). This particular chess-score problem is much easier when you only want to show that A is better than B, rather than that A is better than B by 200 Elo.
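For reference, the frequentist minimum-variance unbiased estimator for the German Tank Problem is simple enough to sketch. The serial numbers below are made-up illustration for the four-samples analogy, not data from this thread:

```python
def german_tank_estimate(serials):
    """Minimum-variance unbiased estimate of the total count N, given
    serial numbers assumed drawn without replacement from 1..N."""
    m = max(serials)     # highest serial number observed
    k = len(serials)     # number of observations
    return m + m / k - 1

# Four hypothetical observations:
estimate = german_tank_estimate([19, 40, 42, 60])   # 60 + 60/4 - 1 = 74.0
```

The estimator pushes the observed maximum up by the average gap between samples, which is all four observations can tell you.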
Among testers, his hardware setup is the most similar to TCEC/CCCC's. I think the average speeds of Lc0 and SF in the last CCCC were around 40 knps vs 80 Mnps (a 1:2000 ratio); that would add another -50 Elo to the gap between Lc0 and SF.
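One way to read the extra -50 Elo: going from a 1:1000 to a 1:2000 nps deficit is one additional doubling of the speed gap, and a common rule of thumb prices a doubling of speed at very roughly 50-70 Elo. A sketch of that arithmetic; the 50-Elo-per-doubling figure is an assumption, and raw nps across such different engines is only loosely comparable:

```python
import math

ELO_PER_DOUBLING = 50  # rough rule-of-thumb assumption, not a measured value

def elo_gap_from_speed_change(old_ratio, new_ratio):
    """Extra Elo deficit implied by a change in relative speed,
    priced at ELO_PER_DOUBLING per doubling of the deficit."""
    doublings = math.log2(new_ratio / old_ratio)
    return -ELO_PER_DOUBLING * doublings

# 1:1000 -> 1:2000 is one extra doubling of the deficit:
extra_gap = elo_gap_from_speed_change(1000, 2000)   # -50.0
```

Under that assumption the quoted -50 Elo is exactly one doubling's worth.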