It tells us that even clubs considered more-or-less equal can have large rating differences. You can see teams and players rated 100 Elo lower beating their stronger counterparts quite often. I'm not sure what football results tell us about chess, but a hundred Elo represents quite a large win-to-loss ratio in chess when you are talking about grandmasters. Anyway, it doesn't matter what causes the zigzag line for human play; unless you have enough data to make the connecting line look somewhat like a line or a curve, it's pretty hard to guess what its real shape would be.
Removing the effect of the instability of human play would require a large number of moves, probably well over 1000, which is impractical for me: I write results into spreadsheets by hand.
In 2009 I produced the following study, in which the elo vs. accuracy graph had a 200-elo difference between cohorts. There were no zigzags.
http://www.chessanalysis.ee/summary450.pdf
My next study will have 150-elo gaps, spanning 2800 to 1750.
Thank you for the formula; I had no idea it existed. I'm going to use it alongside the good old centipawn method and compare them. If the expected-score method really is superior, then the coefficient of determination of the rating vs. accuracy trend lines should be larger than in the centipawn case.

You got wrong cut-offs and asymptotic behavior with centipawns. All three engines, Houdini, Komodo and Stockfish, obey pretty much the same logistic curve when transforming centipawns (cp) to expected score:
Code:
p = cp/100
a = 1.1  (normalization factor)
ExpectedScore = {1 + (Exp[p/a] - Exp[-p/a]) / (Exp[p/a] + Exp[-p/a])}/2
             = {1 + Tanh[p/a]}/2
You would get more reliable fit using this approximation with correct asymptotic behavior.
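As a quick sanity check, the formula above is just a scaled tanh, so it is easy to implement. This is a minimal sketch, assuming centipawn scores from the side to move's point of view; the function name and the sample evaluations are illustrative, not from the original post:

```python
import math

def expected_score(cp, a=1.1):
    """Convert a centipawn evaluation to an expected score in [0, 1].

    Uses the logistic curve from the post:
        ExpectedScore = (1 + tanh(p/a)) / 2,  where p = cp/100
    with a = 1.1 as the quoted normalization factor.
    """
    p = cp / 100.0
    return (1.0 + math.tanh(p / a)) / 2.0

# Illustrative values (hypothetical evaluations):
print(expected_score(0))     # 0.5 for a dead-equal position
print(expected_score(100))   # roughly 0.86 for a one-pawn advantage
print(expected_score(1000))  # approaches 1.0: correct asymptotic behavior
```

Note the symmetry: expected_score(cp) + expected_score(-cp) = 1, as a win-probability model should satisfy, and the curve saturates toward 0 and 1 for large evaluations instead of growing without bound the way raw centipawn averages do.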