
1 draw=1 win + 1 loss (always!)

Posted: Thu Sep 19, 2013 10:28 am
by Michel
A while ago HGM wrote
Now for the Logistic, F(x-d)*F(-x-d) happens to be the same function (deviation O(d^2)) as 1-F(x-d)-F(-x-d). Thus, when BayesElo is maximizing the total likelihood for the ratings (which also contains factors for the likelihood of all other results), the win+loss will give exactly the same contribution to the likelihood as the single draw.
Actually I noticed that this relation holds _exactly_ (not just up to O(d^2)).

In the BayesElo model one has the identity

P(draw)=c*P(win)*P(loss) (*)

where c=10**(d/200)-1

In fact one may think of (*) as a characterizing property of the BE model (for two
players).
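
For anyone who wants to see it, the identity (*) is easy to verify numerically. Here is a minimal sketch in Python, assuming the logistic on the usual Elo scale; the variable names are mine, and delta and d are arbitrary test values:

```python
# Numerical check of the identity (*): P(draw) = c * P(win) * P(loss)
# with c = 10**(d/200) - 1, in the BayesElo (Rao-Kupper) model.
# The logistic is taken on the usual Elo scale, F(x) = 1/(1 + 10**(-x/400));
# delta (rating difference) and d (draw parameter) are arbitrary test values.

def F(x):
    return 1.0 / (1.0 + 10.0 ** (-x / 400.0))

delta, d = 130.0, 97.3         # arbitrary illustration values
p_win  = F(delta - d)          # P(White wins)
p_loss = F(-delta - d)         # P(White loses)
p_draw = 1.0 - p_win - p_loss  # P(draw)
c = 10.0 ** (d / 200.0) - 1.0

print(abs(p_draw - c * p_win * p_loss))   # ~0, up to floating-point rounding
```

The agreement holds up to rounding for any delta and d, which is the point: the relation is exact, not just O(d^2).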

Probably Larry Kaufman would prefer to see

P(draw)**2=c*P(win)*P(loss)

Again in the case of two players this model is characterized by the value of c.

Re: 1 draw=1 win + 1 loss (always!)

Posted: Thu Sep 19, 2013 12:19 pm
by Daniel Shawul
What's new here? To be precise, it is only the Rao-Kupper draw model that says 1 draw = 1 win + 1 loss.
Probably Larry Kaufman would prefer to see

P(draw)**2=c*P(win)*P(loss)
I don't see how you arrived at this from the previous relation. This, I am afraid, is a different draw model, namely Davidson, not the one currently used by bayeselo. In fact, it says 2 draws = 1 win + 1 loss. There is a third model that says 1.5 draws = 1 win + 1 loss. More at http://www.grappa.univ-lille3.fr/~coulo ... tcomes.pdf
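
For concreteness, the first two models can be sketched in their standard parameterizations (Rao-Kupper with draw parameter theta >= 1, Davidson with nu >= 0); pi = 10**(delta/400) is the usual Elo strength ratio, and all numeric values below are arbitrary illustrations. Glenn-David is omitted here:

```python
# Sketch of the likelihood identities behind "k draws = 1 win + 1 loss":
# Rao-Kupper (k = 1):  P(draw)    = c * P(win) * P(loss)
# Davidson   (k = 2):  P(draw)**2 = c * P(win) * P(loss)
# pi = 10**(delta/400) is the usual Elo strength ratio; theta and nu are
# the models' draw parameters.  All numeric values are arbitrary.

def rao_kupper(pi, theta):
    p_win  = pi / (pi + theta)
    p_loss = 1.0 / (1.0 + theta * pi)
    p_draw = 1.0 - p_win - p_loss      # equals (theta**2 - 1) * p_win * p_loss
    return p_win, p_draw, p_loss

def davidson(pi, nu):
    D = pi + 1.0 + nu * pi ** 0.5
    p_win, p_draw, p_loss = pi / D, nu * pi ** 0.5 / D, 1.0 / D
    return p_win, p_draw, p_loss

pi = 10.0 ** (85.0 / 400.0)            # an 85-Elo advantage, say
w, dr, l = rao_kupper(pi, theta=1.7)
print(abs(dr - (1.7 ** 2 - 1.0) * w * l))   # ~0
w, dr, l = davidson(pi, nu=0.9)
print(abs(dr ** 2 - 0.9 ** 2 * w * l))      # ~0
```

In the likelihood, one draw under Rao-Kupper contributes the same factor as a win plus a loss, while under Davidson it takes two draws to do so, which is where the "k draws" slogans come from.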

Re: 1 draw=1 win + 1 loss (always!)

Posted: Thu Sep 19, 2013 12:41 pm
by Michel
Ah! I did not know that reference. Thanks.

So the model that would be preferred by Larry Kaufman is the Davidson model. It does satisfy the relation

P(draw)**2=c*P(win)*P(loss)

(with only two players it does not make sense to talk about an Elo model).

Re: 1 draw=1 win + 1 loss (always!)

Posted: Thu Sep 19, 2013 5:06 pm
by AlvaroBegue
That .pdf file looks like an early draft that should have never seen the light... Is the final paper available anywhere?

Re: 1 draw=1 win + 1 loss (always!)

Posted: Thu Sep 19, 2013 6:48 pm
by Daniel Shawul
AlvaroBegue wrote:That .pdf file looks like an early draft that should have never seen the light... Is the final paper available anywhere?
Yeah, but you can't deny the quality beginning :) I tried to fill in the missing sections some months ago, broken English and all, but it hasn't moved on since then. I just contacted Remi about it, and times have changed for him too, but I am planning to finish it up and get it published somewhere for once! If you are interested, I can send you a better version of the pdf, which you can read while helping me by giving feedback. Let me know.

Re: 1 draw=1 win + 1 loss (always!)

Posted: Sat Sep 21, 2013 12:58 pm
by Rémi Coulom
Daniel Shawul wrote:
AlvaroBegue wrote:That .pdf file looks like an early draft that should have never seen the light... Is the final paper available anywhere?
Yeah, but you can't deny the quality beginning :) I tried to fill in the missing sections some months ago, broken English and all, but it hasn't moved on since then. I just contacted Remi about it, and times have changed for him too, but I am planning to finish it up and get it published somewhere for once! If you are interested, I can send you a better version of the pdf, which you can read while helping me by giving feedback. Let me know.
http://www.grappa.univ-lille3.fr/~coulo ... tcomes.pdf

I updated that file with the current draft. It contains some data. We initially planned to finish the paper for the CG'2013 conference in Yokohama, but did not finish it in time. There is some writing left to do, but the data is there.

Rémi

Re: 1 draw=1 win + 1 loss (always!)

Posted: Sat Sep 21, 2013 3:58 pm
by Rein Halbersma
Rémi Coulom wrote:
Daniel Shawul wrote:
AlvaroBegue wrote:That .pdf file looks like an early draft that should have never seen the light... Is the final paper available anywhere?
Yeah, but you can't deny the quality beginning :) I tried to fill in the missing sections some months ago, broken English and all, but it hasn't moved on since then. I just contacted Remi about it, and times have changed for him too, but I am planning to finish it up and get it published somewhere for once! If you are interested, I can send you a better version of the pdf, which you can read while helping me by giving feedback. Let me know.
http://www.grappa.univ-lille3.fr/~coulo ... tcomes.pdf

I updated that file with the current draft. It contains some data. We initially planned to finish the paper for the CG'2013 conference in Yokohama, but did not finish it in time. There is some writing left to do, but the data is there.

Rémi
Re: drawing percentage as a function of strength, this seems to follow naturally from a statistical model where two players make moves with a small probability of an error. Maybe the value of the current position can then be modelled to follow a random walk with a drift proportional to the difference of the cumulative errors of both players. Once the position value crosses a threshold, a win or loss is realized.
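
As a toy illustration of this suggestion (not a fitted model: the error means, threshold, game length and game count below are invented numbers), one can simulate such a walk:

```python
# Toy Monte Carlo of the random-walk picture sketched above: the position
# value drifts by the per-move error difference of the two players;
# crossing +T is a win for White, -T a loss, and surviving all moves is a
# draw.  Error means, threshold and game length are made-up illustration
# values, not fitted to any chess data.

import random

def simulate(err_white, err_black, threshold=10.0, moves=80,
             games=20000, seed=1):
    rng = random.Random(seed)
    wins = draws = losses = 0
    for _ in range(games):
        value = 0.0
        for _ in range(moves):
            # Black's error pushes the value up (good for White), and
            # vice versa; errors are exponential with the given means.
            value += rng.expovariate(1.0 / err_black)
            value -= rng.expovariate(1.0 / err_white)
            if value >= threshold:
                wins += 1
                break
            if value <= -threshold:
                losses += 1
                break
        else:
            draws += 1
    return wins / games, draws / games, losses / games

# Smaller mean error for White -> positive drift -> White scores > 50%.
print(simulate(err_white=0.9, err_black=1.0))
```

Shrinking both error means toward zero inflates the draw fraction, which is the qualitative behaviour one would want from a "draw rate grows with strength" model.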

Re: 1 draw=1 win + 1 loss (always!)

Posted: Sat Sep 21, 2013 4:07 pm
by Rémi Coulom
Rein Halbersma wrote:Re: drawing percentage as a function of strength, this seems to follow naturally from a statistical model where two players make moves with a small probability of an error. Maybe the value of the current position can then be modelled to follow a random walk with a drift proportional to the difference of the cumulative errors of both players. Once the position value crosses a threshold, a win or loss is realized.
This sounds very much like the Glenn-David model. The distribution of a sum of iid random values has the shape of a Gaussian.

Re: 1 draw=1 win + 1 loss (always!)

Posted: Sat Sep 21, 2013 4:23 pm
by lkaufman
Daniel Shawul wrote:What's new here? To be precise, it is only the Rao-Kupper draw model that says 1 draw = 1 win + 1 loss.
Probably Larry Kaufman would prefer to see

P(draw)**2=c*P(win)*P(loss)
I don't see how you arrived at this from the previous relation. This, I am afraid, is a different draw model, namely Davidson, not the one currently used by bayeselo. In fact, it says 2 draws = 1 win + 1 loss. There is a third model that says 1.5 draws = 1 win + 1 loss. More at http://www.grappa.univ-lille3.fr/~coulo ... tcomes.pdf
Well it's nice that there are so many draw models to choose from. But has there been any work on determining which one actually fits data from the game of chess? Clearly the Davidson model is the one that fits the way tournaments are scored and Elo ratings are calculated, so the burden of proof lies with anyone claiming that one of the other models is superior.

Re: 1 draw=1 win + 1 loss (always!)

Posted: Sat Sep 21, 2013 5:00 pm
by Daniel Shawul
lkaufman wrote:
Daniel Shawul wrote:What's new here? To be precise, it is only the Rao-Kupper draw model that says 1 draw = 1 win + 1 loss.
Probably Larry Kaufman would prefer to see

P(draw)**2=c*P(win)*P(loss)
I don't see how you arrived at this from the previous relation. This, I am afraid, is a different draw model, namely Davidson, not the one currently used by bayeselo. In fact, it says 2 draws = 1 win + 1 loss. There is a third model that says 1.5 draws = 1 win + 1 loss. More at http://www.grappa.univ-lille3.fr/~coulo ... tcomes.pdf
Well it's nice that there are so many draw models to choose from. But has there been any work on determining which one actually fits data from the game of chess?
Yes, this one.
Clearly the Davidson model is the one that fits the way tournaments are scored and Elo ratings are calculated, so the burden of proof lies with anyone claiming that one of the other models is superior.
'Clearly' you don't know what you are talking about, as I pointed out before when you tried to tell us how ordo is better than bayeselo (when in reality you made a mistake in understanding the scale). You should give citations as to why 2 draws = 1 win + 1 loss (Davidson) is 'clearly' better than 1 draw = 1 win + 1 loss (Rao-Kupper). In fact, most use the Glenn-David model (1.5 draws = 1 win + 1 loss), for example Microsoft's rating software. There is even a work that says Davidson is worse for _human_ ratings, so you as a human GM should 'clearly' have knowledge of such things. You can find such statements here: "Joe, H. (1990). Extended use of paired comparison models, with applications to chess rankings. Journal of the Royal Statistical Society, 39(1):85–93." I think you should let your pal Glickman judge the merit of such works, because 'clearly' you are not qualified, and 'clearly' he is.