Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Sun Oct 22, 2017 6:09 pm
by MikeB
tpoppins wrote:How do you get +/-11 error bars with just 126 games? It's double that with Bayeselo and higher still with Elostat.
Well, it depends on the settings, right?

This is what I chose to use, just out of habit; YMMV:

Code:

Mac-Pro:~/cluster.mfb] michaelbyrne% bay 
version 0058, Copyright (C) 1997-2016 Remi Coulom and updated by Michael Byrne.
compiled Jul 24 2016 00:03:35.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under the terms and conditions of the GNU General Public License.
See http://www.gnu.org/copyleft/gpl.html for details.

ResultSet>rp /Users/michaelbyrne/cluster.mfb/10212029.pgn 
126 game(s) loaded
ResultSet>elo
ResultSet-EloRating>mm 1 1
Iteration 100: 0.00135696 
00:00:00,00
ResultSet-EloRating>covariance
ResultSet-EloRating>r
Rank Name                      Rating   Δ     +    -     #     Σ    Σ%     W    L    D   W%    =%   OppR 
---------------------------------------------------------------------------------------------------------
   1 SF-McBrain v3.0 TCEC-T2    3106   0.0   11   11   126   66.0  52.4   20   14   92  15.9  73.0  3094 
   2 Stockfish 151017 64 POPC   3094  12.2   11   11   126   60.0  47.6   14   20   92  11.1  73.0  3106 
---------------------------------------------------------------------------------------------------------
  Δ = delta from the next higher rated opponent
  # = number of games played
  Σ = total score, 1 point for win, 1/2 point for draw

ResultSet-EloRating>los
                          SF St
SF-McBrain v3.0 TCEC-T2      86
Stockfish 151017 64 POPC  13   
ResultSet-EloRating>
This is a custom bayeselo: I modified the output as shown above and added some keyboard command shortcuts (rp = readpgn, r = ratings, etc.).
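As a cross-check on the los table above, the same figures can be approximated with the usual normal-approximation shortcut that counts only decisive games. This is a sketch of that textbook formula, not bayeselo's internal Bayesian computation, so it will only roughly agree:

```python
import math

def los(wins: int, losses: int) -> float:
    """Likelihood of superiority via the normal approximation.
    Draws carry no information here, so only decisive games count."""
    if wins + losses == 0:
        return 0.5
    z = (wins - losses) / math.sqrt(wins + losses)
    # standard normal CDF at z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Match above: SF-McBrain scored +20 -14 =92
print(round(100 * los(20, 14)))  # → 85
```

With 20 wins to 14 losses this gives about 85%, in line with the 86/13 split bayeselo prints above.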

Thinking about it, since each engine plays both white and black, I should be using "mm 0 1", and the results would look like this:

Code:

ResultSet>rp /Users/michaelbyrne/cluster.mfb/10212029.pgn
126 game(s) loaded
ResultSet>elo
ResultSet-EloRating>mm 0 1
00:00:00,00
ResultSet-EloRating>covariance
ResultSet-EloRating>r  
Rank Name                      Rating   Δ     +    -     #     Σ    Σ%     W    L    D   W%    =%   OppR 
---------------------------------------------------------------------------------------------------------
   1 SF-McBrain v3.0 TCEC-T2    3108   0.0   15   15   126   66.0  52.4   20   14   92  15.9  73.0  3092 
   2 Stockfish 151017 64 POPC   3092  16.1   15   15   126   60.0  47.6   14   20   92  11.1  73.0  3108 
---------------------------------------------------------------------------------------------------------
  Δ = delta from the next higher rated opponent
  # = number of games played
  Σ = total score, 1 point for win, 1/2 point for draw

ResultSet-EloRating>los
                          SF St
SF-McBrain v3.0 TCEC-T2      85
Stockfish 151017 64 POPC  14   
ResultSet-EloRating>
With that said, I don't take these numbers seriously, and neither should you or anyone else: there are simply not enough games.

https://github.com/MichaelB7/bayeselo
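For comparison with the ± columns above, here is the classical normal-approximation error bar computed directly from the raw W/L/D counts. This is not what bayeselo does internally (bayeselo fits a Bayesian model, so its intervals differ); it is just the textbook sanity check, using the delta method to convert the score's standard error into Elo:

```python
import math

def elo_and_margin(wins: int, losses: int, draws: int, z: float = 1.96):
    """Elo difference and its ~95% margin from raw game counts,
    using the classical normal approximation on the score."""
    n = wins + losses + draws
    p = (wins + 0.5 * draws) / n                      # score fraction
    elo = 400.0 * math.log10(p / (1.0 - p))           # logistic Elo model
    # per-game score variance: E[x^2] - p^2 with x in {1, 0.5, 0}
    var = (wins + 0.25 * draws) / n - p * p
    se = math.sqrt(var / n)                           # standard error of p
    # delta method: dElo/dp = 400 / (ln 10 * p * (1 - p))
    margin = z * se * 400.0 / (math.log(10.0) * p * (1.0 - p))
    return elo, margin

# Match above: +20 -14 =92 over 126 games
elo, margin = elo_and_margin(20, 14, 92)
print(f"{elo:+.1f} +/- {margin:.1f} Elo")
```

On these 126 games this comes out around +17 ± 31 Elo at 95% confidence, noticeably wider than the ±11 bayeselo printed, which is the kind of gap tpoppins is pointing at.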

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 2:25 am
by ernest
MikeB wrote:
tpoppins wrote:How do you get +/-11 error bars with just 126 games? It's double that with Bayeselo and higher still with Elostat.
with that said, I don't take these numbers seriously and neither should you or anyone else
Well, Mike, you are respected for your program and achievements.

So do not spoil it by stating completely wrong error bars.
Error bars are important... Get the correct formulas for them! 8-)

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 3:15 am
by MikeB
ernest wrote:
MikeB wrote:
tpoppins wrote:How do you get +/-11 error bars with just 126 games? It's double that with Bayeselo and higher still with Elostat.
with that said, I don't take these numbers seriously and neither should you or anyone else
Well, Mike, you are respected for your program and achievements.

So do not spoil it by stating completely wrong error bars.
Error bars are important... Get the correct formulas for them! 8-)
Hi Ernest - I am happy to learn from someone who knows more than me. The formulas are Remi Coulom's, and I understand he is very well respected in this field. I'm using his formulas; my only input is "read pgn, elo, mm 1 1, covariance, ratings". I'm not an expert, but I picked up "mm 1 1" and "covariance" from someone else whom I respect greatly; perhaps they are not correct. If somebody who knows more about this than I do would be willing to share, I would be willing to learn.

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 4:14 am
by ernest
Ok Mike, maybe I will ask Remi (who is a friend...).

But something seems fishy in these BayesElo numbers.

Perhaps Kai Laskos or another specialist will read this and comment.

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 4:57 am
by MikeB
MikeB wrote:All flavors of binaries available - a huge call out to Dann Corbit, Lucas Monge and John Stanback for providing the binaries. Details and binaries at the link below.

https://github.com/MichaelB7/Stockfish/releases/tag/3.0

This will push total downloads to over 4,000:

http://www.somsubhra.com/github-release ... =Stockfish
Update: The tactical parameter is currently non-functional; it will be made functional in the next release.

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 7:57 am
by peter
MikeB wrote:
MikeB wrote:Update: The tactical parameter is currently non functional, it will be made functional in the next release.
You mean it does nothing at all if changed :?:

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 9:56 am
by Volker Pittlik
MikeB wrote:...
Update: The tactical parameter is currently non functional, it will be made functional in the next release.
There is also something strange with the UCI_LimitStrength option. It seems to work fine in xboard: when it is enabled, it is possible to set the UCI_Elo.

In cutechess there is only a UCI_LimitStrength option and no way to control the UCI_Elo. If UCI_LimitStrength is enabled, Brainfish always runs at ~35 nps. I also couldn't find out how the UCI_Elo could be set using cutechess-cli's engines.json. Possibly there is something wrong in cutechess.

Volker

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 6:23 pm
by Jamal Bubker
Thanks a lot Michael, Dann, John and Lucas :D :D :D :D

Re: SF-McBrain v3.0 TCEC-X RELEASE

Posted: Mon Oct 23, 2017 8:45 pm
by Vinvin
Thanks, Michael !
Keep up your good work !