UCI_Elo

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Fri Jul 12, 2019 6:21 pm

Danasah human is around 1300.

Code: Select all

   # PLAYER                              :  RATING  ERROR  POINTS  PLAYED   (%)
   1 Amyan 1.72 ucielo 1500              :  2351.6  137.5   112.0     132    85
   2 Cheese 2.1 ucielo 1500              :  2340.2  132.6   111.0     132    84
   3 Cheng 4.39 ucielo 1500              :  2329.1  132.5   110.0     132    83
   4 Fruit reloaded v3.21 ucielo 1500    :  2311.6  130.9   106.5     130    82
   5 Ufim v8.02 ucielo 1500              :  2146.3  118.6    99.5     146    68
   6 Rhetoric 1.4.3 ucielo 1500          :  2112.5  120.9    86.0     130    66
   7 DanaSah 7.9 ucielo 1500             :  2101.7  116.2    79.5     132    60
   8 MadChess 2.2 ucielo 1500            :  2088.8  115.2    92.0     146    63
   9 Houdini 3 ucielo 1500               :  2063.8  128.7    81.5     112    73
  10 D2019.2.37.53 ucielo 1500           :  2019.8  114.7    77.5     132    59
  11 Discocheck 5.2 ucielo 1500          :  1848.6  111.2    59.5     132    45
  12 Iota 1.0 ccrl 1019                  :  1821.1  158.6    15.5      46    34
  13 CT800 V1.34 ucielo 1500             :  1758.5  110.0    53.0     148    36
  14 Arasan 21.3 ucielo 1500             :  1662.1  112.8    41.5     132    31
  15 Hiarcs 14 ucielo 1500               :  1510.6  113.7    28.5     146    20
  16 NSVChess v0.14 ccrl 946             :  1500.0   ----    21.0     212    10
  17 DanaSah 7.9 human ucielo 1500       :  1278.1  156.1     7.5     208     4


pedrox
Posts: 991
Joined: Fri Mar 10, 2006 5:07 am
Location: Basque Country (Spain)

Re: UCI_Elo

Post by pedrox » Fri Jul 12, 2019 7:54 pm

Thanks for the results.

I had not seen anywhere that UCI_Elo refers to FIDE Elo. But it makes sense that when a user uses limit strength, it is to play against the engine, and in that case offering a FIDE Elo is reasonable (although I have also used my engine against dedicated machines of the 80s-90s). I will make the "human" version the default and study how to implement the other options.

I think the "engine" version played more or less at the level I expected; the "human" version came out much lower than I expected. I will try to increase the strength of this mode by 200 Elo points.

In my engine, I could adjust the strength by changing values in the configuration options. For example:

Diff engine = 50
Diff computer-engine = 350
Diff human-computer = 70

With these values it is possible that the engine in "human" mode would play at something like 1500. But I will have to check whether this holds, and whether the adjustment then works for other values.

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Fri Jul 12, 2019 11:46 pm

According to the UCI protocol, UCI_Elo refers to Elo; since there is no more popular chess rating than FIDE Elo, I believe this is FIDE Elo. Mark, the author of Hiarcs, is probably aware of this: his engine at UCI_Elo 1500 is close.
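For reference, the UCI protocol exposes strength limiting through two engine options announced at startup. The default/min/max values shown here are illustrative and vary per engine:

```text
option name UCI_LimitStrength type check default false
option name UCI_Elo type spin default 1500 min 1350 max 2850
```

The GUI enables UCI_LimitStrength and then sets UCI_Elo to the desired rating.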

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Sat Jul 13, 2019 5:05 pm

A sample game against DanaSah using the chess GUI that I have been developing, featuring two TCs, one with a time delay. DanaSah played at TC 5min+10s (Fischer), and I am on TC 5min with a 10s delay.

The pgn source has clk or clock tags showing the time remaining after each move. Press C8 on the board.




And a game with Arasan

MikeB
Posts: 3386
Joined: Thu Mar 09, 2006 5:34 am
Location: Pen Argyl, Pennsylvania

Re: UCI_Elo

Post by MikeB » Sat Jul 13, 2019 5:41 pm

pedrox wrote:
Fri Jul 12, 2019 7:54 pm
Thanks for the results.

I had not seen anywhere that UCI_Elo refers to FIDE Elo. But it makes sense that when a user uses limit strength, it is to play against the engine, and in that case offering a FIDE Elo is reasonable (although I have also used my engine against dedicated machines of the 80s-90s). I will make the "human" version the default and study how to implement the other options.

I think the "engine" version played more or less at the level I expected; the "human" version came out much lower than I expected. I will try to increase the strength of this mode by 200 Elo points.

In my engine, I could adjust the strength by changing values in the configuration options. For example:

Diff engine = 50
Diff computer-engine = 350
Diff human-computer = 70

With these values it is possible that the engine in "human" mode would play at something like 1500. But I will have to check whether this holds, and whether the adjustment then works for other values.
It is commonly accepted that UCI_Elo means something in the ballpark of FIDE Elo. Of course, many national federations have their own rating systems, and even the engine-vs-engine testers try to maintain scales that are supposed to align with FIDE Elo. But with no interaction between the universe of human players and the universe of engine players, it is impossible to say CCRL equals FIDE, etc. We do know top players are rated around 2800 FIDE, and it does appear from a distance that an engine rated near 2800 CCRL is probably close to 2800 FIDE, but who really knows for sure. The honest answer is that we do not know exactly, but it is probably in range if you use very large error bars: say, 2800 CCRL is probably within plus or minus 100 Elo of 2800 FIDE, and the same holds at 1500. Even with this off-the-cuff comment, somebody else will say "no, it's xyz," and they could be right or they could be wrong. I would be shocked to find the error bar is more than 200 Elo off, but who knows.

The very best players in the world no longer like to play the best engines in public, and I don't blame them one iota: the difference is now in the multiple hundreds of Elo, and they have almost no shot at winning even one game. Drawing a game now and then is probably the best they can do.

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Sun Jul 14, 2019 8:17 am

Reduced Deuterium's node limit again, this time to 200; now it is much closer to CT800. No other tricks added to reduce strength so far, just the node reduction. Deuterium played at TC 3m+2s while I used TC 10m with a 10s delay.

Code: Select all

   # PLAYER                                   :  RATING  ERROR  POINTS  PLAYED   (%)
   1 Amyan 1.72 ucielo 1500                   :  2298.3  118.2   120.0     140    86
   2 Cheng 4.39 ucielo 1500                   :  2281.8  117.3   118.5     140    85
   3 Cheese 2.1 ucielo 1500                   :  2245.1  118.0   115.0     140    82
   4 Fruit reloaded v3.21 ucielo 1500         :  2238.3  104.1   112.5     138    82
   5 Ufim v8.02 ucielo 1500                   :  2128.0   98.1   110.5     154    72
   6 Rhetoric 1.4.3 ucielo 1500               :  2062.9  108.5    93.5     138    68
   7 DanaSah 7.9 ucielo 1500                  :  2061.9  108.7    74.0     124    60
   8 MadChess 2.2 ucielo 1500                 :  2061.7   95.5   101.5     154    66
   9 Houdini 3 ucielo 1500                    :  2055.3  101.6    95.0     128    74
  10 Discocheck 5.2 ucielo 1500               :  1822.5  105.6    66.5     140    48
  11 Iota 1.0 ccrl 1019                       :  1822.1  121.7    24.5      58    42
  12 Deuterium v2019.2.37.53 ucielo 1500      :  1784.4   86.1    91.0     240    38
  13 CT800 V1.34 ucielo 1500                  :  1731.5   90.1    58.0     156    37
  14 Arasan 21.3 ucielo 1500                  :  1619.6   90.4    44.5     156    29
  15 Hiarcs 14 ucielo 1500                    :  1510.7  104.2    32.5     154    21
  16 NSVChess v0.14 ccrl 946                  :  1500.0   ----    25.0     212    12
  17 DanaSah 7.9 human ucielo 1500            :  1243.9  129.1     7.5     208     4

Sample game of how it played. It does not blunder material outright, but you have to play combinations to outplay it.
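The node-cap weakening described above (a fixed node budget per move) can be sketched generically. This is only an illustration of the idea, not Deuterium's actual code; the toy tree and scores are made up:

```python
import math

def negamax(state, depth, children, evaluate, budget):
    """Depth-limited negamax that falls back to the static eval once the
    node budget (a mutable one-element list) runs out -- the weakening knob."""
    budget[0] -= 1
    kids = children(state)
    if depth == 0 or not kids or budget[0] <= 0:
        return evaluate(state)
    best = -math.inf
    for child in kids:
        best = max(best, -negamax(child, depth - 1, children, evaluate, budget))
    return best

# Toy two-ply game: leaf scores are from the side to move at the leaf.
tree = {"root": ["a", "b"], "a": [], "b": []}
scores = {"root": 0, "a": 30, "b": -10}

print(negamax("root", 2, tree.__getitem__, scores.__getitem__, budget=[200]))  # -> 10
print(negamax("root", 2, tree.__getitem__, scores.__getitem__, budget=[1]))    # -> 0
```

With a generous budget the search picks the best line; with the budget exhausted at the root it returns the raw static eval, which is how a node cap degrades play without any other tricks.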


Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Mon Jul 15, 2019 12:05 am

Reduced strength by randomizing the piece values of the queen and rook. Before a search is made, the queen value is randomized between 400 and 700 cp, and the rook between 300 and 700 cp. This is Deuterium v2019.2.37.54 ucielo 1500. Note the basic weakening at this 1500 Elo level is still 200 nodes a move, plus this material randomizer.

The base version is Deuterium v2019.2.37.53 ucielo 1500 which is run at 200 nodes.
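A minimal sketch of such a pre-search material randomizer. The queen and rook ranges follow the post; the value table and function name are hypothetical, not Deuterium's internals:

```python
import random

# Nominal centipawn values (hypothetical baseline table).
PIECE_VALUE = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900, "K": 0}

def randomize_material(rng=random):
    """Before each search, jitter Q and R inside the ranges described in
    the post: queen in [400, 700] cp, rook in [300, 700] cp."""
    values = dict(PIECE_VALUE)
    values["Q"] = rng.randint(400, 700)
    values["R"] = rng.randint(300, 700)
    return values

vals = randomize_material()
print(vals["Q"], vals["R"])  # e.g. 523 641 -- varies per search
```

Because the search then reasons with distorted material values, the engine will occasionally accept bad trades, which is the intended human-like weakness.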

Result at TC 40/2min. It is now below CT800.

Code: Select all

   # PLAYER                                 :  RATING  ERROR  POINTS  PLAYED   (%)
   1 Amyan 1.72 ucielo 1500                 :  2290.3  118.7   135.5     156    87
   2 Cheng 4.39 ucielo 1500                 :  2279.5  131.5   134.5     156    86
   3 Cheese 2.1 ucielo 1500                 :  2234.1  118.4   130.0     156    83
   4 Fruit reloaded v3.21 ucielo 1500       :  2208.5  119.5   125.5     154    81
   5 Ufim v8.02 ucielo 1500                 :  2121.1  102.3   125.5     170    74
   6 MadChess 2.2 ucielo 1500               :  2065.4  105.2   117.5     170    69
   7 Rhetoric 1.4.3 ucielo 1500             :  2054.5  106.5   107.5     154    70
   8 DanaSah 7.9 ucielo 1500                :  2053.2  113.2    74.0     124    60
   9 Houdini 3 ucielo 1500                  :  2037.3  113.3   107.5     144    75
  10 Iota 1.0 ccrl 1019                     :  1821.9  131.7    24.5      58    42
  11 Discocheck 5.2 ucielo 1500             :  1819.8   96.7    77.5     156    50
  12 Deuterium v2019.2.37.53 ucielo 1500    :  1788.9   92.1   103.0     256    40
  13 CT800 V1.34 ucielo 1500                :  1727.9  108.0    67.0     172    39
  14 Deuterium v2019.2.37.54 ucielo 1500    :  1672.7   96.6    60.5     224    27
  15 Arasan 21.3 ucielo 1500                :  1628.3  102.4    52.5     172    31
  16 Hiarcs 14 ucielo 1500                  :  1511.7  108.3    37.0     170    22
  17 NSVChess v0.14 ccrl 946                :  1500.0   ----    25.0     212    12
  18 DanaSah 7.9 human ucielo 1500          :  1259.7  124.7     9.5     224     4

Sample game with Deuterium v2019.2.37.54 ucielo 1500. There is a bit of a struggle in the opening, and it keeps weakening its squares. It gives up its queen without much of a fight; perhaps this is because of the randomized queen value in [400, 700] cp.
I played at TC 10m with a 10s delay; Deuterium was at TC 3m+2s.


Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Fri Jul 19, 2019 11:37 am

Collected some games played by players rated 1400 to 1600 Elo, based on TWIC from 2018 to July 2019. This could be useful for approximating UCI_Elo 1500 for engine authors who may wish to implement UCI_Elo in their engines or improve a current implementation.

The source pgn file is cleaned and duplicates are removed with pgn-extract.

7000 plus games, white has a rating from 1400 to 1600.
https://drive.google.com/file/d/1eD0a9z ... sp=sharing

7000 plus games, black has a rating from 1400 to 1600.
https://drive.google.com/file/d/1eoXGxQ ... sp=sharing
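For illustration, the rating-band selection can also be done with a small stdlib-only Python filter on the WhiteElo tag (pgn-extract's tag-filtering did the real work here; this sketch just shows the idea, and the game-splitting heuristic assumes each game starts with an [Event tag):

```python
import re

ELO_TAG = re.compile(r'\[WhiteElo "(\d+)"\]')

def in_band(game_text, lo=1400, hi=1600):
    """True if the game's WhiteElo header falls inside [lo, hi]."""
    m = ELO_TAG.search(game_text)
    return m is not None and lo <= int(m.group(1)) <= hi

def filter_games(pgn_text, lo=1400, hi=1600):
    """Split a PGN dump at each [Event tag and keep the in-band games."""
    games = re.split(r'\n(?=\[Event )', pgn_text)
    return [g for g in games if in_band(g, lo, hi)]

sample = ('[Event "A"]\n[WhiteElo "1475"]\n\n1. e4 e5 *\n'
          '[Event "B"]\n[WhiteElo "2100"]\n\n1. d4 *')
print(len(filter_games(sample)))  # -> 1
```

The same filter on BlackElo would produce the second file.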

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Tue Jul 23, 2019 1:53 pm

The latest Stockfish has a UCI_Elo feature; it is now included in the test for 1500.
I also made some revisions to Deuterium: it is now limited to 300 nodes and randomizes the piece values of [Q, R, B and N] at most 50% of the time for all moves in a game.

TC 40/2m

Code: Select all

   # PLAYER                                 :  RATING  ERROR  POINTS  PLAYED   (%)
   1 Cheng 4.39 ucielo 1500                 :  2282.8  121.0   134.5     156    86
   2 Cheese 2.1 ucielo 1500                 :  2258.0  114.7   132.0     156    85
   3 Fruit reloaded v3.21 ucielo 1500       :  2233.2  113.5   127.5     154    83
   4 Amyan 1.72 ucielo 1500                 :  2220.8  113.6   128.0     156    82
   5 Ufim v8.02 ucielo 1500                 :  2109.9  102.7   123.0     170    72
   6 Rhetoric 1.4.3 ucielo 1500             :  2070.8   99.8   107.5     154    70
   7 DanaSah 7.9 ucielo 1500                :  2039.2   89.6    74.0     124    60
   8 Houdini 3 ucielo 1500                  :  2006.3  104.8   101.5     144    70
   9 MadChess 2.2 ucielo 1500               :  1994.0  112.5   104.5     170    61
  10 Deuterium v2019.2.37.59 ucielo 1500    :  1862.7   94.7   117.5     256    46
  11 Stockfish 2019.07.14 ucielo 1500       :  1842.8   96.9   112.5     256    44
  12 Discocheck 5.2 ucielo 1500             :  1795.4  104.3    69.0     156    44
  13 Iota 1.0 ccrl 1019                     :  1766.3  117.5    26.0      74    35
  14 CT800 V1.34 ucielo 1500                :  1762.9   86.2    67.5     172    39
  15 Arasan 21.3 ucielo 1500                :  1648.5  104.5    50.5     172    29
  16 Hiarcs 14 ucielo 1500                  :  1534.2   96.4    35.5     170    21
  17 NSVChess v0.14 ccrl 946                :  1500.0   ----    25.5     228    11
  18 DanaSah 7.9 human ucielo 1500          :  1295.1  132.4     9.5     224     4


Meanwhile, I created a test set for these UCI_Elo 1500 engines. The test positions are from human players with Elo 1450 to 1550. The main goal is to find which UCI engines have the greatest number of matches in the test. A test EPD record looks something like this,

Code: Select all

3r2k1/p5p1/1pR4p/4R3/3r4/8/PP4PP/6K1 b - - bm Rd2; ce 0; c0 "Rd1+"; c1 "154";
The bm Rd2 is the move played by a human with an Elo rating within 1450 to 1550 in an actual game.
The ce 0 is the centipawn eval score of the move Rd2, based on Stockfish dev at 1 sec of analysis on an i7 3.4 GHz PC.
The Rd1+ is the move preferred by Stockfish dev, with a score of 154 cp. I have collected around 60k test positions.
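The fields of such an EPD record can be pulled apart with a few lines of Python. This is a sketch that just follows the sample line above, not a full EPD implementation:

```python
def parse_epd(line):
    """Split an EPD line into its four FEN fields plus the opcode values
    (bm, ce, c0, c1 in the sample)."""
    parts = line.strip().rstrip(";").split(" ", 4)
    ops = {}
    for op in parts[4].split(";"):
        op = op.strip()
        if op:
            key, _, val = op.partition(" ")
            ops[key] = val.strip('"')
    return {"fen4": " ".join(parts[:4]), **ops}

rec = parse_epd('3r2k1/p5p1/1pR4p/4R3/3r4/8/PP4PP/6K1 b - - '
                'bm Rd2; ce 0; c0 "Rd1+"; c1 "154";')
print(rec["bm"], rec["ce"], rec["c0"], rec["c1"])  # -> Rd2 0 Rd1+ 154
```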

Now to test these UCI_Elo 1500 engines, each position is given to the engine, which is allowed to search for 1 sec per position. Whenever the engine's bestmove and the bm are the same, a match counter is incremented. Aside from the match counter, I also record the positions where the human's bm is not the same as the engine's bestmove. The engine's move can be stronger or weaker than the human move: if the engine move is stronger, I record it in the High counter; if it is weaker, I record it in the Low counter. Other items are also recorded, such as the average difference between the engine's move score and the human's move score when their best moves differ.
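That bookkeeping can be sketched as follows. The record format here is hypothetical; the centipawn scores are assumed to come from the reference engine's analysis:

```python
def score_engine(records):
    """records: iterable of (human_move, engine_move, human_cp, engine_cp).
    Returns Match/High/Low counts and the HACD/LACD averages."""
    match = high = low = 0
    high_diffs, low_diffs = [], []
    for hmove, emove, hcp, ecp in records:
        if emove == hmove:
            match += 1
        elif ecp > hcp:            # engine move stronger than human move
            high += 1
            high_diffs.append(ecp - hcp)
        else:                      # weaker (ties with a different move land here)
            low += 1
            low_diffs.append(hcp - ecp)
    return {
        "Match": match, "High": high, "Low": low,
        "HACD": sum(high_diffs) / len(high_diffs) if high_diffs else 0,
        "LACD": sum(low_diffs) / len(low_diffs) if low_diffs else 0,
    }

demo = [("Rd2", "Rd2", 0, 0), ("e4", "d4", 0, 154), ("Nf3", "h4", 20, -80)]
print(score_engine(demo))  # -> Match 1, High 1, Low 1, HACD 154.0, LACD 100.0
```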

Results on 1000 test positions.

Code: Select all

UCI_Elo 1500 engine test results on FIDE Elo 1500
Test positions are taken from players with FIDE Elo 1450 to 1550

                               Engine  Total  Match  High  Low  HACD  LACD
 Deuterium v2019.2.37.59 UCI_Elo 1500   1000    362   291  347   357   335
             Arasan 21.3 UCI_Elo 1500   1000    305   266  429   494   313
              Ufim v8.02 UCI_Elo 1500   1000    428   280  292   475   332
             CT800 V1.34 UCI_Elo 1500   1000    333   244  423   268   739
       DanaSah 7.9 Human UCI_Elo 1500   1000    332   250  418   392   577
    Stockfish 2019.07.14 UCI_Elo 1500   1000    254   263  483   493   356
              Cheng 4.39 UCI_Elo 1500   1000    408   325  267   442   329
          Discocheck 5.2 UCI_Elo 1500   1000    368   276  356   250   422
               Houdini 3 UCI_Elo 1500   1000    360   263  377   393   217
              Amyan 1.72 UCI_Elo 1500   1000    348   239  413   399   738
          Rhetoric 1.4.3 UCI_Elo 1500   1000    359   286  355   240   664
               Hiarcs 14 UCI_Elo 1500   1000    342   239  419   286   440
              Cheese 2.1 UCI_Elo 1500   1000    432   312  256   441   432

Code: Select all

::Legend::
Total: Number of test positions from human games.
Match: Count of pos, where engine and human move are the same.
High : Count of pos, where engine move is stronger than human move.
Low  : Count of pos, where engine move is weaker than human move.
HACD : High Average Centipawn Difference, engine move is stronger 
       than human move by Centipawn amount, according to Stockfish 2019.04.16.
LACD : Low Average Centipawn Difference, engine move is weaker 
       than human move by Centipawn amount, according to Stockfish 2019.04.16.
Table interpretation:
Deuterium matched the human move 362 times, or 100*362/1000 = 36.2%. In relative comparison, the engine with the most matches is Cheese 2.1 at 43.2%. The HACD of Deuterium is 357 cp, around three and a half pawns: when Deuterium's move is stronger than the human move, it is on average 357 cp above the human move's score. To simulate human play, HACD should be minimal; of the engines tested, Rhetoric has the best at 240 cp. This means that when you play against Rhetoric, it can play stronger moves at an average of 240 cp above the human move. LACD is the opposite of HACD: the engine move is weaker than the human move. For Deuterium, LACD is 335 cp, meaning that when Deuterium plays a bad move, on average it gives 335 cp of advantage to its opponent. Looking at the table, the engines that give away the most advantage are CT800 at 739 cp and Amyan at 738 cp. For humans in a lower rating range these engines are good to play against, but be aware of their HACD values too.

So how do we rank engines that play like humans, based on human test positions?
I can list the following criteria:
1. Match (max is better)
2. High (min is better)
3. Low (max is better)
4. HACD (min is better)
5. LACD (max is better)

This looks like an MCDA (Multi-Criteria Decision Analysis) problem, where alternatives are ranked according to several criteria. One technique for ranking alternatives is TOPSIS.
TOPSIS ref.
https://en.wikipedia.org/wiki/TOPSIS
https://www.slideshare.net/pranavmishra ... g-approach

With that table I tried to rank those engines using TOPSIS, via the skcriteria Python module.
Here are the results, with the following weight applied to each criterion:
match, weight=0.6
High, weight=0.05
Low, weight=0.05
HACD, weight=0.2
LACD, weight=0.1
Total weight is 1.0. The table columns also indicate whether min or max is preferable for each criterion; that is my input too.
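For reference, TOPSIS itself is compact enough to write out from scratch with these same weights and min/max directions. This is a sketch of the standard algorithm (vector normalization, weighted distances to the ideal and anti-ideal points), not the skcriteria implementation:

```python
import math

def topsis(matrix, weights, maximize):
    """Rank alternatives (1 = best). matrix[i][j] is criterion j of alt i;
    maximize[j] says whether criterion j is 'max is better'."""
    n_crit = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weight.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal and anti-ideal points per criterion.
    ideal = [max(col) if maximize[j] else min(col) for j, col in enumerate(zip(*v))]
    anti  = [min(col) if maximize[j] else max(col) for j, col in enumerate(zip(*v))]
    dist = lambda row, pt: math.sqrt(sum((a - b) ** 2 for a, b in zip(row, pt)))
    closeness = [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]
    order = sorted(range(len(matrix)), key=lambda i: -closeness[i])
    ranks = [0] * len(matrix)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

# Criteria: Match, High, Low, HACD, LACD, with the directions from the post.
weights = [0.6, 0.05, 0.05, 0.2, 0.1]
maximize = [True, False, True, False, True]
# Two toy alternatives; the first dominates on every criterion.
print(topsis([[432, 239, 483, 240, 738], [254, 325, 256, 494, 144]],
             weights, maximize))  # -> [1, 2]
```

Feeding the full 13-row table from the post into this function should reproduce the ordering, up to ties in the closeness scores.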

Code: Select all

TOPSIS (mnorm=vector, wnorm=sum) - Solution:
             ALT./CRIT.                Match (max) W.0.6    High (min) W.0.05    Low (max) W.0.05    HACD (min) W.0.2    LACD (max) W.0.1    Rank
------------------------------------  -------------------  -------------------  ------------------  ------------------  ------------------  ------
Deuterium v2019.2.37.59 UCI_Elo 1500          362                  291                 347                 357                 335            7
      Arasan 21.3 UCI_Elo 1500                305                  266                 429                 494                 313            12
      Ufim v8.02 UCI_Elo 1500                 428                  280                 292                 475                 332            2
      CT800 V1.34 UCI_Elo 1500                333                  244                 423                 268                 739            6
   DanaSah 7.9 Human UCI_Elo 1500             332                  250                 418                 392                 577            11
 Stockfish 2019.07.14 UCI_Elo 1500            254                  263                 483                 493                 356            13
      Cheng 4.39 UCI_Elo 1500                 408                  325                 267                 442                 329            5
    Discocheck 5.2 UCI_Elo 1500               368                  276                 356                 250                 422            4
       Houdini 3 UCI_Elo 1500                 360                  263                 377                 393                 217            10
      Amyan 1.72 UCI_Elo 1500                 348                  239                 413                 399                 738            8
    Rhetoric 1.4.3 UCI_Elo 1500               359                  286                 355                 240                 664            3
       Hiarcs 14 UCI_Elo 1500                 342                  239                 419                 286                 440            9
      Cheese 2.1 UCI_Elo 1500                 432                  312                 256                 441                 432            1
According to the weights assigned, Cheese 2.1 is the best at rank #1, followed by Ufim and Rhetoric.

If you have weight values you would like me to run, post them and I will try to run them.

Next I will be testing these engines at 5000 positions.

Ferdy
Posts: 4075
Joined: Sun Aug 10, 2008 1:15 pm
Location: Philippines

Re: UCI_Elo

Post by Ferdy » Wed Jul 24, 2019 2:09 am

Ferdy wrote:
Tue Jul 23, 2019 1:53 pm
Next I will be testing these engines at 5000 positions.
The test at 5000 positions is complete.

UCI_Elo 1500 engine test results on FIDE Elo 1500
Test positions are taken from players with FIDE Elo 1450 to 1550

Code: Select all

                               Engine  Total  Match  High   Low  HACD  LACD
 Deuterium v2019.2.37.59 UCI_Elo 1500   5000   1891  1360  1749   426   284
              Ufim v8.02 UCI_Elo 1500   5000   2164  1360  1476   447   423
             CT800 V1.34 UCI_Elo 1500   5000   1627  1164  2209   329   914
             Arasan 21.3 UCI_Elo 1500   5000   1634  1178  2188   491   440
       DanaSah 7.9 Human UCI_Elo 1500   5000   1710  1195  2095   434   324
    Stockfish 2019.07.14 UCI_Elo 1500   5000   1304  1231  2465   443   390
              Cheng 4.39 UCI_Elo 1500   5000   2141  1527  1332   427   144
          Discocheck 5.2 UCI_Elo 1500   5000   1947  1308  1745   380   445
               Houdini 3 UCI_Elo 1500   5000   1704  1269  2027   427   174
          Rhetoric 1.4.3 UCI_Elo 1500   5000   1875  1330  1795   360   448
               Hiarcs 14 UCI_Elo 1500   5000   1798  1112  2090   375   685
              Cheese 2.1 UCI_Elo 1500   5000   2138  1532  1330   421   165
              Amyan 1.72 UCI_Elo 1500   5000   1803  1186  2011   460   551

Code: Select all

::Legend::
Total: Number of test positions from human games.
Match: Count of pos, where engine and human move are the same.
High : Count of pos, where engine move is stronger than human move.
Low  : Count of pos, where engine move is weaker than human move.
HACD : High Average Centipawn Difference, engine move is stronger 
       than human move by Centipawn amount, according to Stockfish 2019.04.16.
LACD : Low Average Centipawn Difference, engine move is weaker 
       than human move by Centipawn amount, according to Stockfish 2019.04.16.
Ranking result using TOPSIS. The rank-1 engine is the one that performed best in this test according to the given criteria and weights: Ufim, followed by DiscoCheck at rank 2 and Cheese at rank 3.

Code: Select all

TOPSIS (mnorm=vector, wnorm=sum) - Solution:
             ALT./CRIT.                Match (max) W.0.6    High (min) W.0.05    Low (max) W.0.05    HACD (min) W.0.2    LACD (max) W.0.1    Rank
------------------------------------  -------------------  -------------------  ------------------  ------------------  ------------------  ------
Deuterium v2019.2.37.59 UCI_Elo 1500         1891                 1360                 1749                426                 284            9
      Ufim v8.02 UCI_Elo 1500                2164                 1360                 1476                447                 423            1
      CT800 V1.34 UCI_Elo 1500               1627                 1164                 2209                329                 914            7
      Arasan 21.3 UCI_Elo 1500               1634                 1178                 2188                491                 440            12
   DanaSah 7.9 Human UCI_Elo 1500            1710                 1195                 2095                434                 324            10
 Stockfish 2019.07.14 UCI_Elo 1500           1304                 1231                 2465                443                 390            13
      Cheng 4.39 UCI_Elo 1500                2141                 1527                 1332                427                 144            5
    Discocheck 5.2 UCI_Elo 1500              1947                 1308                 1745                380                 445            2
       Houdini 3 UCI_Elo 1500                1704                 1269                 2027                427                 174            11
    Rhetoric 1.4.3 UCI_Elo 1500              1875                 1330                 1795                360                 448            6
       Hiarcs 14 UCI_Elo 1500                1798                 1112                 2090                375                 685            4
      Cheese 2.1 UCI_Elo 1500                2138                 1532                 1330                421                 165            3
      Amyan 1.72 UCI_Elo 1500                1803                 1186                 2011                460                 551            8
