New release: rofChade 2.2

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Ronald
Posts: 160
Joined: Tue Jan 23, 2018 10:18 am
Location: Rotterdam
Full name: Ronald Friederich

New release: rofChade 2.2

Post by Ronald »

Hi,

After a longer period of little activity, I got a boost to develop a new rofChade version after trying a retune with the "lichess-good" set discussed in the Texel tuning topic (viewtopic.php?f=7&t=71469#p807692): it gave an increase of around 30 Elo! So thanks to Fabian, Jon and Vivien for discussing the topic and making the position set available!

Version 2.2 also contains multiple other changes, which resulted in an estimated gain of 50-70 Elo single threaded. Because of some changes to the hash table and the multi-threading, the gain with multiple threads is hopefully a bit higher.

rofChade 2.2 can be downloaded from the website http://rofchade.nl under the "Download" tab. The version changes are listed under the "Releases" tab.

Thanks to everybody who takes interest in rofChade!
Dann Corbit
Posts: 12540
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: New release: rofChade 2.2

Post by Dann Corbit »

I like the logo on your site, with the two gargoyle-looking guys pondering their next move.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Damir
Posts: 2801
Joined: Mon Feb 11, 2008 3:53 pm
Location: Denmark
Full name: Damir Desevac

Re: New release: rofChade 2.2

Post by Damir »

Many thanks for the new rofChade, Ronald. :) :) :D
Graham Banks
Posts: 41423
Joined: Sun Feb 26, 2006 10:52 am
Location: Auckland, NZ

Re: New release: rofChade 2.2

Post by Graham Banks »

Thanks Ronald. :)
gbanksnz at gmail.com
fabianVDW
Posts: 146
Joined: Fri Mar 15, 2019 8:46 pm
Location: Germany
Full name: Fabian von der Warth

Re: New release: rofChade 2.2

Post by fabianVDW »

Ronald wrote: Fri Sep 06, 2019 6:15 pm Hi,

After a longer period of little activity, I got a boost to develop a new rofChade version after trying a retune with the "lichess-good" set discussed in the Texel tuning topic (viewtopic.php?f=7&t=71469#p807692): it gave an increase of around 30 Elo! So thanks to Fabian, Jon and Vivien for discussing the topic and making the position set available!

Version 2.2 also contains multiple other changes, which resulted in an estimated gain of 50-70 Elo single threaded. Because of some changes to the hash table and the multi-threading, the gain with multiple threads is hopefully a bit higher.

rofChade 2.2 can be downloaded from the website http://rofchade.nl under the "Download" tab. The version changes are listed under the "Releases" tab.

Thanks to everybody who takes interest in rofChade!
Hi Ronald, congrats on your progress! You seem to have gotten a gain similar to what I got with FabChess after retuning. Had you also been using the Zurichess dataset before that?

If you used my dataset (which was just the conversion of all the positions jdart provided into quiet positions), there are perhaps ways to improve its quality even further:
1. Use a better engine to replace every position with its qsearch leaf. As FabChess is a comparatively weak engine, a better engine should provide a more accurate quiet position for every given position.
2. I am not sure whether this is actually helpful or not, but when the qsearch principal variation follows the game, you get several copies of a position labelled the same. Perhaps sorting those duplicates out could help (see the sketch below).
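
As an illustration, a minimal sketch of such a deduplication in Python (assuming the dataset is a plain text file with one FEN plus result label per line, which may not be exactly how the real files are stored; the file names in the usage comment are hypothetical):

Code:

def dedupe_positions(in_path, out_path):
    """Keep only the first occurrence of each position.

    Positions are compared on the first four FEN fields (piece placement,
    side to move, castling rights, en passant square); the move counters
    and the result label are ignored for the comparison.
    """
    seen = set()
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            key = " ".join(line.split()[:4])
            if key and key not in seen:
                seen.add(key)
                dst.write(line)

# Example (hypothetical file names):
# dedupe_positions("lichess-good.epd", "lichess-good-unique.epd")

Comparing only the first four FEN fields means two copies that differ only in the move counters still count as the same position.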

Greetings, Fabi
Author of FabChess: https://github.com/fabianvdW/FabChess
A UCI compliant chess engine written in Rust.
FabChessWiki: https://github.com/fabianvdW/FabChess/wiki
fabianvonderwarth@gmail.com
xr_a_y
Posts: 1871
Joined: Sat Nov 25, 2017 2:28 pm
Location: France

Re: New release: rofChade 2.2

Post by xr_a_y »

Ronald, are you tuning things step by step or everything in one big run?
How many iterations (of which algorithm) does it usually take?
Ronald
Posts: 160
Joined: Tue Jan 23, 2018 10:18 am
Location: Rotterdam
Full name: Ronald Friederich

Re: New release: rofChade 2.2

Post by Ronald »

fabianVDW wrote: Sat Sep 07, 2019 12:19 pm
Hi Ronald, congrats on your progress! You seem to have gotten a gain similar to what I got with FabChess after retuning. Had you also been using the Zurichess dataset before that?

If you used my dataset (which was just the conversion of all the positions jdart provided into quiet positions), there are perhaps ways to improve its quality even further:
1. Use a better engine to replace every position with its qsearch leaf. As FabChess is a comparatively weak engine, a better engine should provide a more accurate quiet position for every given position.
2. I am not sure whether this is actually helpful or not, but when the qsearch principal variation follows the game, you get several copies of a position labelled the same. Perhaps sorting those duplicates out could help.

Greetings, Fabi
It's funny that our gains are similar; I also used the Zurichess dataset for tuning before.

1. I don't think you need the "best" qsearch leaf position, as long as the leaf position doesn't change the outcome of the original position, which any regular qsearch already guarantees.
2. In theory using a duplicate position has some effect: the evaluation error will be added to the total error multiple times, so the tuning process will tune the parameters a little bit more towards the duplicated position. In practice it will probably have no effect.
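
To make that concrete: the Texel tuning error is a sum of per-position squared errors, so a position that occurs k times contributes its term k times and is effectively weighted k times heavier. A minimal sketch of such an error function (the scaling constant K, the centipawn sigmoid and the evaluate callback are assumptions, not rofChade's actual code):

Code:

def expected_result(eval_cp, K=1.2):
    # Map a centipawn score to an expected game result in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-K * eval_cp / 400.0))

def texel_error(dataset, evaluate, K=1.2):
    """Mean squared error over (fen, result) pairs, result in {0.0, 0.5, 1.0}.

    A duplicated position adds its squared-error term once per copy,
    which is why duplicates act as a weighting of that position.
    """
    total = 0.0
    for fen, result in dataset:
        total += (result - expected_result(evaluate(fen), K)) ** 2
    return total / len(dataset)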

Because of the large Elo increase I'd like to experiment some more with the tuning process and the datasets. For instance: are all the parameters covered by the dataset, and how many times? For a knight piece-square table: how many positions in the dataset have a knight on h8, and so on.
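
As an illustration of such a coverage check, a small sketch that counts, square by square, how often a given piece occurs in a file with one FEN per line (the file layout and file name are assumptions):

Code:

from collections import Counter

FILES = "abcdefgh"

def square_counts(fen_file, piece="N"):
    """Count how often a piece (FEN letter, e.g. 'N' or 'n') appears
    on each square across a file containing one FEN per line."""
    counts = Counter()
    with open(fen_file) as f:
        for line in f:
            if not line.strip():
                continue
            placement = line.split()[0]          # piece placement field of the FEN
            for rank_idx, rank in enumerate(placement.split("/")):
                file_idx = 0
                for ch in rank:
                    if ch.isdigit():
                        file_idx += int(ch)      # run of empty squares
                    else:
                        if ch == piece:
                            sq = FILES[file_idx] + str(8 - rank_idx)
                            counts[sq] += 1
                        file_idx += 1
    return counts

# Example (hypothetical file name):
# square_counts("lichess-good.epd", "N")["h8"]  -> positions with a white knight on h8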
Ronald
Posts: 160
Joined: Tue Jan 23, 2018 10:18 am
Location: Rotterdam
Full name: Ronald Friederich

Re: New release: rofChade 2.2

Post by Ronald »

xr_a_y wrote: Sat Sep 07, 2019 6:13 pm Ronald, are you tuning things step by step or everything in one big run?
How many iterations (of which algorithm) does it usually take?
I tune all the parameters in one run. When I tune only a few parameters (for instance when adding a new element to the evaluation), I always get worse results than when tuning all the parameters.
At the moment I'm still using a straightforward increment/decrement of all the parameters; I'm not using a gradient yet. I'm also running over the whole dataset in each iteration, so it takes some time, even with my 16-core Threadripper.
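
For readers who don't know the method: the increment/decrement approach is a coordinate-wise local search over the parameter vector. A minimal sketch, where error stands for a full Texel-style error computation over the whole dataset and the step size of 1 is an assumption:

Code:

def tune(params, error, step=1):
    """Coordinate-wise local search: for each parameter try +step and -step,
    keep the change if the total error over the dataset decreases, and repeat
    full passes until no parameter can be improved any more."""
    best = error(params)
    improved = True
    while improved:
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                params[i] += delta
                e = error(params)
                if e < best:
                    best = e
                    improved = True
                    break              # keep the change, go to the next parameter
                params[i] -= delta     # revert and try the other direction
    return params, best

Every tried change needs a full pass over the dataset, which is why a single run takes a while even on many cores.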

With the Zurichess dataset it usually took 110-120 iterations to complete. With the lichess set, the 40th iteration gave the best results; more iterations resulted in worse play...

Because of the large gain I want to spend more time on the tuning process and the datasets.