Jakob Progsch wrote: ↑Fri Jul 16, 2021 8:57 pm
Why is everyone writing their own tuner anyway? Just export the positions/features and scores and throw them at tensorflow or so. After all a PSQT is just a tiny single layer network and using a well optimized framework will have those converge within minutes instead of the hours people often quote for their DIY tuners?
Using these python based frameworks also makes for very convenient experimentation when adding more terms etc.
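The quoted point that a PSQT is just a tiny linear model can be sketched without any framework at all. The sketch below fits one by plain batch gradient descent on synthetic data, using the sigmoid-mapped squared error that Texel-style tuners minimize. Everything here (feature layout, scaling constant, learning rate, the synthetic "results") is an illustrative assumption, not any particular engine's tuner.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 6 * 64      # 6 piece types x 64 squares (one colour, simplified)
N_POSITIONS = 5000

# Synthetic training set: sparse piece-square occupancy vectors and
# "results" generated from a hidden target PSQT.
X = (rng.random((N_POSITIONS, N_FEATURES)) < 0.05).astype(np.float64)
true_w = rng.normal(0.0, 50.0, N_FEATURES)

def sigmoid(score, k=400.0):
    """Map a centipawn-style score to an expected result in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-score / k))

y = sigmoid(X @ true_w)

# Plain batch gradient descent on the mean squared error of the
# sigmoid-mapped evaluation; a framework optimizer would do the same job.
w = np.zeros(N_FEATURES)
mse_init = np.mean((sigmoid(X @ w) - y) ** 2)
lr = 500.0
for _ in range(500):
    pred = sigmoid(X @ w)
    grad = X.T @ ((pred - y) * pred * (1.0 - pred)) / N_POSITIONS
    w -= lr * grad
mse_fit = np.mean((sigmoid(X @ w) - y) ** 2)
```

With a vectorized update like this the fit converges quickly even in plain NumPy, which supports the point that the hours-long runtimes come from the implementation, not the problem.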
Why is everyone writing their own engines anyway?
I wanted to understand how it works before handing it off to a black box (for me) like TensorFlow or Matlab/Octave. I also experimented with the implementation details, and that seemed to make a difference in the quality of the PSTs. For example, when calculating the error I suppressed the influence of positions that did not clearly fit either the endgame or the midgame table. But you're right, the not-invented-here syndrome comes to mind, of course.
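The phase-based suppression described above could be sketched like this: down-weight positions whose game phase is ambiguous, so each tapered table is mostly fitted on positions that clearly belong to it. The phase formula, the fade distance, and all function names are illustrative assumptions, not the actual implementation.

```python
def game_phase(material_phase):
    """Phase in [0, 1]: 1.0 = full midgame material, 0.0 = bare endgame.
    Uses the conventional 24-point scale (minor = 1, rook = 2, queen = 4)."""
    MAX_PHASE = 24
    return min(material_phase, MAX_PHASE) / MAX_PHASE

def phase_weight(phase, full_weight_distance=0.25):
    """1.0 for clearly-midgame (phase near 1) or clearly-endgame (phase
    near 0) positions, fading to 0.0 in the middle of the taper."""
    distance = abs(phase - 0.5)   # 0.0 = most ambiguous phase
    return min(distance / full_weight_distance, 1.0)

def weighted_error(predictions, results, phases):
    """Mean squared error where each position's contribution is scaled
    by how clearly it belongs to one of the two tables."""
    num = den = 0.0
    for pred, res, ph in zip(predictions, results, phases):
        w = phase_weight(ph)
        num += w * (pred - res) ** 2
        den += w
    return num / den if den else 0.0
```

A position exactly halfway through the taper contributes nothing to the error, so neither table gets pulled toward material configurations it will rarely score on its own.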
algerbrex wrote: ↑Fri Jul 16, 2021 3:14 am
What kind of Elo gain should I be expecting with killer moves added? (I know every engine is different; I'm just asking generally.) And depending on how small the Elo gain is, how many games would I need to run to see a result?
For me the main gain was that I'm using staged move generation, and having a set of 4 killer moves to try would often save me from generating all the non-captures, which saved some time.
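The staged scheme described above might look like the generator below: killers are tried after captures but before the quiet-move list is ever built, so a killer cutoff skips quiet generation entirely. The stage order is conventional, and the callback names (`generate_captures`, `generate_quiets`, `is_pseudo_legal`) are hypothetical placeholders for whatever the engine provides.

```python
def staged_moves(position, tt_move, killers,
                 generate_captures, generate_quiets, is_pseudo_legal):
    """Yield moves in stages; because this is a generator, a later stage
    runs only if the caller keeps asking for moves (i.e. no cutoff yet)."""
    seen = set()

    # Stage 1: transposition-table move, no move generation at all.
    if tt_move is not None and is_pseudo_legal(position, tt_move):
        seen.add(tt_move)
        yield tt_move

    # Stage 2: captures (typically ordered by MVV/LVA or SEE).
    for move in generate_captures(position):
        if move not in seen:
            seen.add(move)
            yield move

    # Stage 3: killer moves -- cheap to validate, and a cutoff here means
    # the full quiet-move list below is never generated.
    for move in killers:
        if move not in seen and is_pseudo_legal(position, move):
            seen.add(move)
            yield move

    # Stage 4: remaining quiet moves, generated only if still needed.
    for move in generate_quiets(position):
        if move not in seen:
            seen.add(move)
            yield move
```

The laziness of the generator is the whole point: when a killer at stage 3 causes a beta cutoff, `generate_quiets` is never called.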
abulmo2 wrote: ↑Sat Jul 17, 2021 12:43 am
When I remove a feature from Dumb, I get the following Elo differences:
Code: Select all
# PLAYER : RATING ERROR POINTS PLAYED (%)
1 dumb-1.9-dev : 0.0 4.8 6970.0 11000 63.4%
2 dumb-1.9-dev-no_razoring : -1.5 4.7 6947.5 11000 63.2%
3 dumb-1.9-dev-no_fnco : -7.7 5.0 6853.0 11000 62.3%
4 dumb-1.9-dev-no_killer : -10.3 4.5 6814.5 11000 62.0%
5 dumb-1.9-dev-no_lmp : -43.9 4.7 6295.0 11000 57.2%
6 dumb-1.9-dev-no_see : -51.3 4.6 6179.5 11000 56.2%
7 dumb-1.9-dev-no_aspiration : -69.9 4.9 5885.5 11000 53.5%
8 dumb-1.9-dev-no_nullmove : -84.5 4.6 5655.5 11000 51.4%
9 dumb-1.9-dev-no_history : -164.0 4.7 4413.5 11000 40.1%
10 dumb-1.9-dev-no_lmr : -174.5 4.8 4254.5 11000 38.7%
11 dumb-1.9-dev-hash_bmo : -210.0 4.9 3732.5 11000 33.9%
12 dumb-1.9-dev-no_hash : -348.7 5.8 1999.0 11000 18.2%
* fnco = frontier node cut off, bmo = best move only, hash = transposition table, see = Static Exchange Evaluation, lmp = late move pruning, lmr = late move reduction.
That's interesting. LMR and history both look like huge features in that list. But could the fact that removing either one of them (LMR or history) costs your engine 160+ Elo be because they depend on each other, so removing one breaks some kind of synergy between them? Or is each worth 160+ Elo in isolation?
Because when I tried history moves (I never tried LMR) in isolation it didn't help me much, and I removed it again because it didn't seem worth the added complexity in an engine that aims to stay simple. But when you want to reduce some "late" moves, then doing that based on their history value is probably a good idea. Do you do it like that?
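The combination asked about might be sketched like this: a conventional LMR formula whose reduction is nudged by the move's history score, so that reductions and history feed each other. The base formula, thresholds, and constants are illustrative assumptions, not taken from Dumb or any specific engine.

```python
import math

def lmr_reduction(depth, move_number, history_score, is_capture,
                  good_history=2000, bad_history=-2000):
    """Return the depth reduction for this move (0 = search at full depth)."""
    # Only reduce late quiet moves at sufficient remaining depth.
    if is_capture or depth < 3 or move_number < 4:
        return 0

    # Common base: grows logarithmically with both depth and move number.
    reduction = int(0.5 + math.log(depth) * math.log(move_number) / 2.0)

    # History adjustment: reduce moves with a good history less,
    # and moves with a bad history more.
    if history_score > good_history:
        reduction -= 1
    elif history_score < bad_history:
        reduction += 1

    return max(reduction, 0)
```

A coupling like this would also explain why removing either feature alone costs so much: without LMR the history scores order moves but prune nothing, and without history the reductions fall on well-ordered and badly-ordered moves alike.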