xr_a_y wrote: ↑Sat Jun 27, 2020 9:27 am
You can use the Minic 2.38 unofficial release. All piece values can go from 0 to 2000.
Here: https://github.com/tryingsomestuff/Mini ... ter/Minic2
If that works, please send me your optimized settings!

Oops, my mistake, let me patch that. Won't be long.
Looking for uci engines with piece value options shown
Moderators: hgm, Rebel, chrisw
- xr_a_y
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Looking for uci engines with piece value options shown
- xr_a_y
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Looking for uci engines with piece value options shown
xr_a_y wrote: ↑Sat Jun 27, 2020 11:02 am
Oops, my mistake, let me patch that. Won't be long.

Done, you can now indeed take 2.38...
- BrendanJNorman
- Posts: 2526
- Joined: Mon Feb 08, 2016 12:43 am
- Full name: Brendan J Norman
Re: Looking for uci engines with piece value options shown
xr_a_y wrote: ↑Sat Jun 27, 2020 12:36 pm
Done, you can now indeed take 2.38...

I messed around a bit with Minic. I like it a lot.
The playing style and move/plan choices are unusual, but strong. Sort of like how Thinker used to be.
I wonder how many eval settings you could possibly open up (King Safety, Outposts, Pawn Structure, etc.) and whether you could implement an NPS limit in the UCI options for creating weaker personalities.
A lot of us would appreciate another interesting engine to spar against.
- xr_a_y
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Looking for uci engines with piece value options shown
BrendanJNorman wrote: ↑Sat Jun 27, 2020 6:02 pm
I messed around a bit with Minic. I like it a lot. The playing style and move/plan choices are unusual, but strong. I wonder how many eval settings you could possibly open up (King Safety, Outposts, Pawn Structure, etc.) and whether you could implement an NPS limit in the UCI options for creating weaker personalities.

I can open all search and eval settings if someone is interested ... but that will be a lot of parameters. That's why I chose to expose only those "style parameters" in the 2.37 release (and piece values for this experiment in 2.38). Let me know if you want to play with something. More or less everything here (https://github.com/tryingsomestuff/Mini ... Config.cpp) and there (https://github.com/tryingsomestuff/Mini ... Config.cpp) can become a GUI parameter ...
About weaker personalities: there are already UCI_LimitStrength / UCI_Elo available, or my own "Level" option. Those use a limited-depth search, disable some eval features, and add randomness to the move choice.
- Posts: 4833
- Joined: Sun Aug 10, 2008 3:15 pm
- Location: Philippines
Re: Looking for uci engines with piece value options shown
xr_a_y wrote: ↑Sat Jun 27, 2020 12:36 pm
Done, you can now indeed take 2.38...

Just a quick sample run. I de-optimized Minic's rook values to 400 (optimal is close to 500) and from there ran the optimizer. I limited the parameter to ±150 around 400cp.
I stopped the optimization via the option (--stop-best-mean-goal -0.85) once the best-score mean reached 0.85 or more. In every iteration the score is saved to the best-score history if it is good. The mean of the best scores is taken from the last 30 saved scores. The score is just (win + draw/2) / games. Every match in this run is only 2 games.
The best score goes down as the rook MG value decreases, even when the EG value increases; but once the MG value increases, the score increases. The match is only 2 games (this can be increased but takes more time), and we know that anything can happen in only 2 games; but since the mean score is based on the last 30 scores, it looks as if we are running a 30-game match.
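As a rough sketch (illustrative function names, not the actual spsa.py code), the scoring and the stop criterion described above amount to:

```python
def match_score(wins, draws, games):
    # Score from the test engine's perspective: (win + draw/2) / games.
    return (wins + draws / 2) / games

def should_stop(best_history, goal=-0.85, window=30):
    # Match scores are stored negated because the optimizer minimizes,
    # so --stop-best-mean-goal -0.85 means: stop once the mean of the
    # last 30 saved best scores is -0.85 or lower (85%+ for the engine).
    recent = best_history[-window:]
    return bool(recent) and sum(recent) / len(recent) <= goal
```

For example, a 1.5/2 match gives match_score(1, 1, 2) == 0.75 and is stored as -0.75.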
Command line:
Code: Select all
PS D:\python_spsa_tuning\spsa-master> python -u game_optimizer.py --iteration 2000 --stop-best-mean-goal -0.85 --stop-all-mean-goal -0.75 --stop-min-iter 50 | tee minic_rook_test.log
Code: Select all
Stop opimization due to good average best goal!
minimum = {'RookValueMG': {'value': 524, 'min': 250, 'max': 550, 'factor': 200}, 'RookValueEG': {'value': 436, 'min': 250, 'max': 550, 'factor': 200}}
Code: Select all
# optimizer_setting.yml
# Main section for the engine under test
test_engine:
  file: "./engines/minic/minic.exe"
  name: "test"  # Don't use a name with spaces
  proto: "uci"
  option:
    Hash: 64  # mb
  # Subsection for parameters to be optimized.
  # The value is only the initial value that the optimizer starts to work on.
  # The actual value sent to the optimizer is value/factor. If the factor
  # is low, the optimizer will only suggest smaller changes to the initial
  # value. If the parameter is not sensitive to changes, you can increase
  # the factor.
  parameter_to_optimize:
    # QueenValueOp: {value: 985, min: 950, max: 1050, factor: 100}
    # QueenValueEn: {value: 1022, min: 950, max: 1050, factor: 100}
    RookValueMG: {value: 400, min: 250, max: 550, factor: 200}
    RookValueEG: {value: 400, min: 250, max: 550, factor: 200}
    # BishopValueOp: {value: 350, min: 300, max: 360, factor: 100}
    # BishopValueEn: {value: 350, min: 300, max: 360, factor: 100}

# Main section for the engine with fixed settings, the opponent of test_engine
base_engine:
  file: "./engines/minic/minic.exe"
  name: "base"  # Don't use a name with spaces
  proto: "uci"
  option:
    Hash: 64
    RookValueMG: 400
    RookValueEG: 400

# Main section for the tournament manager
cutechess:
  file: "./cutechess/cutechess-cli.exe"
  option:
    # Common options applied to both engines.
    engine_option:
      tc: "0/5+0.05"
    cutechess_option:
      # Total games per engine match are games x rounds.
      # There are 2 matches that will be done per iteration.
      # These 2 matches will be done in parallel.
      # Each match is run with the given concurrency value.
      tournament: "round-robin"
      concurrency: 2
      games: 2
      repeat: 2  # Starting engine color is reversed.
      rounds: 1
      pgnout:
        file: "output_game.pgn"
        option: "fi"
      openings:
        file: "./startopening/2moves_v2.pgn"
        format: "pgn"  # or epd
        order: "random"
        plies: 10000
        start: 1
        policy: "default"
      adjudications:
        resign:
          movecount: 4
          score: 400
          twosided: true
        draw:
          movenumber: 40
          movecount: 4
          score: 5
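On the factor mechanic mentioned in the config comments: my reading (an assumption based on the comments, not the optimizer's actual code) is that the engine value and the optimizer's internal value are related like this:

```python
def to_optimizer_space(value, factor):
    # The optimizer internally works on value / factor.
    return value / factor

def to_engine_space(x, factor):
    # Suggestions are scaled back up, so a unit step in optimizer space
    # moves the engine parameter by `factor` points; a low factor therefore
    # yields smaller changes to the initial value.
    return round(x * factor)
```

With factor 200, RookValueMG 400 becomes 2.0 internally, and an internal suggestion of 2.5 maps back to 500cp.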
- hgm
- Posts: 27796
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Looking for uci engines with piece value options shown
I don't understand what you are doing. What scores are you averaging? Are you playing a variable version against the initial one all the time, to optimize its performance in such encounters? (This seems fundamentally wrong, because the initial one is flawed, and what you would learn is how to best exploit the flaws, which might actually deteriorate performance against opponents that do not have these flaws.)
How can the mean score start at 75% and rise to 95% while the Rook values are not significantly changing, and then drop again? What does 'best score mean' mean anyway? How can you have a mean of 30 scores before you have 30 iterations?
- Posts: 4833
- Joined: Sun Aug 10, 2008 3:15 pm
- Location: Philippines
Re: Looking for uci engines with piece value options shown
hgm wrote:
I don't understand what you are doing. What scores are you averaging?

The result of the engine vs engine match: test_engine vs base_engine, from test_engine's perspective. If test_engine scores 1.5 points in 2 games, its score is 0.75, but it is reported to the optimizer as -0.75.
hgm wrote:
Are you playing a variable version against the initial one all the time?

Yes. The test_engine handles the variable parameters and the base_engine has the fixed parameters. In this test, base_engine has rook values set at 400cp. The test_engine starts at 400cp too, but takes the parameters suggested by the optimizer as the iterations increase.
hgm wrote:
How can the mean score start at 75% and rise to 95% while the Rook values are not significantly changing, and then drop again?

That 75% is the result of a 2-game match: 1.5/2 or 0.75. It is at the beginning of the iterations, so there is nothing to average yet. Initially the two sets of parameters are close at 400cp, so anything can happen in that 2-game match. The number of games in a match is settable; in this particular test I use 2 games because I am curious to see what this optimizer will do when the cost function (the engine vs engine match result) is calculated from a small number of games.
hgm wrote:
What does 'best score mean' mean anyway?

Initially the current_score is set to 1.0 or 100%. That current_score is bad for the optimizer.
https://github.com/fsmosca/spsa/blob/62 ... psa.py#L27
https://github.com/fsmosca/spsa/blob/62 ... sa.py#L304
The optimizer tries to reduce it by changing the parameters of the test_engine, playing against the base_engine with fixed parameters, getting the score, and then negating it. If test_engine scores 1.5/2 or 0.75, it is sent to the optimizer as -0.75. Since -0.75 is better than the current_score of 1.0 (according to the optimizer, as it is a minimizer), it is saved in the best_score history. As the iterations increase, the entries in the best_score history also increase. If there are only 10 scores there, we take the mean of those 10 scores.
There is also another score history, which we call the all_score history because it saves whatever score there is in every iteration. So we have best_score and all_score histories. At the beginning the current_score is 1.0; once there are score histories, its value becomes the mean of the all_score history:
current_score = mean(all_score) over the last 30 scores, or over the available number of scores in the all_score history.
https://github.com/fsmosca/spsa/blob/62 ... sa.py#L301
The best_score history is updated if -(engine_vs_engine_match_score) <= current_score.
https://github.com/fsmosca/spsa/blob/62 ... sa.py#L456
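Putting the above together, a minimal sketch of the two histories and the current_score update (illustrative names; the real logic is in spsa.py at the links above):

```python
from statistics import mean

WINDOW = 30
all_score = []   # every negated match score, one entry per iteration
best_score = []  # only the scores that beat the running current_score

def report(match_result):
    # match_result is (win + draw/2) / games from test_engine's view.
    # current_score starts at 1.0 (worst case for a minimizer), then
    # becomes the mean of the last WINDOW entries of the all_score history.
    current_score = mean(all_score[-WINDOW:]) if all_score else 1.0
    negated = -match_result  # the optimizer minimizes
    all_score.append(negated)
    if negated <= current_score:
        best_score.append(negated)
    return current_score
```

A first 1.5/2 result is stored as -0.75 in both histories; a following 1/2 result (-0.5) enters all_score but not best_score, since -0.5 > -0.75.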
- xr_a_y
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Looking for uci engines with piece value options shown
When I try this with SPSA or PSO, or try to optimize any search parameter, I use 300 games of 1 or 2 seconds each. That is already a very, very noisy result. I really doubt you can achieve anything with only 2 games.
Using plain PSO I am able to get reasonable values within some hundreds of such iterations, but the process is really too slow ...
- xr_a_y
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Looking for uci engines with piece value options shown
I'm sorry, I had to repatch 2.38 again; another bug was inside. Please update if you want to go on with the Minic test.
- hgm
- Posts: 27796
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Looking for uci engines with piece value options shown
I am still not completely with you. Apparently the 'best score mean' is the average of a subset of the match scores. What is the criterion to qualify for that? You only take the scores of matches that are won? That cannot be, because in a 2-game match that must be at least 75%, and after you get many games the average drops below that. You take all 1-1, 1.5-0.5 and 2-0 results? Then it must initially have been very lucky to get to 95%, when the results are basically random and you expect about 50% 1-1, 33% 1.5-0.5 and 17% 2-0. (That makes an average best score of 66%, which is indeed what you converge to after 40 games or so, where it starts to forget its very lucky beginning.)
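The ~66% figure can be checked directly from the proportions stated above (taking them as the distribution of qualifying 2-game results):

```python
# Qualifying 2-game results and their rough proportions as stated above:
# 50% end 1-1 (score 0.5), 33% end 1.5-0.5 (0.75), 17% end 2-0 (1.0).
outcomes = {0.5: 0.50, 0.75: 0.33, 1.0: 0.17}

# The expected 'best score' is the probability-weighted mean: about 0.67.
mean_best = sum(score * p for score, p in outcomes.items())
```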
This looks an awful lot like a random walk to me, one that you stopped at the point where you happened to like the result. To draw any conclusions at all you would have to run at least 10 times as many iterations, to see whether the result stays near 500 or just fluctuates back to 400 (like the end-game value does).
With so few games per match I would not expect much better than a random walk since, if I understood the method correctly, the parameters it will try next are based on just the last two matches, which will tend to have widely different results, almost purely statistical noise.