carldaman wrote: ↑Sun Aug 09, 2020 7:12 pm
In principle, I can't see why a net can't be trained on the evaluation of a very optimistic and aggressive engine, such as CyberNezh or OpenTal, and then the resulting net would hopefully reflect that style. In practice, things could turn out differently, though.
I hope to eventually learn how to train a net, so I can train one using Nezh, as an experiment.
Nezh is an engine that goes out to play for a win in every game, taking the necessary risks and then some (see link below, as it's only available on lichess).
It depends heavily on whether you train with a lambda of 1 or less. Lambda = 1 means the net only tries to predict the evaluation of the position; the closer you set it to zero, the more it learns from the game result (of course, some randomness in the data is needed, like the temperature setting for Leela). Getting started with training a net for NNUE is much easier and quicker than training one for Leela. The Stockfish Discord has a lot of good resources on how to train, plus a dedicated help channel.
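As a rough sketch of what lambda does (the names here are illustrative, not any actual trainer's API), the training target is just a linear blend of the search evaluation and the game outcome:

```python
def training_target(eval_score, game_result, lam):
    """Blend the search evaluation with the game outcome.

    eval_score:  engine evaluation converted to an expected score in [0, 1]
    game_result: 1.0 win, 0.5 draw, 0.0 loss (from the side to move)
    lam:         1.0 fits only the eval, 0.0 fits only the result
    """
    return lam * eval_score + (1.0 - lam) * game_result

# lambda = 1: the net is fitted purely to the evaluation
print(training_target(0.7, 1.0, 1.0))  # 0.7
# lambda = 0: the net is fitted purely to the game result
print(training_target(0.7, 1.0, 0.0))  # 1.0
```

So a low lambda rewards whatever actually scored points in the training games, which is exactly why it matters for training on style versus truth.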
Thanks, it's quite useful to know about lambda. It should mean that one could train primarily on style, with less regard to the actual outcome (whether the style actually wins most games or not).
I do have access to discord, but I'm a little too busy for now. I'm eager to learn more about training once I get around to it.
carldaman wrote: ↑Sun Aug 09, 2020 7:41 pm
Thanks, it's quite useful to know about lambda. It should mean that one could train primarily on style, with less regard to the actual outcome (whether the style actually wins most games or not).
Oh! I just thought about how to do this:
Lie.
Lie to the NN: tell it that the moves played by the player you want to mimic have the best score and that all of their games were won, so the net will approach that style because it thinks it wins. We can finally teach engines to play losing moves; we just tell it they win in the training data.
Set Lambda to 0 and tell it all those nonsense moves are winning.
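A minimal sketch of that "lie" (the row layout and function name are made up for illustration; real NNUE training data lives in a binary format):

```python
# Each row: (position, search_score, game_result, mover). With lambda = 0
# the trainer fits only game_result, so relabeling every move by the
# mimicked player as a win pushes the net toward that player's choices.
def relabel_for_style(rows, mimic_player):
    relabeled = []
    for position, score, result, mover in rows:
        if mover == mimic_player:
            result = 1.0  # the lie: pretend this move led to a win
        relabeled.append((position, score, result, mover))
    return relabeled

rows = [("fen1", 0.55, 0.0, "Tal"), ("fen2", 0.40, 0.5, "Other")]
print(relabel_for_style(rows, "Tal"))
# [('fen1', 0.55, 1.0, 'Tal'), ('fen2', 0.4, 0.5, 'Other')]
```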
Except that it wouldn't (or shouldn't) be all nonsense moves, but just a bunch of risky, speculative, attack-minded moves - some good, some bad and many in-between.
Just add some randomization to the eval; it will be a heck of a lot easier than training it to play falsely ...
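The randomization idea is trivial to sketch (a hypothetical wrapper, not any engine's real API): jitter the static eval by a few centipawns so move choice varies from game to game.

```python
import random

def noisy_eval(static_eval_cp, spread_cp=20, rng=None):
    """Add uniform noise of +/- spread_cp centipawns to an evaluation."""
    rng = rng or random.Random()
    return static_eval_cp + rng.randint(-spread_cp, spread_cp)

# The jittered score always stays within spread_cp of the original.
score = noisy_eval(35)
assert 15 <= score <= 55
```

Note this only adds variety, not a direction: it makes play less deterministic, but it cannot steer the engine toward any particular style.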
MikeB wrote: ↑Sun Sep 06, 2020 5:49 am
Just add some randomization to the eval; it will be a heck of a lot easier than training it to play falsely ...
The point is to make it imitate someone's play, like Rodent's Karpov personality, which does a great job of playing like Karpov.
Or anybody, for that matter, without all the trouble of parameter tweaking to achieve it: you just feed the NN the moves it must play so it learns the style. Note that the GPT-3 NN already does this for writing style; as it was trained over the whole Internet, you could probably ask it to write something in the style of Mike Byrne and it would mimic your writing style (...you'd just need to specify it's from Talkchess.com).
How can we live in a world where an AI can write immersive fiction in the style of any writer but it can't play chess in someone else's style? Technology went wrong somewhere...
Kind of like ShashChess... but regardless of the initial eval of any given position, playing according to 'generalized' characteristics of a given famous player in all positions instead?
MikeB wrote: ↑Sun Sep 06, 2020 5:49 am
Just add some randomization to the eval; it will be a heck of a lot easier than training it to play falsely ...
I don’t think that will work. NN training ought not to be able to make sense of random data, or added noise.
Note the GPT-3 NN already does it for writing style, and as it was trained over the whole Internet, you could probably ask it to write something in the style of Mike Byrne and it would mimic your writing style.
There are nowhere near enough examples of any one player's style.
I don’t think that will work. NN training ought not to be able to make sense of random data, or added noise.
It would not work well - I agree. It seems like a useless exercise anyway, since the point of using a NN is to get to a more accurate version of the truth, while what they want is an engine that plays a certain kind of fiction with style. Good luck with that... - they can let me know how that works out for them ...
MikeB wrote: ↑Sun Sep 06, 2020 7:00 pm
It would not work well - I agree. It seems like a useless exercise anyway, since the point of using a NN is to get to a more accurate version of the truth, while what they want is an engine that plays a certain kind of fiction with style. Good luck with that... - they can let me know how that works out for them ...
So, you would seem to think that there is truly no 'style' in chess?
One of the more fascinating things about (human) chess is that there is often no single 'correct' road that leads to Rome.
A Tal can approach a specific position on, say, move 15 and take it in a different direction than a Capablanca or Karpov would, and both could get an edge they can nurse home according to their individual strengths.