hgm wrote:
Milos wrote: No, the point is, they are not actually capable of beating SF in fair and optimal conditions for SF,
'Fair and optimal conditions' meaning that SF should not have been forced to play the moves by itself, but some other entity (namely a book) should have been allowed to play the moves instead...
Milos wrote: which would mean they wouldn't have generated nearly as much publicity as they did in case of a dominant victory, therefore they used a totally immoral approach of crippling SF in every way possible that is not immediately obvious, and an absolutely unfair comparison, to achieve that marketing goal.
Yeah, sure. It is very crippling when you have to find your own moves, just as the opponent does. Or when you have to play at fixed time per move, just as your opponent does. In fact anything that doesn't rig the odds massively in your favor would be highly unfair. After all, Stockfish is the TCEC champion. How dare they subject it to the same conditions as the opponent!
The bias (and I'm sure not out of lack of knowledge) is to let a learning machine play 100 games at fixed 1'/move against an engine programmed to manage its own time over a full time control, and programmed to use books so that it plays openings chess players would call reasonable.
At least all the rating lists compute their Elo from matches using books or selected opening position sets, don't they? So where does the Elo given after this "match" come from, if SF without a book isn't rated anywhere either?
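For what it's worth, the only way to get an Elo number out of such a match is the standard logistic formula applied to the raw score. A minimal sketch (assuming the reported +28 =72 -0 result over 100 games and the usual 400-point logistic model):

[code]
import math

def elo_diff(score: float) -> float:
    """Elo difference implied by a match score fraction (0 < score < 1),
    using the standard logistic model the rating lists also use."""
    return -400.0 * math.log10(1.0 / score - 1.0)

# A0 vs SF as reported: 28 wins, 72 draws, 0 losses out of 100 games
wins, draws, losses = 28, 72, 0
score = (wins + 0.5 * draws) / (wins + draws + losses)  # 0.64
print(f"score = {score:.2f}, implied Elo diff = {elo_diff(score):+.0f}")
# -> roughly +100 Elo, but only under these exact bookless,
#    fixed-time-per-move conditions, which no rating list uses
[/code]

So roughly +100 Elo on paper, but the number is tied to exactly those conditions and can't be compared with list Elo measured with books or opening sets.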
The main bias is to rate the "tested" learning machine against maybe only 3 or 4 different opening lines in all 100 games (who can show there were any more than in the 10 most "beautiful" games released?) and then to conclude that A0 had already reinvented opening theory on its own?
And to make us believe that by showing some graphs about the probability of well-known theoretical opening lines occurring in A0's self-play only?
The plan, apparently, is to count on the reader misunderstanding this kind of "results" in a "paper".
In the ten games shown, A0 owed its good performance to having learned to beat SF in a few opening lines, repeated over 100 games.
Period.
If it hadn't managed to learn to win against SF at fixed 1'/move over 100 games repeating these few lines, what kind of learning machine would it be?
Any hash- and/or book-learning SF would have performed quite well too against a bookless SF that repeated these few lines over and over, unable to learn at all.
So what?
Peter.