Lyudmil Tsvetkov wrote:Was not the claim no one ever touches the code in Alpha, the only thing it knows is how to make legal moves?
The claim was that no one has touched it so far. BUT:
1) Re-trying the same thing on a more powerful NN still doesn't touch the 'code' (i.e. the learned info).
2) They don't have to keep doing that.
CheckersGuy wrote:NN obviously means neural network and DeepMind knows all about it. Or what do you think DeepMind is doing with their 50+ PhDs?
They have no clue.
You can not compare Shannon with David Silver.
Shannon is a true scientist, and this is obvious even from just looking at his picture.
Silver is just a random PhD-holder, a title especially devalued nowadays.
There are just very few true scientists in the world.
PhD-holders number in the millions, shovelfuls of them.
So, I can never expect a PhD-holder to achieve quite the same thing as a true, dedicated scientist.
That is why I can't endorse the Alpha project in any imaginable way: if anything, it is prejudicial to science.
CheckersGuy wrote:Yeah, 100 elo above Stockfish isn't anything like what they have accomplished with Go. However, I wonder how strong AlphaZero could get given more training and optimizations specific to chess. Since AlphaZero was built, basically, for any board game, I think there might be a lot of room for improvement.
Let's bet the improvements will be 0.0 in 5 years' time.
This is their peak.
You cannot achieve an engine much stronger than 2800 with such algorithms.
Where did you get that figure from? Any backup?
Because Alpha basically plays very primitive chess, its only strength is outcalculation, and especially outbooking.
That is why it picks lines where it can outcalculate the opponent, and not knowledge-based ones.
One thing is certain: you can never achieve perfect play by just tuning patterns, unless you tune those strictly exclusively, and they certainly did not do that.
That is why SF is currently at a low: its patterns are over-tuned, without exclusivity applied to them.
So if SF doesn't play "primitive" chess, why did it lose? If "primitive" chess is that much better, I might give it a try. Jokes aside, a neural network is not an opening book, so "outbooking" doesn't really mean anything in this context. Furthermore, what do you mean by outcalculation? AlphaZero looked at far fewer nodes than Stockfish and still won the games. The strength of AlphaZero is in its evaluation and selectivity, not brute force (a la Stockfish, Houdini...).
Lyudmil Tsvetkov wrote:One thing is certain: you can never achieve perfect play by just tuning patterns, unless you tune those strictly exclusively, and they certainly did not do that.
Have you not yet figured out that AlphaZero finds the patterns that matter? It finds them and then tunes them.
Stockfish only knows the patterns that programmers teach it.
AlphaZero discovers the patterns that matter itself.
Should AlphaZero, by your own reasoning, not be far superior to Stockfish?
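A toy sketch of the contrast being argued here, with invented feature names, weights, and data (nothing below is from either engine): a hand-coded evaluation fixes both the pattern and its value, while a learned evaluator starts from zero weights and tunes them from outcomes.

```python
# Stockfish-style: a programmer chooses both the pattern and its weight.
def handcrafted_eval(passed_pawns, open_files):
    return 2.0 * passed_pawns + 0.5 * open_files  # constants picked by humans

# AlphaZero-style, very schematically: weights start at zero and are tuned
# from game outcomes; no human decides how much a pattern is worth.
def tune_weights(features, outcomes, lr=0.05, epochs=500):
    w = [0.0, 0.0]  # learned, not hand-picked
    for _ in range(epochs):
        for x, y in zip(features, outcomes):
            err = w[0] * x[0] + w[1] * x[1] - y  # prediction error
            w[0] -= lr * err * x[0]              # gradient step per weight
            w[1] -= lr * err * x[1]
    return w

# Pretend these outcomes came from self-play; the tuner recovers weights
# close to 2.0 and 0.5 without anyone writing them in.
features = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
outcomes = [handcrafted_eval(p, o) for p, o in features]
print(tune_weights(features, outcomes))
```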
Lyudmil Tsvetkov wrote:Was not the claim no one ever touches the code in Alpha, the only thing it knows is how to make legal moves?
The claim was that no one has touched it so far. BUT:
1) Re-trying the same thing on a more powerful NN still doesn't touch the 'code' (i.e. the learned info).
2) They don't have to keep doing that.
You got me totally confused: what the hell is the NN? Is it code, a machine, a combination of patterns, or a self-learning oddity?
So, according to you, there is no code at all involved in Alpha?
If they touch it, then the Magic is gone, no self-learning any more.
If they just increase the hardware, that would probably not count as a new NN.
Or, do you want to say that the hardware is the NN?
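On the question of what the NN actually is: in the usual setup it is ordinary code (a fixed forward function) plus learned numbers (the weights), run on whatever hardware is available. A hypothetical mini-network with made-up weights, sketching how "self-learning" changes only the numbers, never the program text:

```python
import math

# The *code* of the network is this fixed forward function; the weights
# are just data passed into it. Training alters the data, not the code.
def forward(weights, inputs):
    w1, b1, w2, b2 = weights  # one hidden tanh layer, one linear output
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Made-up weights "before" and "after" some training:
weights_before = ([[0.1, -0.2], [0.3, 0.4]], [0.0, 0.0], [0.5, -0.5], 0.0)
weights_after  = ([[0.9, -0.1], [0.2, 0.8]], [0.1, -0.1], [1.5, 0.5], 0.2)

print(forward(weights_before, [1.0, 2.0]))
print(forward(weights_after,  [1.0, 2.0]))  # same code, different output
```

So "not touching the code" and "the NN keeps changing" are compatible: the forward pass stays fixed while the weight data is what learns.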
CheckersGuy wrote:Yeah, 100 elo above Stockfish isn't anything like what they have accomplished with Go. However, I wonder how strong AlphaZero could get given more training and optimizations specific to chess. Since AlphaZero was built, basically, for any board game, I think there might be a lot of room for improvement.
Let's bet the improvements will be 0.0 in 5 years' time.
This is their peak.
You cannot achieve an engine much stronger than 2800 with such algorithms.
Where did you get that figure from? Any backup?
Because Alpha basically plays very primitive chess, its only strength is outcalculation, and especially outbooking.
That is why it picks lines where it can outcalculate the opponent, and not knowledge-based ones.
One thing is certain: you can never achieve perfect play by just tuning patterns, unless you tune those strictly exclusively, and they certainly did not do that.
That is why SF is currently at a low: its patterns are over-tuned, without exclusivity applied to them.
So if SF doesn't play "primitive" chess, why did it lose? If "primitive" chess is that much better, I might give it a try. Jokes aside, a neural network is not an opening book, so "outbooking" doesn't really mean anything in this context. Furthermore, what do you mean by outcalculation? AlphaZero looked at far fewer nodes than Stockfish and still won the games. The strength of AlphaZero is in its evaluation and selectivity, not brute force (a la Stockfish, Houdini...).
It looked at far fewer nodes, but it had been MC-playing zillions of probabilistic games on each and every node: somehow those two statements fully contradict each other.
I am not that stupid; it is obvious they are lying, or at least don't know what they are talking about.
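For what it's worth, the published description of AlphaZero says its search runs no random playouts at all: each new leaf is evaluated once by the value network, so a low node count and Monte-Carlo-style tree search are not contradictory. A toy sketch of where the work goes under each scheme (all numbers invented, the "game" is a stub):

```python
import random

# Classic Monte-Carlo leaf evaluation: many random games per leaf.
def rollout_eval(state, playouts=100):
    return sum(random.choice([-1, 0, 1]) for _ in range(playouts)) / playouts

# AlphaZero-style leaf evaluation: one learned estimate, zero playouts.
def value_net_eval(state):
    return 0.0  # stand-in for a trained network's output

leaves = 800                 # e.g. simulations per move (invented figure)
playout_work = leaves * 100  # positions touched when each leaf is rolled out
value_work = leaves * 1      # one net call per leaf, no rollouts
print(playout_work, value_work)
```

Under this reading, the "zillions of probabilistic games" happened during training, not at every node during the match games.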
Lyudmil Tsvetkov wrote:One thing is certain: you can never achieve perfect play by just tuning patterns, unless you tune those strictly exclusively, and they certainly did not do that.
Have you not yet figured out that AlphaZero finds the patterns that matter? It finds them and then tunes them.
Stockfish only knows the patterns that programmers teach it.
AlphaZero discovers the patterns that matter itself.
Should AlphaZero, by your own reasoning, not be far superior to Stockfish?
So you, similarly to Harm, suggest Alpha has no programming code apart from the rules of chess?
Want to ask them to confirm that?
Whether the patterns are suggested by humans or found by the computer based on the guidelines of the primary code makes no difference at all as far as the exclusivity of evaluation features is concerned.
You cannot achieve much bigger progress unless your eval terms are fully exclusive of each other, i.e. non-redundant. Otherwise, you will be picking the wrong positions/best moves all the time.
SF and the rest of the top engines still manage to add a few Elo by just tuning blindly, but their luck will run dry at some point, as perfection sets ever higher requirements.
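A minimal sketch of the redundancy point above, with invented squares and terms: when two eval features partially overlap, the shared part gets counted twice unless the terms are made exclusive of each other.

```python
# Invented example: squares attacked by two pieces, used by two eval terms.
rook_squares = {"d1", "d2", "d3", "d4"}
queen_squares = {"d4", "e5", "f6"}

# Non-exclusive terms: each term scores its own set independently,
# so the shared square "d4" is rewarded twice.
mobility = len(rook_squares) + len(queen_squares)

# Exclusive (non-redundant) terms: the overlap is counted once.
exclusive_mobility = len(rook_squares | queen_squares)

print(mobility, exclusive_mobility)  # the redundant version scores higher
```

Blind tuning can shrink such double-counted weights on average, but it cannot remove the overlap itself; that is one way to read the exclusivity argument.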