Sesse wrote: ↑Sat Dec 08, 2018 9:43 pm
Unless Google's production systems have changed a lot since I worked there a few years ago, it's highly unlikely you'll be able to take production binaries built inside Google and get them to run on a regular install. There's just so much from the production environment (Borg, Chubby, etc.) that you don't have on the outside. Similarly, there would be so many Google-specific libraries linked in there that giving out binaries would just be out of the question from a confidentiality perspective.
Google open-sources a lot of stuff, generally by untangling it from Google3 and packaging it up into something more standard for the outside. You'll have to hope for that.
So you also think the developers of LC0 would not be able to reverse engineer a Google product to gain information for the development of LC0.
Alphazero news
Moderator: Ras
-
- Posts: 3657
- Joined: Wed Nov 18, 2015 11:41 am
- Location: hungary
Re: Alphazero news
-
- Posts: 300
- Joined: Mon Apr 30, 2018 11:51 pm
Re: Alphazero news
I didn't say Google was devoted to open source. I said they open sourced a lot of stuff.
I don't actually know whether the internal TensorFlow is compatible with open-source TF these days. (It probably is, though.) The trained network would be useful, of course, but it's far from the entire story, and just a one-off binary dump wouldn't be too useful in the long run.
But of course they are never gonna do it, because one could actually run those NNs and realize they are not nearly as strong as they suggest in the publication...
Accusing someone of lying in a research publication is a pretty strong claim. I have to wonder, do you take the same stance towards other private engines? I'm not entirely sure why there needs to be so much hostility.
-
- Posts: 1479
- Joined: Mon Apr 23, 2018 7:54 am
Re: Alphazero news
But isn't one of the main points of a TF SavedModel that it is compatible and portable etc.? You can just give it to your friends to use.
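For what it's worth, that portability claim is easy to demonstrate with stock TensorFlow. A minimal sketch (the `TinyNet` module and the temp-directory export path are illustrative, not anything from DeepMind):

```python
# A minimal sketch of why a TF SavedModel is portable: the graph and the
# weights are serialized together, so the recipient only needs a stock
# TensorFlow install to run it -- no original source code required.
import tempfile

import tensorflow as tf

class TinyNet(tf.Module):
    def __init__(self):
        # Fixed weights so the example is deterministic.
        self.w = tf.Variable(tf.ones([4, 2]), name="w")

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

export_dir = tempfile.mkdtemp()
tf.saved_model.save(TinyNet(), export_dir)   # graph + weights on disk
restored = tf.saved_model.load(export_dir)   # no Python class needed here
out = restored(tf.ones([1, 4]))              # [[4., 4.]]
```

Whether DeepMind's internal checkpoints would export cleanly through this path is exactly the open question in the thread, of course.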
-
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am
Re: Alphazero news
They open source only for PR purposes. They are far less sincere about open source than Microsoft, which is in itself really ironic.
I don't actually know whether the internal TensorFlow is compatible with open-source TF these days. (It probably is, though.) The trained network would be useful, of course, but it's far from the entire story, and just a one-off binary dump wouldn't be too useful in the long run.
Why would something as basic as the neural network architecture model not be compatible between open-source and internal TF?
If that were really the case, it would be one more hell of an argument that anything Google open sourced was meant to gain PR or increase their revenue, and that it has nothing to do with the true spirit of open source.
The NN is everything. The rest could be written in a couple of days in TF with what was already available in that preprint and the AG0 paper. NPS doesn't mean a thing for actual verification of performance.
But of course they are never gonna do it, because one could actually run those NNs and realize they are not nearly as strong as they suggest in the publication...
Accusing someone of lying in a research publication is a pretty strong claim. I have to wonder, do you take the same stance towards other private engines? I'm not entirely sure why there needs to be so much hostility.
No other private engine runs a PR campaign for selling its online services (like cloud TPUs). Now that Google has actually realized that the performance of their cloud TPUs (v2) is on par with an RTX 2070 (yes, a 2070, not a 2080 Ti) that costs $500, and that Lc0 is almost on par with A0 (or maybe even stronger; we will never know, because the NNs are private), they finally decided to really publish something and give us a little bit more insight, and people instantly feel like everyone should be enormously grateful to them.
You don't work in Google's PR department, so why such a need to defend them? Are they some kind of holy cow, such that writing anything bad about them is forbidden?
And no, I don't hold the same stance towards other private engines, because none of them was created by a giant, mean corporation; plus, their performance is easily verifiable and in most cases already very well established.
-
- Posts: 1262
- Joined: Sat Jul 05, 2014 7:54 am
- Location: Southwest USA
Re: Alphazero news
clumma
Posts: 160
Joined: Fri Oct 10, 2014 8:05 pm
Location: Berkeley, CA
Re: Alphazero news
Post by clumma » Fri Dec 07, 2018 3:18 am
matthewlai wrote: ↑
Fri Dec 07, 2018 2:45 am
clumma wrote: ↑
Fri Dec 07, 2018 1:26 am
Can you help me locate the games AZ played against Brainfish? They don't seem to have their own file, and I don't see any identifying info in alphazero_vs_stockfish_all.pgn
Only games from the primary evaluation and TCEC openings have been released (no opening books).
D'oh!
Why?
I've been wanting to see AZ vs. BF since last year; the first thing I checked in this paper was whether you had tried it, and 99% of my excitement about it was that you did.
Also, the results look really weird. White wins went down, but Black wins went up??
-Carl
Geonerd
Posts: 44
Joined: Fri Mar 10, 2017 12:44 am
Re: Alphazero news
Post by Geonerd » Fri Dec 07, 2018 3:51 am
IanO wrote: ↑
Fri Dec 07, 2018 1:19 am
Even more exciting: they released the full game scores of the hundred-game matches for all three games: chess, shogi, and go!
https://deepmind.com/research/alphago/a ... resources/
Thank you for the link!
Well... a very, very interesting development... Following up on last week's AlphaZero revival (see YouTube) during the controversial World Championship match between Caruana and Carlsen... now, about one year to the day after the DeepMind group's claims of the "destruction of SF 8" (with no opening book), additional games and claims have emerged... Loads of new information to review and digest... (This may take some time.) I found it interesting that the data and article say they used KnightCap... Meep... and Giraffe as the learning examples for AlphaZero. (No one ever seems to mention the amazing and surprising finds NeuroGrapeFruit or NeuroStockfish.) Meanwhile, Leela0 and SF 10 have new versions out... This debate continues with no end in sight...


-
- Posts: 3657
- Joined: Wed Nov 18, 2015 11:41 am
- Location: hungary
Re: Alphazero news
For reverse engineering, you need to get the studied object working.
As you stated, without Google's systems, they do not work.
-
- Posts: 12038
- Joined: Mon Jul 07, 2008 10:50 pm
Re: Alphazero news
matthewlai wrote: ↑Sat Dec 08, 2018 4:17 am
glennsamuel32 wrote: ↑Sat Dec 08, 2018 3:21 am
Hello Matthew, nice to see you back after so long!!
Does this mean Giraffe will get some updates in the future?
Thanks! Afraid not! AlphaZero is taking up all my time these days, and it's a very exciting project with lots of uncharted territory ahead. AlphaZero is basically what I always wanted Giraffe to become... and then a lot more. I have never been this excited about computer chess in my whole life.
Congrats on all that you and Google are doing for computer chess. Looking forward to your future achievements.
-
- Posts: 3026
- Joined: Wed Mar 08, 2006 9:57 pm
- Location: Rio de Janeiro, Brazil
Re: Alphazero news
lkaufman wrote: ↑Fri Dec 07, 2018 7:42 am
That's plus 32 Elo. If the opponent was actually SF9 (does anyone know?), that would be about what I'd expect from SF10 under roughly TCEC conditions. So perhaps they are about equal, given $20,000 or so of hardware for each. So it's not yet clear that NN plus MCTS has surpassed normal alpha-beta in chess. What they have demonstrated is a good way to utilize the GPU for chess, as Lc0 is also doing. Perhaps there are better ways yet to be found.
I think it goes deeper than that. If someone, even someone whose knowledge and technical savvy you deeply respected, had told you a few years ago that they could get a program that was nearly one thousand times slower in NPS to compete with and even beat the best of the day on a PC, I am guessing you would have rolled your eyes at them. I know I would have.
Yet that is what AlphaZero has done, and we are even able to bring this to the home user's PC thanks to DeepMind's generosity with their knowledge, as well as the fantastic efforts of the Leela Chess community. In other words, this is not limited to some absurdly exotic hardware no one could ever hope to obtain. And I am not even commenting on the whole self-learning process, which is what the focus has been on.
What is more, to achieve this, you are looking at an incredibly evolved eval function (not precisely an eval function, but it helps illustrate the point) that has roughly 28 million values, compared to a few thousand at most for even the most sophisticated predecessors. In all the years I have seen discussions on the fight between smart searchers and fast searchers, I have never seen anyone come close to imagining how enormous a difference it would take, much less realize and prove it.
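As a rough sanity check on that figure, here is a back-of-the-envelope count, assuming the AlphaGo Zero-style tower described in the papers (20 residual blocks of two 3x3 convolutions with 256 filters each; the 119 input planes are the chess encoding from the AlphaZero preprint). Policy/value heads and batch-norm parameters are omitted, so the total lands a few million short of the "roughly 28 million" cited above:

```python
# Back-of-the-envelope parameter count for an AlphaZero-style network:
# an input convolution followed by 20 residual blocks, each containing
# two 3x3 convolutions with 256 filters. Heads and batch norm omitted.

def conv_params(k, c_in, c_out):
    """Weights of a k x k convolution (biases folded into batch norm)."""
    return k * k * c_in * c_out

FILTERS = 256
BLOCKS = 20
INPUT_PLANES = 119  # chess input representation in the AlphaZero preprint

input_conv = conv_params(3, INPUT_PLANES, FILTERS)
tower = BLOCKS * 2 * conv_params(3, FILTERS, FILTERS)
total = input_conv + tower

print(f"input conv:     {input_conv:>12,}")   # 274,176
print(f"residual tower: {tower:>12,}")        # 23,592,960
print(f"total (no heads/batch norm): {total:,}")
```

The tower alone is nearly 24 million weights, which makes the point: this is three to four orders of magnitude more tunable values than a classical hand-crafted evaluation.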
It is pure genius.
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."