A fair fight

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson


Who would win?

Poll ended at Sat Dec 09, 2017 9:33 pm

Houdini 6 by >60%: 1 vote (4%)
Both between 40-60%: 8 votes (32%)
Alpha Zero >60%: 16 votes (64%)

Total votes: 25

Werewolf
Posts: 1349
Joined: Thu Sep 18, 2008 8:24 pm

A fair fight

Post by Werewolf » Fri Dec 08, 2017 9:33 pm

AlphaZero has done something very impressive, but the details of the Stockfish setup are disturbing me, to say the least. It seems to be a publicity stunt. I would like to see a fairer fight, and I propose:

i) Latest Houdini dev build (so Google can't practice against a publicly available version, as was possible with asmFish etc.)
ii) Fastest dual-CPU Xeon available, with HT off
iii) Plenty of hash (32 GB / 64 GB, whatever Robert wanted)
iv) A tournament-quality opening book for Houdini
v) A time control like G/90 + 30 s, so time can be allocated more intelligently.

AlphaZero's setup would be the same as in the match just played.

Personally I think this would level the match right up. But what do you think?

Leo
Posts: 996
Joined: Fri Sep 16, 2016 4:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: A fair fight

Post by Leo » Fri Dec 08, 2017 9:43 pm

If the Cerebellum/Brainfish people would release their program publicly, we could train it around the clock on a server and build our own engine book. I have played with Cerebellum and it goes very deep. Give SF Dev 128 GB or more of DDR4-4000 RAM, 30 overclocked cores, plus the 7-man EGTBs. Maybe use a liquid-nitrogen setup if that would help, however that works. The mere 3 losses with black would be eliminated and maybe turned into wins. The 25 losses would be chipped away at: many more would be drawn and some would be wins.

DeepMind, or whatever it is called, has accomplished something amazing and shaken up the computer chess world, but they have not conquered the chess engine world until further notice. Why in the world didn't they let AlphaZero run for a week and learn? I think Milos might be right: after 4 hours it maxed out. It was saturated.
Advanced Micro Devices fan.

Leo
Posts: 996
Joined: Fri Sep 16, 2016 4:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: A fair fight

Post by Leo » Fri Dec 08, 2017 10:00 pm

Hikaru Nakamura: "I think the research is certainly very interesting; the concept of trying to learn from the start without any prior knowledge, so certainly it's a new approach, and it worked quite well obviously with Go. It's definitely interesting. That being said, having looked at the games and understand[ing] what the playing strength was, I don't necessarily put a lot of credibility in the results, simply because my understanding is that AlphaZero is basically using the Google supercomputer and Stockfish doesn't run on that hardware; Stockfish was basically running on what would be my laptop. If you wanna have a match that's comparable, you have to have Stockfish running on a supercomputer as well."
Advanced Micro Devices fan.

Ras
Posts: 1859
Joined: Tue Aug 30, 2016 6:19 pm
Full name: Rasmus Althoff
Contact:

Re: A fair fight

Post by Ras » Fri Dec 08, 2017 10:30 pm

Leo wrote:Hikaru Nakamura: "If you wanna have a match that's comparable, you have to have Stockfish running on a supercomputer as well."
In other words, Nakamura doesn't have the slightest clue what's going on, technically.

Werewolf
Posts: 1349
Joined: Thu Sep 18, 2008 8:24 pm

Re: A fair fight

Post by Werewolf » Fri Dec 08, 2017 10:31 pm

I am staggered that anyone thinks AlphaZero would get over 60%.

kranium
Posts: 1964
Joined: Thu May 29, 2008 8:43 am

Re: A fair fight

Post by kranium » Fri Dec 08, 2017 10:42 pm

I guess to be really fair, you'd have to require the SF fishtest team to develop and test a new eval in 4 hours...

abulmo2
Posts: 332
Joined: Fri Dec 16, 2016 10:04 am
Contact:

Re: A fair fight

Post by abulmo2 » Fri Dec 08, 2017 10:46 pm

kranium wrote:I guess to be really fair, you'd have to require the SF fishtest team to develop and test a new eval in 4 hours...
After setting their eval terms to random values first of course.
Richard Delorme

Milos
Posts: 4046
Joined: Wed Nov 25, 2009 12:47 am

Re: A fair fight

Post by Milos » Fri Dec 08, 2017 10:56 pm

kranium wrote:I guess to be really fair, you'd have to require the SF fishtest team to develop and test a new eval in 4 hours...
No problem at all, just provide them access to Sunway TaihuLight to level the playing field.
Actually, SF could use LazyEval only; just use Sunway TaihuLight for 4 hours of automated Cerebellum development.

syzygy
Posts: 5000
Joined: Tue Feb 28, 2012 10:56 pm

Re: A fair fight

Post by syzygy » Fri Dec 08, 2017 10:58 pm

Werewolf wrote:But what do you think?
I think that it is completely irrelevant whether some AlphaZero prototype is or is not stronger than a perfectly tuned Stockfish setup.

What is important is that, apparently, the general level of play of current top engines can be reached (and most likely far exceeded) by an approach to computer chess that is completely different from how all leading engines have worked since Claude Shannon wrote the first paper on computer chess.

And this approach is not just completely different from a programming point of view... it does not even need any programming (apart from the initial programming of the AlphaZero software and a bit of cleverness to adapt it to the rules of chess). They just decide how much hardware they want to throw at a problem, they push a button, and some hours later the thing has programmed itself. That is really superhuman.
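The "push a button and it programs itself" loop can be illustrated with a toy sketch, nothing from the AlphaZero paper itself: repeated self-play generates results, and the results update a value estimate that in turn improves the play. Here chess is replaced by a tiny Nim game so the sketch stays runnable; all names and constants are illustrative.

```python
import random

# Toy stand-in for chess: Nim with 10 stones, players remove 1-3,
# whoever takes the last stone wins. (n % 4 == 0 is lost for the mover.)

def legal_moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def self_play_game(value, epsilon=0.2):
    """Play one game greedily against the learned values, with exploration."""
    n, player, history = 10, 0, []
    while n > 0:
        moves = legal_moves(n)
        if random.random() < epsilon:
            m = random.choice(moves)
        else:
            # Move to the position worst for the opponent (lowest value).
            m = min(moves, key=lambda mv: value.get(n - mv, 0.0))
        history.append((player, n))
        n -= m
        player ^= 1
    winner = history[-1][0]  # the player who took the last stone
    return history, winner

def train(games=2000, lr=0.1, seed=0):
    """'Push a button': self-play results update a value table, nothing else."""
    random.seed(seed)
    value = {}  # position -> expected result for the side to move
    for _ in range(games):
        history, winner = self_play_game(value)
        for player, n in history:
            z = 1.0 if player == winner else -1.0
            value[n] = value.get(n, 0.0) + lr * (z - value.get(n, 0.0))
    return value

value = train()
```

After training, the table rates the theoretically lost position n=4 negatively and the won positions positively, purely from game outcomes. The real system replaces the lookup table with a deep network and the greedy step with MCTS, but the shape of the loop is the same.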

Milos
Posts: 4046
Joined: Wed Nov 25, 2009 12:47 am

Re: A fair fight

Post by Milos » Fri Dec 08, 2017 11:04 pm

syzygy wrote:
Werewolf wrote:But what do you think?
I think that it is completely irrelevant whether some AlphaZero prototype is or is not stronger than a perfectly tuned Stockfish setup.

What is important is that, apparently, the general level of play of current top engines can be reached (and most likely far exceeded) by an approach to computer chess that is completely different from how all leading engines have worked since Claude Shannon wrote the first paper on computer chess.

And this approach is not just completely different from a programming point of view... it does not even need any programming (apart from the initial programming of the AlphaZero software and a bit of cleverness to adapt it to the rules of chess). They just decide how much hardware they want to throw at a problem, they push a button, and some hours later the thing has programmed itself. That is really superhuman.
It's not as easy as that. The actual question is how many (paid!) man-hours went into developing MCTS, optimizing it, optimizing the training algorithm, finding the optimal feature set, developing the specialized hardware that was used (TPUs), etc.
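The MCTS variant mentioned above is commonly described (in the AlphaGo/AlphaZero papers) as using a PUCT selection rule. A minimal sketch, with an illustrative exploration constant and dictionary layout of my own choosing:

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, AlphaZero-style PUCT.

    children: list of dicts with prior P, visit count N, total value W.
    Q = W/N exploits known-good moves; U decays as a move is visited,
    steering search toward high-prior, under-explored moves.
    """
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])
        return q + u
    return max(children, key=score)

children = [
    {"P": 0.6, "N": 0, "W": 0.0},   # high prior, never visited
    {"P": 0.4, "N": 10, "W": 3.0},  # visited, Q = 0.3
]
best = puct_select(children)  # the unvisited high-prior move is tried first
```

Tuning c_puct, the network priors, and the visit-count schedule is exactly the kind of (paid!) engineering effort the post is talking about.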
