Leveling The Playing Field

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

User avatar
Harvey Williamson
Posts: 2011
Joined: Sun May 25, 2008 11:12 pm
Location: Whitchurch, Shropshire, UK.
Full name: Harvey Williamson

Re: Leveling The Playing Field

Post by Harvey Williamson »

Zach Wegner wrote:
Gerd Isenberg wrote:I completely agree with Gian-Carlo, a matter of principle - world champs are about the strongest combination of hardware and software (including book). I have sympathies for "on site" requirement though ;-)

Gerd
So who exactly did support this? The only programmer I know of is H.G., and he didn't seem to be all that vocal about it. Harvey Williamson said he was involved. Doesn't really seem too fair if no programmers at all were consulted, but that should be expected...
I have seen emails from several programmers. It is not for me to name them, but there is a mixture of for and against.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Leveling The Playing Field

Post by bob »

CRoberson wrote:
bob wrote:They used to have the "on site" requirement for microcomputer entrants when the ICCA had two divisions. The longer the discussion continues, the more flaws will be exposed. That's why the _programmers_ voted to end the division idea several years ago. The ICCA resisted, for reasons unknown, but finally realized that the "division approach" was not workable.

Now they take several more steps backward by doing this...
I was there in 2002 when we voted for the change, but the discussion didn't
start out that way.

It started with people trying to redefine the term PC. I repeated the
age-old suggestion of "anything that fits on top of a desk". It is a reasonable,
open definition. But some others didn't like it due to Chrilly's work on
FPGA systems. When the attempt to rule out FPGA systems failed,
the tide turned to abolishing the divisions.

We discussed the historical data, which revealed there was no
correlation between size of hardware and winning the tournament.
PCs were just as competitive as mainframes and supercomputers, so
it didn't make sense to have separate tournaments. Then we voted
to drop the microcomputer championships and have the WCCC yearly.
I would not go _that_ far. Big iron dominated through the 90's. So for 20+ years, almost 30, big iron was indeed a big advantage. You need look no further than Chess 4.x on the Cyber 176, then Belle v1 with special hardware, then Belle v2 with the complete search in hardware, then Cray Blitz on a Cray supercomputer, and finally Deep Thought / Deep Blue. So faster is better. It doesn't necessarily translate into a winning advantage, but it is certainly an advantage, particularly if you can actually use the hardware effectively...
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Leveling The Playing Field

Post by bob »

George Tsavdaris wrote:
bob wrote: First, the 100 Elo claim is nonsense.
How do you know for sure?
Because I understand parallel search as well as anyone around. We've already been through this discussion once.
IMHO, the ones wanting this restriction are basically saying "I am not intelligent enough to develop a parallel/distributed search that works, and since I can't do it, I don't want anyone else to be able to use their fancy stuff that I don't know how to develop to be able to compete with them..."
That, or they just can't afford the money for such hardware.
Several programs are university projects. They have plenty of good hardware available. Others have gotten local companies or whatever to provide loaner hardware. I never bought a Cray in my life, for example...
User avatar
fern
Posts: 8755
Joined: Sun Feb 26, 2006 4:07 pm

Re: Leveling The Playing Field

Post by fern »

There are two different things to consider here:
a) if we are looking at how far a chess program can go, then it makes sense to use the fastest computer at hand and see what happens.
b) if we are thinking of the common PC user, i.e. the chess program customer, then it is clear these tournaments should be run on common, home-use computers, and not even the very best of them, but the average computer.

My best
Fern
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Leveling The Playing Field

Post by diep »

Zach Wegner wrote:
Gerd Isenberg wrote:I completely agree with Gian-Carlo, a matter of principle - world champs are about the strongest combination of hardware and software (including book). I have sympathies for "on site" requirement though ;-)

Gerd
So who exactly did support this? The only programmer I know of is H.G., and he didn't seem to be all that vocal about it. Harvey Williamson said he was involved. Doesn't really seem too fair if no programmers at all were consulted, but that should be expected...
One professor in Jerusalem slippers being in favour, and the rest against, doesn't make it a very serious attempt. Note that our Jerusalem-slipper professor is very consistent in the way he makes up his mind: he is always in favour of something, except when mathematical proof exists that it is not possible to keep it in that format. I wonder, though, whether he is capable of ever making it to a tournament outside the Benelux (the three-nation region of Belgium, the Netherlands and Luxembourg), as he has no car. So whatever is not reachable "per pedem ambulare" (Latin for on foot) is going to be tough.

The other one in favour, and who actually made the proposal to Levy, is an operator who got 2 CPUs from Intel for free, the only ones with an unlocked multiplier, which overclock. They are still $1400 at Pricewatch, which makes them 1900 euro apiece or so in Europe, if you can even get them here. By now that QX9775 is totally outdated, as the new generation is entering the scene and will be there by May. AMD is already there with Shanghai (goes up to 4 sockets) and Intel will soon be there too with a 2-socket CPU (which will be 16 logical cores in total, so not going to be allowed either).

So the only machine that 'by accident' has the 8 highest-clocked cores is that Skulltrail with nitrogen-cooled QX9775s clocked at 5.5 GHz or so, which our friend in the UK got for free from a sponsor.

Note that it didn't help him in the previous 2 serious events where he played Diep; Diep, on 4 cores, won both times from Hiarcs and is at 2-0.

Note I'm doing something totally wrong, as I never got hardware for free, and many others with me. It seems there is a certain elite with a second job for a specific employer where the dudes get everything for free.

Maybe I should try harder :)

Now, while you guys might like that hardware (8 x 3.2 GHz plus some overclock on top of it, using some seriously cooled liquid, preferably nitrogen), it is quite off-putting to me; first of all it is going to be some sort of outdated hardware you are forced to show up with, which additionally is also really expensive.

Furthermore, there are already many who have started building clusters. I myself bought a 16-node QM400 network with all the equipment needed. Very tough to get working, and as of now I lack the money to buy nodes for it (it has 0 nodes so far). The nodes I had planned were cheapo Black Edition AMDs (as those have unlocked multipliers) with some watercooling; I hoped to get the majority to 3.5 GHz or so. A watercooling kit is exactly 110 euro in the shop. You need unlocked multipliers for this.

Note that's nearly the same speed as a 3.2 GHz Nehalem for Diep (official benchmark on an Intel test machine).

For the price of a single nitrogen-cooled Skulltrail I can hands-down buy 8 such nodes. That's 32 cores in total. Of course it eats a tad more power than the Skulltrail (which in itself already eats far too much), but power is yet another discussion, one where even world leaders seem not to be very honest.

The network was not really cheap when I bought it. Can I get my money back from the ICGA, as I had bought it to join the world champs and win the title with it once Diep had improved?

Not that this hardware is going to be the fastest, but then it only happens once every 10 years or so that the fastest hardware wins the event.

I also feel that some are misinformed here about this proposal being an attack on Rybka. Right now, by accident, Rybka is strong, and by accident it had a fast SHARED MEMORY MACHINE last world championship. How to cover up NDA'ed hardware, huh? Let's skip that part for now; every few years we see a cover-up like that, only the N*SA (where * is an element of {A, C, {}}) seems to make a whole big soap opera out of it, unlike its Israeli and European counterparts.

It is a fact that US programs will usually show up with the biggest hardware, but it doesn't happen often that the US program with the best hardware is also the best program. It happened by accident in 2008; what happens in 2009 is already totally unclear. In fact I would reverse it. The Rybka team's reaction in declining this 8-core limit was very professional, even though the two events would benefit them a lot. First of all, if everyone is at 8 or fewer cores, the odds are very big that I'm at most at 8 x 2.5 GHz (maybe), and most would have a single-socket overclocked Nehalem (if they joined the event at all). So that just gives Hiarcs and Rybka 8 cores at nearly 4 GHz, with Shredder probably on a nitrogen-cooled 4 x 6 GHz Nehalem instead (which in practice is probably faster for Shredder than those 8 cores by a large margin). So the odds are biggest that Rybka would have the fastest hardware, at least on paper, in such an event. For a program whose passive style of chess makes it a big necessity to outsearch its opponents, that is of course a very nice thing to have.

Of course in the short term it is always tempting to 'vote' for what is best for yourself. In my case, for 2009, an 8-core event would of course be magnificent, as Diep is improving really rapidly; and as Rybka is strong now and probably has another 32-core / 64-thread NDA'ed Nehalem Xeon MP box by May 2009, those few Elo points (it is really fewer Elo points at a time control of 2 minutes a move than most people guess) could just turn the balance in each game; basically that's taking the book into account. If you have a great opening line AND good hardware, that's a very strong combination nowadays for not losing the game (and if the opponent messes up, of course you win). So I already assume Noomen will show up with good lines there. Don't forget that non-hardware factor.

A far bigger advantage than the hardware would be having more data points to modify the book in real time. With 2 events running nearly simultaneously, most will feel obliged to join both. In that case Jeroen has an extra 'correction' possibility. We know from past tournaments what that means. Both Jeroen Noomen and Alexander Kure had this advantage of being 'on site' (or on the phone) when this happened. What history shows is that this is a major advantage for these book authors. They usually just repeat a line that was successful a few hours ago, before the opponents have had time to recover from it (book authors far away are usually at work, or there are time-zone differences, or whatever), and now they would get 2 chances for that. As Alex isn't joining anymore, only Noomen can do that. The book authors of other engines usually are not there, and Erdo, who usually doesn't join an event unpaid (sometimes he manages to get himself sponsored for an event, for the same reason our UK friend got the CPUs for free), I'm not so sure he gets paid for the upcoming event, so no Erdo there. You are a pro or you aren't, and pros get paid, isn't it, Erdo?

As I had already mailed around, the proposal is not finished. I had asked the question: "what is a core?". It basically means that the new Nehalem - which isn't a factor of 2 faster than the old hardware; in fact it's about the same speed, 6.5% higher IPC for Diep than Core2 and, thanks to crappy compilers for it, only 13.5% higher IPC than AMD (IPC means instructions per cycle, a measure at equal GHz clock to calculate back from) - can only be used single socket, as it already counts as 8 cores. If we were to count a single-socket Nehalem as 1 CPU with 4 cores, then the logical result would be that an AMD GPU (a new one is about to be released now) has 1 core, as the rest splits itself into n 'manycores', and on most Nvidias you can only count the thread blocks as cores, as the rest is basically parallel execution triggered by 1 instruction. So all GPUs, and even Tesla, would be allowed in that case.
Tesla has 240 stream cores that can be used for computer chess.
Additionally, a GTX280 card is just 399 dollars at Newegg and a Tesla is 1000 dollars more, so these cards beat the Skulltrail by big factors in price.
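
(Rough back-of-envelope, just my own arithmetic on the figures above: take the ~13.5% per-clock Nehalem-versus-AMD difference for Diep and compare a 3.2 GHz Nehalem with a 3.5 GHz Black Edition AMD node:

Nehalem @ 3.2 GHz: 1.135 x 3.2 ≈ 3.63 "AMD-equivalent GHz"
AMD     @ 3.5 GHz: 1.000 x 3.5 = 3.50 "AMD-equivalent GHz"

so the Nehalem comes out only about 4% ahead, which is why I said earlier that such a 3.5 GHz AMD node is nearly the same speed as a 3.2 GHz Nehalem for Diep.)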

Anyway, it is all theoretical talk.

We had a triennial meeting a few years ago where we decided to go back to just 1 title and 1 tournament, and to make that tournament the open hardware platform.

Objectively, David is not even allowed to modify the format just like that without a triennial meeting first deciding on it, as a few years ago that same meeting designated this event as the open world championship with the title that belongs to it. Just stripping the world champs title away from it would be very bad.

I assume however, considering the immaturity of the proposal, that David himself knew better than anyone else that this proposal wouldn't make it.

The relevant question, therefore, is why it was sent around at all. When I received the email I had to laugh out loud. A lot of discussion, as usual, will now be needed to get rid of it, I assume.

There were, back then, many reasons to go to an open hardware platform. This after having had 1 world championship where both an 'open hardware' title and a micro title for single CPUs were at stake (back in 2001 or so?).
I still remember the Junior team's reaction from back then very well.

"Of course we go for the open title"

Pronounced in a manner that made it very clear they considered everyone not going for it a total idiot. Amir sure has some marketing sense.

From a marketing viewpoint, a lobotomized tournament that doesn't even allow microcomputers is totally pathetic.

Note that it is true that having the fastest microcomputer is a big advantage. Yet would you want to miss that chance to beat that supercomputer when it shows up that day in a world champs?

Please let Hydra join when it has the guts.

Besides, it gets outsearched pathetically by everyone on shared-memory PCs; I got reports that Toga reached 25 ply in the middlegame on that 24-core Dunnington (the same organisation again covering up that it was a 24-core Dunnington box, huh? Just because not many here realize how tough it is to "quickly" convert Toga to a cluster version). If we compare this 24-core SHARED MEMORY Toga box with Hydra's 64 FPGAs getting 20 ply in the middlegame, it is obvious which is better.

The thing is, you know, most chess program authors can only get good hardware for a world championship; they are best prepared for world championships and in most cases not at all for other events.

Now this proposal wants to end this 'good hardware' as well?

Finally, of course, splitting into 2 events is the worst of all. Knowing the ICGA, that means it will get sold. Suddenly we get to buy 2 tickets: 1 to get to the Olympiad, and one to get to some university nerd who overpays for hosting a computer chess world champs.

So instead of selling 1 event, the ICGA can suddenly sell 2 events. What might happen in that case is the Olympiad going to some Asian nation forever and the computer chess world champs to some European nation. This split-up happened in 2005 as well. It was bad. I would find such a split most disgusting if it happens.

Much better than all this is to just have 1 event, open hardware, together with the Olympiad and AT THE SAME LOCATION.

Probably without wanting it, Harvey now gets used for this political game. Whatever happens, Harvey gets the blame, which would be bad as well, as he's a nice guy. The only mistake he made a while ago was asking Intel for a Skulltrail and 2 QX9775s. He should instead have asked Intel for an NDA'ed 40-core box.

Vincent
Rémi Coulom
Posts: 438
Joined: Mon Apr 24, 2006 8:06 pm

Re: Leveling The Playing Field

Post by Rémi Coulom »

Hi,

This is a letter I have just sent to David:
Dear David,

I am surprised by your message about limiting computing power to 8 cores in the WCCC. I have received an email from an angry chess programmer about it.

Regardless of whether it is a good decision or not, the method of communicating it is very bad. First, as the ICGA programmers' representative, I think I should have been consulted about this rule beforehand. Also, this is how your letter reads to a programmer experienced in the wooden language of ICGA officials:

"Some programmers are frustrated because they cannot beat Rybka, and are lobbying behind the scene in order to limit her computing power. So, for plenty of good reasons why it is not fair to have clusters playing against ordinary PCs, the WCCC will be limited to eight cores. It turns out that the same very good reasons also apply to the Computer Olympiad. But wait: Jaap van den Herik is trying to win the Go tournament with the Dutch supercomputer, and big computer companies with a lot of money are trying to get advertisement out of Go programs. So we will find a reason to allow supercomputers in the Go tournament."

It comes across as rather ridiculous, and it is irritating a lot of people, as you can read in the Computer Chess Club:
http://www.talkchess.com/forum/viewtopic.php?t=25458
or the Hiarcs forum:
http://www.hiarcs.net/forums/viewtopic.php?t=2008

I would like to suggest that such a decision should be taken after a more open discussion with programmers.

Regards,

Rémi
Rémi
Erik Roggenburg

Re: Leveling The Playing Field

Post by Erik Roggenburg »

Just about every single form of racing has some sort of restrictions - NASCAR, Top-fuel dragsters, Indy cars, F1, etc. Why not chess? Is the WCCC supposed to reward the guy with the biggest hardware, or the guy with the best combo of book, engine, and tweaked out hardware?

So what if they limit to 8 cores? It isn't as though everyone will show up with identical hardware. Some will be OC'd out the yin-yang, so I think this will lead to true teams: Programmer, Book Cooker, Hardware Guru, etc.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Leveling The Playing Field

Post by bob »

I am not sure what you are referring to, but I do not believe, for a minute, that Rybka was running on a 40 core shared memory box. In the last ACCA event, they used their box and the output was very revealing as to what was happening. It certainly appeared to be a classic "split at the root" cluster program. Each node spat out at least one PV for every iteration. But the iterations were not synchronized, and we were getting conflicting scores from each node all over the place.

In one game, Crafty had played QxQ and there was only one way to re-capture. We were seeing scores kibitzed that showed Rybka being a queen down. When I started looking at the output carefully, it was clear that each node was given several moves (root moves) to search independently, and the ones searching the non-recapture move were all producing worthless information with scores of -9.x and foolish PVs that did not show the recapture. Analysis showed why. And it led to my conclusion that this hardware / software offers nothing at all until you get to some sort of human-assisted chess where the human looks at multiple PVs and chooses what he believes is the best move based on the scores he sees plus his own intuition.

So, in short, I don't believe they were using exotic NDA hardware, at least in the ACCA event. If so, it has to be the worst SMP parallel search I have ever seen, where one node kibitzed a 19 ply score and depth, followed by another node kibitzing a 22 ply score and depth, followed by another node kibitzing a 20 ply depth and score. And many of the scores were obviously produced by searching a move that would never be played and which would never produce a PV in a normal SMP-type parallel search algorithm...
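
Just to illustrate what I mean, here is a toy sketch in C++ (my own illustration, obviously not anyone's actual tournament code): hand each "node" a disjoint subset of the root moves and let it iterate and kibitz on its own clock, and you get exactly the kind of interleaved, contradictory output described above, including nodes stuck on the non-recapture moves reporting deep lines at roughly -9.xx.

#include <cstdio>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Hypothetical root move with a fixed "true" score in centipawns.
struct RootMove { std::string san; int score; };

// Stand-in for a real alpha-beta search: it just returns the stored score,
// pretending to have searched that root move to 'depth' plies.
static int search(const RootMove& m, int depth) { (void)depth; return m.score; }

static void nodeWorker(int nodeId, const std::vector<RootMove>& myMoves) {
    // Each "cluster node" iterates at its own pace, with no synchronization.
    for (int depth = 19; depth <= 22; ++depth) {
        const RootMove* best = &myMoves[0];
        for (const auto& m : myMoves)
            if (m.score > best->score) best = &m;
        // Unsynchronized kibitz: depths and scores from different nodes
        // interleave and contradict one another.
        std::printf("node %d  depth %2d  score %+5d  pv %s ...\n",
                    nodeId, depth, search(*best, depth), best->san.c_str());
    }
}

int main() {
    // After a queen trade with only one sane recapture, every other root move
    // stays a queen down, so a node handed only those moves keeps reporting
    // deep lines at roughly -9.xx with nonsense PVs.
    std::vector<std::vector<RootMove>> split = {
        { {"Rxd8", +30} },                   // node 0: owns the recapture
        { {"g3",  -905}, {"Kh1", -910} },    // node 1: junk root moves only
        { {"a4",  -915}, {"h3",  -920} },    // node 2: junk root moves only
    };
    std::vector<std::thread> nodes;
    for (std::size_t i = 0; i < split.size(); ++i)
        nodes.emplace_back(nodeWorker, static_cast<int>(i), std::cref(split[i]));
    for (auto& t : nodes) t.join();
    return 0;
}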

So what exactly are you talking about here with the NDA hardware comments???
lexdom

A Compromise?

Post by lexdom »

Nid Hogge wrote:It doesn't matter, the whole purpose is for them to handicap Rybka (or any other program out there that is going to beat them silly and make the WCCC completely irrelevant) in any possible way, so when they do win the tourney, they'll have something big and shiny to stick on their product boxes and websites. Just like the outright lying message on the hiarcs.com website: "HIARCS wins only major tournament of 2008 with ALL top chess software competing.." Yes.. Right!
I looked at the Hiarcs site, and maybe a compromise can be made. An alternative is a single tournament with two titles: one for "Open Champion" and the other for "Single-Computer Champion".

http://www.hiarcs.com/

HIARCS wins only major tournament of 2008 with ALL top chess software competing.

Recent Tournaments:
HIARCS top single-computer in 28th Dutch Open Computer Chess Championship, Leiden, The Netherlands, November 2008
HIARCS top single-computer in 16th World Computer Chess Championship, Beijing, China, October 2008
HIARCS wins 17th International Thüringer Computer Chess Championship, Germany, May 2008
HIARCS wins 17th International Paderborn Computer Chess Championship, Paderborn, Germany, December 2007
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Leveling The Playing Field

Post by diep »

Bob, ACCA was the cover-up. I had already loudly said around, and it nearly hit the press, that it was a shared-memory box at the world champs. Both Toga and Rybka and Hiarcs. Rybka and Toga, both so-called 'clusters', searched a zillion plies deeper there. Suppose you give the programmer of the engine the job: "cover it up for this program",

and he programs for 3 hours or so. Then the fastest way to do a so-called 'multinode' search over a network is just to split the first few moves over the nodes, which we all know is the worst, most horrible manner of splitting.

Further, Rybka had first choice. There were 3 engines operated by the same organisation: Rybka, Toga and Hiarcs. Hiarcs used Rybka's old box of 8 x 4 GHz (that's what Harvey told me; his own at home is 8 x 3.6 GHz, namely), and Toga used the 24-core Dunnington box (also suddenly a so-called "cluster", yet getting a lot of plies deeper, and of course no one has ever seen a 'cluster' version of Toga that can search 25 ply in the middlegame while its 4-8 core opponents got like 19-20 ply).

Toga wasn't covered up, though on paper it also was a 'cluster'. Yeah, one that by accident was the same size as a 24-core Dunnington.

Junior ran on a 24-core Dunnington box (they said so, I heard from the tournament hall), and whatever you say, even when they run on 'secret' hardware like they probably did in 2006, they never lie about it. They just refuse to say what it is.

The point is: Rybka would have run on that 24-core Dunnington box if they had wanted to. Yet they had something faster: a 40-core shared-memory box.

So the lame excuse, invented within 30 seconds, was to tell everyone that Toga and Rybka ran on a cluster. Yet you and I both know that was total baloney.
We're not going to see any time soon a cluster version of Toga that actually works on a cluster AND gets a 25-ply search, plies deeper than its 4-8 core opponents, hahahaha. They just do not understand parallel programming on a cluster very well, as they know very little about it.

The cover-up started weeks later. You know how this works in those organisations. There are IQ-100 guys who just make a bigger mess of things; it is called "plausible deniability". Suppose Intel or AMD (or whatever brand) gets angry. Starting with ACCA, where that Rybka really searched plies less deeply and showed very inconsistent mainlines, unlike the world champs version...

In any case that's not the issue here. The issue is a stupid proposal.
bob wrote:I am not sure what you are referring to, but I do not believe, for a minute, that Rybka was running on a 40 core shared memory box. In the last ACCA event, they used their box and the output was very revealing as to what was happening. It certainly appeared to be a classic "split at the root" cluster program. Each node spat out at least one PV for every iteration. But the iterations were not synchronized, and we were getting conflicting scores from each node all over the place.

In one game, Crafty had played QxQ and there was only one way to re-capture. We were seeing scores kibitzed that showed Rybka being a queen down. When I started looking at the output carefully, it was clear that each node was given several moves (root moves) to search independently, and the ones searching the non-recapture move were all producing worthless information with scores of -9.x and foolish PVs that did not show the recapture. Analysis showed why. And it led to my conclusion that this hardware / software offers nothing at all until you get to some sort of human-assisted chess where the human looks at multiple PVs and chooses what he believes is the best move based on the scores he sees plus his own intuition.

So, in short, I don't believe they were using exotic NDA hardware, at least in the ACCA event. If so, it has to be the worst SMP parallel search I have ever seen, where one node kibitzed a 19 ply score and depth, followed by another node kibitzing a 22 ply score and depth, followed by another node kibitzing a 20 ply depth and score. And many of the scores were obviously produced by searching a move that would never be played and which would never produce a PV in a normal SMP-type parallel search algorithm...

So what exactly are you talking about here with the NDA hardware comments???