Hardware vs Software

Discussion of chess software programming and technical issues.

Moderator: Ras

michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Hardware vs Software

Post by michiguel »

bob wrote:
michiguel wrote:
bob wrote:
Don wrote:
Dirt wrote:
Don wrote:
Dirt wrote:
bob wrote:First let's settle on a 10-year hardware period. The Q6600 is two years old. If you want to use that as a basis, we need to return to early 1997 to choose the older hardware. The Pentium 2 (Klamath) came out around the middle of 1997, which probably means the best was the Pentium Pro 200. I suspect we are _still_ talking about 200:1

This is not about simple clock frequency improvements; more modern architectures are faster for other reasons, such as better speculative execution, more pipelines, register renaming, etc...
Correct me if I'm wrong, but in moving to a time handicap you seem to be ignoring the parallel search inefficiency we were both just explaining to Louis Zulli. Shouldn't that be taken into account?
None of this will matter unless it's really a close match - so I would be prepared to simply test single-processor Rybka vs. whatever and see what happens. If Rybka loses we have a "beta cut-off" and can stop; otherwise we must test something a little more fair and raise alpha.
If the parallel search overhead means that the ratio should really be, say, 150:1, then I don't think Rybka losing really proves your point. Whether there should be such a reduction, and how large it should be, is the question I am asking.
So if Rybka loses with, say, a 32:1 handicap, you are saying that we should give her even less time to see if she still loses?
This is going around in circles. It is easy to quantify the hardware. I'd suggest taking the best of today, the Intel Core i7, and the best of late 1998. Limit it to a single chip for simplicity, but no limit on how many cores per chip. I believe this is going to be about a 200:1 time handicap to emulate the difference between the quad-core i7 from Intel and the best of 1998, which was the PII/300 processor.

For comparison, Crafty on a quad-core i7 runs at 20M nodes per second, while on the single-CPU PII/300 it ran at not quite 100K nodes per second. A clean and simple factor of 200x faster hardware over that period. (And again, those quoting Moore's law are quoting it incorrectly: it does _not_ say processor speed doubles every 2 years, it says _density_ doubles every 2 years, which is a different thing entirely. Clock speeds have gone steadily upward, but internal processor design has improved even more. Just compare a 2.0 GHz Core 2 CPU against a 4.0 GHz older processor to see what I mean.)

So that fixes the speed differential over the past ten years with high accuracy. Forget the discussions about 50:1 or the stuff about 200:1 being too high. As Bill Clinton would say, "It is what it is." And what it is is 200x.

That is almost 8 doublings, which is in the range of +600 Elo. That is going to be a great "equalizer" in this comparison. 200x is a daunting advantage to overcome. And if someone really thinks software has produced that kind of improvement, we need to test it and put it to rest once and for all...
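
As a rough check of that arithmetic, here is a minimal sketch, assuming the ~75 Elo per doubling of speed used elsewhere in this thread (the conversion factor is itself debated, so treat the output as a ballpark figure):

Code: Select all

/* Convert a raw speed ratio into an approximate Elo gain.
   ASSUMPTION: ~75 Elo per doubling of speed (debated in this thread). */
#include <math.h>
#include <stdio.h>

int main(void) {
    double ratio = 200.0;            /* 2008 vs 1998 hardware speed ratio */
    double doublings = log2(ratio);  /* log2(200) ~= 7.64 doublings */
    printf("%.0fx = %.2f doublings ~= %+.0f Elo\n",
           ratio, doublings, 75.0 * doublings);  /* prints ~ +573 Elo */
    return 0;
}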

I will accept that a program today running on 4 cores will see some overhead due to the parallel search. But I don't think it is worth arguing about whether we should scale back the speed because of the overhead. That is simply a software issue as well, as it is theoretically possible to have very little overhead. If the software can't quite use the computing power available, that is a software problem, not a hardware limit.
Then you have to accept that Fritz 5 is 622 Elo points below Rybka on current hardware. That is a bit more than the 600 points you estimate hardware provided in 10 years.

Miguel
I don't accept that at all. That's why I suggested we run a test rather than using ratings that are very old and out of date. How many games has Fritz 5.32 played _recently_ on the rating lists? That makes a huge difference, and its rating might even be better now, since it is still going to beat the top programs on occasion, and with them rated so much higher its rating would likely get dragged up as well.
What do you find so wrong about these data?

Code: Select all

CCRL 40/4 Rating List - Custom engine selection
388092 games played by 744 programs, run by 12 testers
Ponder off, General books (up to 12 moves), 3-4-5 piece EGTB
Time control: Equivalent to 40 moves in 4 minutes on Athlon 64 X2 4600+ (2.4 GHz)
Computed on January 10, 2009 with Bayeselo based on 388'092 games
Tested by CCRL team, 2005-2009, http://computerchess.org.uk/ccrl/404/

Rank                 Engine                   ELO   +    -   Score  AvOp  Games
1                   Fritz 5.32              2642  +13  -13  53.2%  -24.5  2132
Miguel
So let's run the test rather than speculating...


I have some Crafty versions that should be right for that time frame. Crafty 15.0 was the first parallel search version. I suspect something in the 16.x versions or possibly 17.x versions was used at the end of 1998. Crafty ran on a quad Pentium Pro early in 1998 when version 15.0 was done...
Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 10:01 pm
Location: Irvine, CA, USA

Re: Hardware vs Software

Post by Dirt »

bob wrote:On 4 cpus, the last time this was carefully measured and discussed here, I ran a _ton_ of tests and put the results on my ftp box. Martin F. took the data and computed a raw speedup of 3.4x on that data, even though I usually claim 3.1x in the general case. The hardware of today, in terms of instructions per second, is about 200x better than in 1998. That seems to be the only reasonable assumption one can work with. Earlier it seemed everyone was convinced that the hardware improvement was _far_ smaller than the software improvement. Now we have to quibble about parallel search overhead?

:)

Can't have it both ways. At worst this is a 25% error. More likely it is a 10-12% error. Do you believe that is significant when comparing hardware vs software???
I said hardware and software were close to equal, not that the hardware improvement was far smaller. Your best estimates seem to imply a hardware factor of 170 times. Why not just use that, instead of quibbling over it?
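
For what it's worth, the 170x figure appears to come from scaling the raw 200x hardware ratio by the measured parallel efficiency. A minimal sketch, assuming the 3.4x-of-4 speedup quoted above:

Code: Select all

/* Effective speed ratio after parallel search overhead.
   ASSUMPTION: 3.4x measured speedup on 4 cores (the figure quoted above). */
#include <stdio.h>

int main(void) {
    double raw_ratio = 200.0;  /* instructions per second, 2008 vs 1998 */
    double measured  = 3.4;    /* measured 4-core speedup */
    double ideal     = 4.0;    /* perfect 4-core scaling */
    printf("effective ratio = %.0f:1\n", raw_ratio * measured / ideal);  /* 170 */
    return 0;
}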
Uri Blass
Posts: 10682
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

bob wrote:
Uri Blass wrote:
bob wrote:
Dirt wrote:
bob wrote:First let's settle on a 10-year hardware period. The Q6600 is two years old. If you want to use that as a basis, we need to return to early 1997 to choose the older hardware. The Pentium 2 (Klamath) came out around the middle of 1997, which probably means the best was the Pentium Pro 200. I suspect we are _still_ talking about 200:1

This is not about simple clock frequency improvements; more modern architectures are faster for other reasons, such as better speculative execution, more pipelines, register renaming, etc...
Correct me if I'm wrong, but in moving to a time handicap you seem to be ignoring the parallel search inefficiency we were both just explaining to Louis Zulli. Shouldn't that be taken into account?
I don't see why. I used the same parallel search 10 years ago that I use today; the overhead has not changed.

The main point both Don and I have _tried_ to make is that given a certain class of hardware, one is willing or able to do things that are not possible on slower hardware. In Cray Blitz, we specifically made use of vectors, and that gave us some features we could use that would be too costly in a normal scalar architecture. So there are three kinds of improvements over the past 10 years.

1. pure hardware

2. pure software

3. hybrid improvements where improved hardware gave us the ability to do things in software we could not do with previous generations of hardware due to speed issues...
Maybe you use the same parallel search today that you used 10 years ago, but I think that others improved their parallel search, so I guess that better parallel search is a software improvement, unless Crafty was the best software at the beginning of 1999.

I also wonder how you can be sure that you used an efficient parallel search for more than 8 cores with Crafty, when you did not even have the possibility of testing on 8 cores in 1999.

The reason that I suggested using 8 cores for software of strength equivalent to the Fritz of January 1999 (when you suggested using the top hardware of today) is that I thought software of that time could not use more than 8 cores. If you insist on more than 8 cores, then I have no objection, provided that you also use software of the same time, and not something that I consider to be equivalent on 1 core (but not on more than 8 cores).

If you think that the old Crafty of January 1999 is the best software of January 1999 on big hardware, because it can use many processors efficiently, then I have no problem with using it in the test, or even with using better hardware for it if it can use it.
I do not believe _any_ commercial program has a better parallel search than what is in Crafty, and what has been in it for 10+ years. There have been changes made, but in 1998 there was no NUMA hardware (AMD started this in the x86 world), so the more recent NUMA-related stuff is completely irrelevant to the 1998 discussion, or even to today's Intel Core i7 processor... In that light, Crafty's parallel search today is almost identical to what it was in 1998.

Again, I have suggested a P2/300 single chip, since that was the best available at the end of 1998. And for today, the latest is the Intel Core i7. The raw speed difference between the two, using Crafty as a benchmark, is a factor of 200. If you want to measure a set of positions by time-to-solution rather than raw NPS, then today's hardware is around 170-175 times faster. Note this is a single-chip discussion, where in actuality I can put together far larger systems today than I could in 1998. I elected to keep this simple by using a single chip, which is actually a little unfair to the "hardware side".

The best chip of 1998 was a single core. The best of today is a quad-core. In 1998 you could buy a dual-chip P2/300 if you wanted; I had one. Today you can easily buy a quad-chip Core i7 box with 16 cores, which is yet _another_ factor of two. So we should maybe go with a factor of 340:1 rather than 170:1. And larger configurations are available from places like Sun, etc. So we could make that 1,000:1 if you want...


You are talking yourself into one hell of a deep hole here. I believe that with 340:1 time odds, we could take an old buggy gnuchess and give Rybka absolute fits...

I have not studied this much, but I started a test earlier today giving Glaurung 1+1 and Crafty 100+100 for the time controls. I completed 450 games before quitting, and the result was one draw, the rest wins. With 450 games, if Crafty were less than 600 Elo better than Glaurung 2, I would have expected about 1 loss out of every 64 games. For 800 I would expect 1 loss for every 256 games. One draw in 450 games (score-wise, one loss per 900) gives an idea of just what 100:1 does... I'm not sure this experiment will really be that interesting. And considering it should be either 170 or 340 depending on which hardware we consider, it only gets worse.
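
For reference, the standard logistic Elo model makes these expected frequencies easy to check; note that it gives roughly 1 point per 33 games at 600 Elo and 1 per 100 at 800, somewhat different from the rule-of-thumb 1-in-64 and 1-in-256 figures above, so all of these are ballpark numbers. A minimal sketch:

Code: Select all

/* Expected score of the weaker side under the standard Elo logistic model:
   E = 1 / (1 + 10^(diff/400)). */
#include <math.h>
#include <stdio.h>

int main(void) {
    double diffs[] = { 600.0, 800.0 };  /* Elo advantages discussed above */
    for (int i = 0; i < 2; i++) {
        double e = 1.0 / (1.0 + pow(10.0, diffs[i] / 400.0));
        printf("%+.0f Elo: weaker side scores %.4f (~1 point per %.0f games)\n",
               diffs[i], e, 1.0 / e);
    }
    return 0;
}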

I have not checked the SSDF to see where G2 (most recent version) compares to Rybka. But it had better be at least 600 worse, or this is not going to be so interesting, IMHO.

But I do think it would be interesting to assess Elo gain for hardware vs for software, just so we would know. I know the programmers want to take credit for most of the gains. But I'll still bet that the engineers are responsible for the biggest gain...
I do not agree with the other factor of 2 that you mention, because you could get a quad and not just a dual in 1998.

I also do not agree that today you can easily buy a quad-chip Core i7 box with 16 cores.

Testing organizations like CCRL do not use more than a quad, and based on the posts that I read, I see only a few people who use an octal (8 cores, not 16 cores).

It seems to me that 4 cores was the top hardware of 1998-1999, while 16 cores is the top hardware of 2008-2009, and I think that most software of 1998-1999 could not use 16 cores efficiently.


For your comment:
"And larger configurations are available from places like Sun, etc. So we could make that 1,000:1 if you want"

In this case you should also use some supercomputer for the 1998-1999 software (remember that Deeper Blue is from 1997, and I doubt you will find today something that is even 100 times faster).

Uri
Uri Blass
Posts: 10682
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

Don wrote:
michiguel wrote:
Don wrote:
Dirt wrote:
Don wrote:I'm sure the only 10-year time period that THEY would be interested in would have to include Rybka. I think most of their confidence is pinned on this single program, which is also their only hope, slight as it is.

To me it's more reasonable to include a more representative 10-year period, not one that saw the introduction of an uncharacteristically strong program. For them, a bad 10-year period might just barely include the introduction of brand-new hardware from Intel.

So the precise time period you settle on does matter - however I think 10 years is long enough to smooth out the bumps enough to not matter.
I'm not saying, and I think Uri has already disclaimed, that software improvements have always outpaced those of hardware. That this may be true mostly because of Rybka I don't deny. I do think that many people outside of the computer chess community underestimate the magnitude of the software improvements that have taken place over the years, which makes me a bit defensive on behalf of the software authors.
Yes, I agree completely on this. In the minds of many outsiders chess is a "brute force" hardware thing.
Rybka + today's hardware vs. Fritz 5 (top machine Dec 1998) + today's hardware means an Elo difference of 622 points, to come back to something Uri mentioned or suggested.

That is 622 Elo points in exactly 10 years purely due to software. Can we say that hardware is granting the same amount?
If doubling the speed is worth 75 Elo points, 600 Elo points means 2^8 = 256 times faster hardware is needed to account for this. If Bob is right about 1:200 (which means doubling the speed every 1 year 3-4 months, beating Moore's law), hardware is close to that. We can discuss 50 or 100 Elo points more or less here and there, but the conclusion will be that the improvements due to software and hardware are equal or very close, going hand in hand.
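
The same conversion can be run in the other direction, i.e., how much faster the hardware must be to account for a given Elo gain. A minimal sketch, again assuming 75 Elo per doubling:

Code: Select all

/* Speed ratio needed to account for a given Elo gain.
   ASSUMPTION: 75 Elo per doubling of speed, as used above. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double elos[] = { 600.0, 622.0 };  /* the figures quoted in this thread */
    for (int i = 0; i < 2; i++)
        printf("%+.0f Elo ~= %.0fx faster hardware\n",
               elos[i], pow(2.0, elos[i] / 75.0));  /* 256x and ~314x */
    return 0;
}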

Miguel
You guys always find a way to give Rybka a big advantage. I really don't believe a program developed 10 years ago should be expected to run well on hardware that would not exist for another 10 years. Even if you just consider that it was compiled with a 10-year-old compiler with the wrong optimizations for this platform, and that it wasn't designed for 64-bit computers because they were not common enough, that's a pretty big hit. (I suppose you could consider compiler quality a software issue, but not compiler-based optimizations.)

I seriously doubt Fritz was tuned to run at a time control of 1 move every 10 hours either. You might claim this is a software issue but it's a tuning issue. Even with these factors I have no doubt that Rybka would still be much stronger.

And despite the math you are using, I do not see Rybka at 1 second beating Fritz at 200 seconds. I would definitely have to see this.
If Fritz does not run well at a time control of 1 move in 10 hours, then it is certainly a software issue.

You are free to try to find a better software from that time.

The speed difference of the SSDF hardware is not 200:1.

I do not remember the exact numbers, but I remember that the speed differences were something like:

K6/450 : P200   2.5:1
A1200 : K6/450  3:1
Q6600 : A1200   10:1

so my estimate is 2.5 x 3 x 10 = 75:1

If you talk about 1 second for Rybka against 200 seconds for Fritz 5.32, then the result depends on the hardware.

Faster hardware can help Rybka perform better, and I do not claim that Rybka is going to win in this case (I did not bring up the 200:1 numbers).
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Hardware vs Software

Post by Laskos »

Uri Blass wrote:
If Fritz does not run well at a time control of 1 move in 10 hours, then it is certainly a software issue.

You are free to try to find a better software from that time.

The speed difference of the SSDF hardware is not 200:1.

I do not remember the exact numbers, but I remember that the speed differences were something like:

K6/450 : P200   2.5:1
A1200 : K6/450  3:1
Q6600 : A1200   10:1

so my estimate is 2.5 x 3 x 10 = 75:1

If you talk about 1 second for Rybka against 200 seconds for Fritz 5.32, then the result depends on the hardware.

Faster hardware can help Rybka perform better, and I do not claim that Rybka is going to win in this case (I did not bring up the 200:1 numbers).
Yes, 2^8 is pretty much what Rybka can stand against Fritz 5.32; this means some 550 points. But the real hardware improvement is 2^6, so Rybka will win at these time controls (64:1). In Rybka's case, the software improvement over Fritz 5.32 of 1999 is larger than the hardware improvement.
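
Putting that argument in numbers (a minimal sketch, assuming 70-80 Elo per doubling, an assumed range rather than a measured constant): a 64:1 time handicap is 6 doublings, worth roughly 420-480 Elo, which is less than the ~550-620 Elo software gap quoted for Rybka vs. Fritz 5.32.

Code: Select all

/* Elo equivalent of a 64:1 time handicap.
   ASSUMPTION: 70-80 Elo per doubling of speed (a debated range). */
#include <math.h>
#include <stdio.h>

int main(void) {
    double doublings = log2(64.0);  /* 64:1 time odds = 6 doublings */
    for (double epd = 70.0; epd <= 80.0; epd += 10.0)
        printf("64:1 at %.0f Elo/doubling ~= %+.0f Elo\n", epd, epd * doublings);
    return 0;
}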

Kai
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Laskos wrote:
Uri Blass wrote:
If Fritz does not run well at a time control of 1 move in 10 hours, then it is certainly a software issue.

You are free to try to find a better software from that time.

The speed difference of the SSDF hardware is not 200:1.

I do not remember the exact numbers, but I remember that the speed differences were something like:

K6/450 : P200   2.5:1
A1200 : K6/450  3:1
Q6600 : A1200   10:1

so my estimate is 2.5 x 3 x 10 = 75:1

If you talk about 1 second for Rybka against 200 seconds for Fritz 5.32, then the result depends on the hardware.

Faster hardware can help Rybka perform better, and I do not claim that Rybka is going to win in this case (I did not bring up the 200:1 numbers).
Yes, 2^8 is pretty much what Rybka can stand against Fritz 5.32; this means some 550 points. But the real hardware improvement is 2^6, so Rybka will win at these time controls (64:1). In Rybka's case, the software improvement over Fritz 5.32 of 1999 is larger than the hardware improvement.

Kai
I do not see where this "statement of fact" comes from. In 1998 the best hardware was a 300 MHz PII. That provided a limit to what we could do in an engine without sacrificing too much in terms of search depth. As the hardware has improved, it has given us the ability to do things we could not do before, without sacrificing so much depth that we killed tactical skill.

For those that don't get it, "software development" is not a hardware-independent activity. The hardware improvements lead to software improvements. I can't understand how this is getting overlooked; it is taught in every CS program I have seen.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Uri Blass wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
Dirt wrote:
bob wrote:First let's settle on a 10-year hardware period. The Q6600 is two years old. If you want to use that as a basis, we need to return to early 1997 to choose the older hardware. The Pentium 2 (Klamath) came out around the middle of 1997, which probably means the best was the Pentium Pro 200. I suspect we are _still_ talking about 200:1

This is not about simple clock frequency improvements; more modern architectures are faster for other reasons, such as better speculative execution, more pipelines, register renaming, etc...
Correct me if I'm wrong, but in moving to a time handicap you seem to be ignoring the parallel search inefficiency we were both just explaining to Louis Zulli. Shouldn't that be taken into account?
I don't see why. I used the same parallel search 10 years ago that I use today; the overhead has not changed.

The main point both Don and I have _tried_ to make is that given a certain class of hardware, one is willing or able to do things that are not possible on slower hardware. In Cray Blitz, we specifically made use of vectors, and that gave us some features we could use that would be too costly in a normal scalar architecture. So there are three kinds of improvements over the past 10 years.

1. pure hardware

2. pure software

3. hybrid improvements where improved hardware gave us the ability to do things in software we could not do with previous generations of hardware due to speed issues...
Maybe you use the same parallel search today that you used 10 years ago, but I think that others improved their parallel search, so I guess that better parallel search is a software improvement, unless Crafty was the best software at the beginning of 1999.

I also wonder how you can be sure that you used an efficient parallel search for more than 8 cores with Crafty, when you did not even have the possibility of testing on 8 cores in 1999.

The reason that I suggested using 8 cores for software of strength equivalent to the Fritz of January 1999 (when you suggested using the top hardware of today) is that I thought software of that time could not use more than 8 cores. If you insist on more than 8 cores, then I have no objection, provided that you also use software of the same time, and not something that I consider to be equivalent on 1 core (but not on more than 8 cores).

If you think that the old Crafty of January 1999 is the best software of January 1999 on big hardware, because it can use many processors efficiently, then I have no problem with using it in the test, or even with using better hardware for it if it can use it.
I do not believe _any_ commercial program has a better parallel search than what is in Crafty, and what has been in it for 10+ years. There have been changes made, but in 1998 there was no NUMA hardware (AMD started this in the x86 world), so the more recent NUMA-related stuff is completely irrelevant to the 1998 discussion, or even to today's Intel Core i7 processor... In that light, Crafty's parallel search today is almost identical to what it was in 1998.

Again, I have suggested a P2/300 single chip, since that was the best available at the end of 1998. And for today, the latest is the Intel Core i7. The raw speed difference between the two, using Crafty as a benchmark, is a factor of 200. If you want to measure a set of positions by time-to-solution rather than raw NPS, then today's hardware is around 170-175 times faster. Note this is a single-chip discussion, where in actuality I can put together far larger systems today than I could in 1998. I elected to keep this simple by using a single chip, which is actually a little unfair to the "hardware side".

The best chip of 1998 was a single core. The best of today is a quad-core. In 1998 you could buy a dual-chip P2/300 if you wanted; I had one. Today you can easily buy a quad-chip Core i7 box with 16 cores, which is yet _another_ factor of two. So we should maybe go with a factor of 340:1 rather than 170:1. And larger configurations are available from places like Sun, etc. So we could make that 1,000:1 if you want...


You are talking yourself into one hell of a deep hole here. I believe that with 340:1 time odds, we could take an old buggy gnuchess and give Rybka absolute fits...

I have not studied this much, but I started a test earlier today giving Glaurung 1+1 and Crafty 100+100 for the time controls. I completed 450 games before quitting, and the result was one draw, the rest wins. With 450 games, if Crafty were less than 600 Elo better than Glaurung 2, I would have expected about 1 loss out of every 64 games. For 800 I would expect 1 loss for every 256 games. One draw in 450 games (score-wise, one loss per 900) gives an idea of just what 100:1 does... I'm not sure this experiment will really be that interesting. And considering it should be either 170 or 340 depending on which hardware we consider, it only gets worse.

I have not checked the SSDF to see where G2 (most recent version) compares to Rybka. But it had better be at least 600 worse, or this is not going to be so interesting, IMHO.

But I do think it would be interesting to assess Elo gain for hardware vs for software, just so we would know. I know the programmers want to take credit for most of the gains. But I'll still bet that the engineers are responsible for the biggest gain...
I do not agree with the other factor of 2 that you mention, because you could get a quad and not just a dual in 1998.
Yes you could, but not a quad P2/300. You could get a quad Pentium Pro 200, but that was no faster than the dual P2/300.

I also do not agree that today you can easily buy a quad-chip Core i7 box with 16 cores.
This is one of those times where I really don't care what _you_ believe. I care about what I _know_ to be true. You can order 'em from Dell. From IBM. From Sun. The list goes on and on and on.

Testing organizations like CCRL do not use more than a quad, and based on the posts that I read, I see only a few people who use an octal (8 cores, not 16 cores).
Again, who cares? I ran on 8-processor machines 7-8 years ago in CCT events. I don't care what "a few people or a lot of people" use. We are talking about what is available now vs. then. I have _already_ suggested limiting this to a single chip, because today there are larger multiple-chip motherboards around than there were in 1998. To be absolutely fair and correct we should take the very best available in 1998 against the very best available today, where the margin is more like 1000:1.


It seems to me that 4 cores was the top hardware of 1998-1999, while 16 cores is the top hardware of 2008-2009, and I think that most software of 1998-1999 could not use 16 cores efficiently.
Exactly what are you basing that on? "Top" means "best", and the best is much more than 16 cores. And again, I don't care what "most software" of 1998 could do... I know what mine could and does do, which eliminates any speculation.



For your comment:
"And larger configurations are available from places like Sun, etc. So we could make that 1,000:1 if you want"

In this case you should also use some supercomputer for the 1998-1999 software (remember that Deeper Blue is from 1997, and I doubt you will find today something that is even 100 times faster).

Uri
"commercially available hardware". Nothing more, nothing less...
BubbaTough
Posts: 1154
Joined: Fri Jun 23, 2006 5:18 am

Re: Hardware vs Software

Post by BubbaTough »

bob wrote:
BubbaTough wrote:
bob wrote:
BubbaTough wrote:
bob wrote: I will accept that a program today running on 4 cores will see some overhead due to the parallel search. But I don't think it is worth arguing about whether we should scale back the speed because of the overhead. That is simply a software issue as well, as it is theoretically possible to have very little overhead. If the software can't quite use the computing power available, that is a software problem, not a hardware limit.
Hmmm. I think there is a big difference between a 50x speedup on 4 processors and 200x on 1. Blaming software for not overcoming alpha-beta's inefficiencies in utilizing multiple processors seems tangential. The fact is, if you take an old program and put it on new hardware, it does not get 200x faster, because it also cannot take advantage of the extra processors perfectly.

-Sam
The question is "what has hardware done" and "what has software done"???

On 4 cpus, the last time this was carefully measured and discussed here, I ran a _ton_ of tests and put the results on my ftp box. Martin F. took the data and computed a raw speedup of 3.4x on that data, even though I usually claim 3.1x in the general case. The hardware of today, in terms of instructions per second, is about 200x better than in 1998. That seems to be the only reasonable assumption one can work with. Earlier it seemed everyone was convinced that the hardware improvement was _far_ smaller than the software improvement. Now we have to quibble about parallel search overhead?

:)

Can't have it both ways. At worst this is a 25% error. More likely it is a 10-12% error. Do you believe that is significant when comparing hardware vs software???
I don't really have a position, and don't really want anything one way let alone two... and yes, I was just quibbling. Your implication that software may be keeping up with hardware over that period of time is impressive (even though you phrased it the other way around). After all, hardware is doubling capability every X years; the idea that software is also improving chess performance exponentially over that long a period is truly a testament to the incredible improvements that have been made in software. And given how immature chess programs still are, I see no reason for them not to continue this amazing level of achievement. Truly a fun area to work/play in.

-Sam
I think you misunderstood what I have been saying. I do not believe that software has come even _close_ to keeping up with hardware, in terms of Elo improvement. If I were guessing, I would suspect a 2/3 - 1/3 split, and that is probably optimistic. That is, if you assume that over the past 10 years programs have gained +600 Elo (not my number, someone else made that statement), then I believe that 400 came from hardware, 200 from software.

I have posted bits and pieces suggesting this over the past few months. Some think that ideas like null-move are worth +200. They are not. Or that LMR is worth 200. It is not. They are certainly all worth something, but not nearly what everyone believes. I know programmers don't want to hear that, but as a more impartial observer (even though I am obviously a long-time chess programmer) I am well aware of what hardware has done, having watched it from the late 60's to date...
Ahh, now I understand your thesis, and I understand why all your numbers and premises are slanted in the same direction :). In very few fields has software improvement over that length of time even come close to a 1/3 - 2/3 ratio with hardware, unless it came from a single AHA concept (which has not been the case in chess). So even if everything you are saying is completely accurate, and everyone else posting is wrong in challenging your assumptions, it is still an amazing testimony to the improvements in chess programming over the years.

-Sam
Zach Wegner
Posts: 1922
Joined: Thu Mar 09, 2006 12:51 am
Location: Earth

Re: Hardware vs Software

Post by Zach Wegner »

bob wrote:
Uri Blass wrote:I also do not agree that today you can easily buy a quad-chip Core i7 box with 16 cores.
This is one of those times where I really don't care what _you_ believe. I care about what I _know_ to be true. You can order 'em from Dell. From IBM. From Sun. The list goes on and on and on.
I was pretty sure that these aren't available yet (NDA'd, apparently). I was looking last week or so, and Intel hasn't released an i7 that supports multi-socket motherboards. Supposedly they will create a new socket type, like the Core 2's 771/775 (the i7 is now 1366). If you know a link where one is available, it would be appreciated...
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Zach Wegner wrote:
bob wrote:
Uri Blass wrote:I also do not agree that today you can easily buy a quad-chip Core i7 box with 16 cores.
This is one of those times where I really don't care what _you_ believe. I care about what I _know_ to be true. You can order 'em from Dell. From IBM. From Sun. The list goes on and on and on.
I was pretty sure that these aren't available yet (NDA'd, apparently). I was looking last week or so, and Intel hasn't released an i7 that supports multi-socket motherboards. Supposedly they will create a new socket type, like the Core 2's 771/775 (the i7 is now 1366). If you know a link where one is available, it would be appreciated...
I wasn't particularly talking about quad i7's, as there are only two so far and they are pretty new. But quad-chip quad-cores have been around a long time. And even 8-chip quad-cores from Sun, et al...

Again, since I had originally suggested a PII/300 and an i7 as late 1998 and late 2008, a single chip is the most direct comparison, staying away from exotica...