Hardware vs Software


bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

diep wrote:
bhlangonijr wrote:Very interesting thread.

Let us start from the beginning. I'm wondering if there is a scientific method to verify this contest.

How can we quantify the "gain function" from benchmarking the chess-playing agent (hardware x and software y) in order to determine which factor had the major influence on playing strength? Just computing Elo gains in a kind of tournament?

Is the approach suggested by H.G. Muller sufficient to verify this?

Could we emulate the old hardware by playing the engines with a time/resource handicap?

Perhaps this question is more complicated than the simplicity of its statement suggests. As Bob mentioned, old software is tuned to old hardware, so it is not accurate to benchmark old software on today's hardware, and so on. In other words, maybe we should take into account testing the underlying idea of a given program from a given year rather than its actual implementation.

Whatever it takes, my bet is on the hardware.
Whatever you try or do, measuring the improvement of software versus hardware is going to be very difficult, because the improvement is not a constant, but dependent upon which search depth you use and which testing method.

Note that null-move only became widely applied after 1995, after Fritz became world champion with recursive null-move. The generic publication by Donninger in 1991 didn't yet trigger anyone to use it. It was Frans Morsch's world title that woke everyone up.
That's inaccurate. Null-move was first discussed by Don Beal in "Selective search without tears", published in 1986. The 1989 WCCC version of Cray Blitz used null-move, non-recursively, with R=1. The singular extensions paper by Hsu/Campbell also mentions null-move search. Fritz was far from the first, and the Donninger paper was not the one that got most of the early users going. Fritz pushed R=2, however. But even recursive null-move was in the version of Cray Blitz that has been made available through Carey and his historic computer chess program web site. We just used R=1 back then. Fritz got everyone using R=2.
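For reference, here is a minimal sketch of null-move pruning inside a negamax alpha-beta search, assuming a hypothetical Position interface (none of these method names come from Crafty or Cray Blitz). The null_allowed flag reproduces the restriction Bob describes later in the thread: nulls may recur along a branch, but never two in succession; a crude material guard avoids the zugzwang trap the Kaissa authors already noted in 1975.

```python
INF = 10**9
R = 2  # null-move depth reduction: Cray Blitz used R=1, Fritz popularized R=2

class Position:
    """Hypothetical position interface; a real engine supplies its own."""
    def legal_moves(self): return []
    def make(self, move): pass
    def unmake(self, move): pass
    def make_null(self): pass        # pass the turn to the opponent
    def unmake_null(self): pass
    def in_check(self): return False
    def has_non_pawn_material(self): return True  # crude zugzwang guard
    def evaluate(self): return 0

def search(pos, depth, alpha, beta, null_allowed=True):
    if depth <= 0:
        return pos.evaluate()
    # Null-move pruning: give the opponent a free move; if our position
    # still fails high against a reduced-depth search, a real move almost
    # certainly fails high too, so cut off immediately.
    if (null_allowed and depth > R
            and not pos.in_check() and pos.has_non_pawn_material()):
        pos.make_null()
        score = -search(pos, depth - 1 - R, -beta, -beta + 1, null_allowed=False)
        pos.unmake_null()
        if score >= beta:
            return beta  # fail-hard cutoff
    best = -INF
    for move in pos.legal_moves():
        pos.make(move)
        best = max(best, -search(pos, depth - 1, -beta, -alpha, True))
        pos.unmake(move)
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # normal beta cutoff
    return best if best != -INF else pos.evaluate()  # stub fallback, no legal moves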

In 2000 I tried a test: Diep at the end of 2000, after some major evaluation improvements (long after the world championship), against software from 1997-1998.

This was on a number of machines from Jan Louwman; equal hardware, in short. Similar books were used, so the 1998 software didn't have that disadvantage. In fact the book I used for Diep in the 2000 test was actually worse than what those engines had in 1997.

The 1997-1998 software got so totally annihilated that even I was in shock.

1999 was a year of big progress for most engines. The same progress most top engines made in 1999, I made in 2000. The improvements were especially in the endgame. The search-depth win I had already made in 1998, as Diep was probably the world's first engine to use R=3 in null-move; in fact I combined R=3 with R=2 and R=1. I posted that to RGCC at the time. That resulted in a publication by Ernst A. Heinz called "adaptive null-move".

Testing against software that is too old doesn't even make sense. A big problem is always that software is tuned to the hardware the programmer can get his hands on at home (regardless of which hardware you show up with at the world championship).

Testing against old engines or weak engines DOES make sense in itself, of course, but not for the purpose of determining the Elo win of software versus hardware, as that is not so clear.

Null-move was at the time a quantum leap in Elo.
It is not a quantum leap. I played some 32,000-game matches and posted the results here a while back. Removing null-move _and_ LMR costs about 100 Elo points (I don't remember the exact numbers, but they can be found in a thread here from late last year).
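Why 32,000 games matter: the statistical error of a match score shrinks with the square root of the game count, and only large samples can resolve differences of a few tens of Elo. A small self-contained sketch; the win/draw/loss numbers are invented purely for illustration:

```python
import math

def elo_from_score(p):
    """Elo difference implied by an expected score p (logistic model)."""
    return -400.0 * math.log10(1.0 / p - 1.0)

def match_elo(wins, draws, losses):
    """Elo estimate and ~95% confidence bounds from a match result."""
    n = wins + draws + losses
    p = (wins + 0.5 * draws) / n
    # Per-game variance of the score around its mean p.
    var = (wins * (1 - p) ** 2 + draws * (0.5 - p) ** 2 + losses * p ** 2) / n
    sigma = math.sqrt(var / n)  # standard error of the mean score
    return (elo_from_score(p),
            elo_from_score(p - 1.96 * sigma),
            elo_from_score(p + 1.96 * sigma))

print(match_elo(18000, 4000, 10000))  # ~(88.7, 85.1, 92.4): a ~7-Elo window
print(match_elo(180, 40, 100))        # same ratio, 100x fewer games:
                                      # ~(88.7, 52.9, 126.5), a ~74-Elo window
```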

The big problem with software is: "what do you want to measure with it?"

A second big problem: software from the start of the 80s got 1 or 2 ply of total search depth; give today's engines those same 1 or 2 ply, and the Elo difference will be at least 1500 points in favor of today's engines. But is it a fair comparison when you realize that back then those engines were written in assembler, and some dedicated computers ran 1 MHz processors with something like 2 KB or 4 KB of RAM plus some ROM?

Today's tables are, all together, a lot of data.

Also, it is questionable whether the CPUs from back then could have executed Diep's huge evaluation function of that era even a single time.

Search inefficiency was huge. Deep Thought could get something like half a million nps and searched 8 ply with it, winning the world title in 1989, mostly just for not having big bugs in its search, unlike the other supercomputer programs. If you look at those games: with today's software you can reach the same search depth in a lot fewer nodes, and on top of that those programs blundered entire pieces and allowed mates.

Cray Blitz blundered a piece against Deep Blue, allowing a simple tactic; even worse was Zugzwang's bug of grabbing a rook, allowing itself to get mated in 7.

My poor Diep from 1994, on a 486DX2, already had no problem avoiding that mate.

You should factor in the search bugs from those days. Those programs simply gave away pieces, and even on today's hardware, the software from those days would most likely crash when getting that much nps.

Even on today's hardware, software from back then would also do worse against today's human players, because a 2300 FIDE rated player today is a lot better than he was in 1990.

Another big bug in software from the 90s, Crafty excepted, is the branching factor. Not long ago I ran Schach 3.0 on a more modern processor, a 1 GHz P3 laptop. After 24 hours it still hadn't reached 12 ply.

That is with null-move, singular extensions, recapture extensions, and so on.

Branching factors of over 10.0, then exploding even further above 10-12 ply because one extension triggers another.
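To make "branching factor" concrete: the effective branching factor (EBF) is usually estimated from the ratio of total node counts between successive iterative-deepening iterations. A small sketch; the node counts are invented purely for illustration:

```python
# If an engine needs N(d) nodes for a depth-d search, EBF ~ N(d) / N(d-1).

def ebf(nodes_per_depth):
    """Ratios of successive iteration node counts."""
    return [nodes_per_depth[i] / nodes_per_depth[i - 1]
            for i in range(1, len(nodes_per_depth))]

old = [1_000, 12_000, 150_000, 1_900_000]  # EBF > 10: one extra ply costs >10x time
modern = [1_000, 2_000, 4_100, 8_300]      # EBF ~ 2 with null-move + LMR
print(ebf(old))     # [12.0, 12.5, 12.67]
print(ebf(modern))  # [2.0, 2.05, 2.02]
```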

Most software from back then is stuck with a hash table of 32 MB or so, as that was the DOS limit. It is 16-bit code, so even if today's hardware would run DOS, it is going to be ugly slow for that software, relatively speaking.

Most engines used 16-bit counters to count the nps. Even that was already a luxury.

So software from back then that "by accident" has a good branching factor rocks.

I remember how Diep was one of the few programs to speed up big time when moving from the 100 MHz P5 to the 200 MHz Pentium Pro.

The P5-133 was exactly a factor of 3 slower than the P6-200 at the time, because Diep used a combination of 32-bit code with 8-bit code.

All the assembler engines were 8-bit code mixed with 16-bit code. It took years for some to convert to 32 bits.

The next problem you'll have is the limit of 750 Elo points.

Rounding off Elo with the default K factor means that at a 750-point difference you have a 100% score.

So you simply can't measure more than 750 Elo points. That is a major problem.

If you'd give 1980s software, for example Psion 1.0 or Cyrus, today's hardware, you will simply win with 100% scores, which comes out as a 750-Elo-point difference, but the real gap will be a lot more.
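The underlying saturation is easy to see from the logistic Elo formula; whether the practical ceiling lands exactly at 750 depends on the rating tool's rounding conventions, but the shape of the curve is the point: once every game is won, larger differences are indistinguishable. A quick sketch:

```python
import math

def expected_score(elo_diff):
    """Expected score under the logistic Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# The curve saturates: beyond some difference almost every game is won,
# and a finite match can no longer tell, say, 800 apart from 1500.
for d in (400, 750, 1000, 1500):
    print(d, round(expected_score(d), 4))
# 400 -> 0.9091, 750 -> 0.9868, 1000 -> 0.9968, 1500 -> 0.9998
```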

It is not fair to make this comparison, IMHO.

Newer hardware not only allows new algorithms and new chess knowledge, it also means your tuning is different.

If we started a competition now with 4 KB RAM and 64 KB ROM, who would win? I'd bet on the Dutch team in that case, by the way :)

Vincent
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Hardware vs Software

Post by michiguel »

bob wrote:
diep wrote: Null-move was at the time a quantum leap in Elo.
It is not a quantum leap. I played some 32,000-game matches and posted the results here a while back. Removing null-move _and_ LMR costs about 100 Elo points (I don't remember the exact numbers, but they can be found in a thread here from late last year).
I do not think that is a good demonstration, because you removed null-move from a modern, strong engine. Adding or removing null-move in a more primitive engine may give a bigger difference. I recently dumbed down Fruit 1.0 by removing null move, and it looked like it lost ~240 Elo points. I did not play that many games and it was not a thorough test, but I doubt the difference is only 100 points. All this may depend on the engine and on what else you do in your search.

Miguel


Uri Blass
Posts: 10786
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

michiguel wrote:
bob wrote:
It is not a quantum leap. I played some 32,000-game matches and posted the results here a while back. Removing null-move _and_ LMR costs about 100 Elo points (I don't remember the exact numbers, but they can be found in a thread here from late last year).
I do not think that is a good demonstration, because you removed null-move from a modern, strong engine. Adding or removing null-move in a more primitive engine may give a bigger difference. I recently dumbed down Fruit 1.0 by removing null move, and it looked like it lost ~240 Elo points. I did not play that many games and it was not a thorough test, but I doubt the difference is only 100 points. All this may depend on the engine and on what else you do in your search.

Miguel
Some points:

1) I remember that for Crafty the value of LMR was 40 Elo while the value of LMR + null move together was 120 Elo, which means the value of null move for Crafty without LMR is 80 Elo.

2) I wonder if null move helps programs with bad evaluation more.
I can explain a reason why that result is not to be expected.

Null move prunes moves when there is no threat.
If your evaluation is good, you can expect to detect threats correctly more often, so the pruning will make fewer mistakes than it does when your evaluation is bad.

Uri
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

michiguel wrote:
bob wrote:
It is not a quantum leap. I played some 32,000-game matches and posted the results here a while back. Removing null-move _and_ LMR costs about 100 Elo points (I don't remember the exact numbers, but they can be found in a thread here from late last year).
I do not think that is a good demonstration, because you removed null-move from a modern, strong engine. Adding or removing null-move in a more primitive engine may give a bigger difference. I recently dumbed down Fruit 1.0 by removing null move, and it looked like it lost ~240 Elo points. I did not play that many games and it was not a thorough test, but I doubt the difference is only 100 points. All this may depend on the engine and on what else you do in your search.

Miguel

I simply did the best I could do. In today's Crafty, which I know for certain is not that far different from the published source of Cray Blitz in terms of general structure and search, removing just null-move by itself was maybe 40 Elo. Removing just LMR was the same. But removing both dropped the Elo by around 100. Since there is overlap (both reduce the depth of bad lines) I think that one will cover for the other if you just remove one, at least partially.
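For contrast with the null-move sketch earlier in the thread, here is a minimal sketch of the LMR half of that pair: reduce the depth of late, quiet moves and re-search at full depth only when the reduced search surprises us. It reuses the hypothetical Position interface and INF constant from the null-move example; the reduction conditions are illustrative, not Crafty's actual ones.

```python
# Minimal late-move-reduction (LMR) sketch. Both LMR and null-move shrink
# the depth spent on probably-bad lines, which is why their Elo gains overlap.

def search_with_lmr(pos, depth, alpha, beta):
    if depth <= 0:
        return pos.evaluate()
    best = -INF
    for i, move in enumerate(pos.legal_moves()):
        pos.make(move)
        # Reduce late moves at sufficient depth, but never moves that
        # give check (pos.in_check() now refers to the opponent).
        reduction = 1 if (i >= 4 and depth >= 3 and not pos.in_check()) else 0
        score = -search_with_lmr(pos, depth - 1 - reduction, -beta, -alpha)
        if reduction and score > alpha:
            # The reduced search beat alpha after all: verify at full depth.
            score = -search_with_lmr(pos, depth - 1, -beta, -alpha)
        pos.unmake(move)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # beta cutoff
    return best if best != -INF else pos.evaluate()  # stub fallback
```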

I can't speak for every program running. But anyone can look at what I do in Crafty and, with a good bit of effort, compare it to the released Cray Blitz source which, at best, was a 1991-1992 era program, and the similarities are significant. Normal alpha/beta. PVS. Aspiration. Hashing. Killer moves. The main difference is that one is 100% bitboard, while Cray Blitz was a partial bitboard program. The evaluations are not _that_ different. There are probably things in one that are not in the other and vice-versa, but the overall ideas (pawn structure, passed pawns, piece centralization, king safety, etc.) are present in both.

The only additions we have in Crafty, and there are not that many, really rely on fast hardware. We were not doing mobility in Cray Blitz until 1993 or so, when we figured out how to use the vector hardware to do it with very low cost. We are doing mobility in Crafty, and even weighting squares differently for the mobility calculation.

We could always quibble about whether removing something hurts one program more than another. But in the case of null-move, I'm convinced that the numbers I published are accurate, because they were based on 32K game matches, not just on a hundred games or so. Null-move is not magical, it has its advantages, but it _also_ has some significant costs that get overlooked by many...




bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Uri Blass wrote:
Some points:

1) I remember that for Crafty the value of LMR was 40 Elo while the value of LMR + null move together was 120 Elo, which means the value of null move for Crafty without LMR is 80 Elo.
Also remember that the inverse was true. Keeping null-move and removing LMR was also 40 Elo. So either one by itself is worth maybe 80, and both together are 120. I used null-move with R=1 at the 1989 WCCC event. Burt Wendroff sent me two papers to look at: one published by Don Beal (null move) and one a work-in-progress by Hsu/Campbell (singular extensions). We both looked at them and concluded that null-move would be easy to add (this happened about a month before the 1989 WCCC event in Alberta), while singular extensions would likely introduce way too many bugs to be usable in such a short period of time.

So in our 1998 vs 2008 comparison, null move, which is often considered one of the most revolutionary ideas in computer chess (after minimax and then alpha/beta minimax), was already in use. Don Beal was using it in 1986 or so, so even going back 20 years the idea was present.

2) I wonder if null move helps programs with bad evaluation more.
I can explain a reason why that result is not to be expected.

Null move prunes moves when there is no threat.
If your evaluation is good, you can expect to detect threats correctly more often, so the pruning will make fewer mistakes than it does when your evaluation is bad.

Uri
Doesn't quite make sense to me since my evaluation really does not detect "threats" in the usual sense. And my evaluation today is fairly similar to what we were doing in 1998 Crafty and even in 1991 Cray Blitz...
Gerd Isenberg
Posts: 2251
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Hardware vs Software

Post by Gerd Isenberg »

bob wrote: So in our 1998 vs 2008 comparison, null move, which is often considered one of the most revolutionary ideas in computer chess (after minimax and then alpha/beta minimax), was already in use. Don Beal was using it in 1986 or so, so even going back 20 years the idea was present.
Adelson-Velskiy, G.M., Arlazarov, V.L., and Donskoy, M.V. (1975). Some Methods of Controlling the Tree Search in Chess Programs. Artificial Intelligence, Vol. 6, No. 4, pp. 361-371. ISSN 0004-3702. Reprinted (1988) in Computer Chess Compendium (ed. D.N.L. Levy), pp. 129-135. B.T. Batsford, London 1988, ISBN 0-387-91331-9.
2. The Order of Move Considerations:
A less trivial idea was that sometimes an extension of the game tree by introducing a dummy move can lead to a reduction of the search tree. In positions with material advantage (with respect to the limits) it was permitted to try a so-called "blank" move, in which one's own pieces are not moved. Certainly in positions with "Zugzwang" (where the side to move must weaken his position) this may lead to errors.

However, the standard of the program's play is still such that such errors do not essentially lower it. (We plan to develop in the future a procedure for recognizing "Zugzwang" positions, and to forbid a blank move in them.) The reduction of search takes place in this case, because the blank move proves to be a closing one and furthermore does not generate complex forcing variations.
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: Hardware vs Software

Post by diep »

Gerd Isenberg wrote:
Adelson-Velskiy, G.M., Arlazarov, V.L., and Donskoy, M.V. (1975). Some Methods of Controlling the Tree Search in Chess Programs.
That is all irrelevant. I have a paper on Paradise from 1979 here, and you could explain that as being null-move also.

Frans Morsch used recursive null-move, became world champion with it, and said so openly, so that was very convincing proof of it, and marketing for it.

Bob mentioned before doing non-recursive R=1 null-move. The claim in 2009 that he also did it recursively I'm going to ignore, as in 1994 I was checking the internet on a daily basis and never saw any evidence or discussion or claim about who invented what. Note that non-recursive null-move was already implemented in a commercial chess program at the start of the 80s. Amazingly, that programmer never claimed the invention, though he kicked butt with it.

As for the Elo win: 100 points for null-move in today's software is really a lot. I call that a quantum leap forward. Back then, at search depths of 4 ply or so, it was worth even more, of course.

One ply extra was so important in those days. Getting outsearched by just 1 ply was already deadly at the start of the 90s.

There were discussions back then about my claim that a good evaluation function mattered. The argument some made against it was: "it might cost 2 ply, and that might be too expensive".

A 2-ply difference in search depth is no big deal nowadays.

Vincent
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

diep wrote:
Bob mentioned before doing non-recursive R=1 null-move. The claim in 2009 that he also did it recursively I'm going to ignore, as in 1994 I was checking the internet on a daily basis and never saw any evidence or discussion or claim about who invented what.

As for the Elo win: 100 points for null-move in today's software is really a lot. I call that a quantum leap forward. Back then, at search depths of 4 ply or so, it was worth even more, of course.
You can ignore whatever you want. When I _started_ using null move, I used R=1 and allowed just one null-move in any path. Murray Campbell and Hsu discussed recursive (but not two successive) null-moves, using R=1, in their paper on singular extensions. They also discussed using R=2 briefly and said "this needs more testing..." I personally had completely forgotten that we used recursive null-move until I saw the code. I didn't keep a detailed changelog as I do today, so I have no idea when it was added, other than that it had to be prior to the date of the version in question. Based on version numbers, that is a 1992 or 1993 version of the program, right before we started to add singular extensions (which took over 2 years of debugging to get the serious flaws out).



The version of Cray Blitz that is on the internet uses R=1, as that was what we were doing throughout Cray Blitz, never having tested R=2 much at all. But if you look at the code, more than one null can appear in a branch, although two cannot appear in succession.

That program is 1991-era, which is getting close to 20 years old. I had forgotten about the "recursive" aspect (since CB is not written recursively anyway) until I helped Carey get the code working so he could post it. Fritz didn't come close to being the first with null-move. It could be that Kaissa was the first, as I do remember discussions with Donskoy on the subject of search. And I _know_ Beal used it in 1986...

BTW null-move R=2 +SUCKS+ at 4-5 ply depths. From actual experience in 1995 doing that, not from guesswork.
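Diep's R=1/R=2/R=3 mix and Bob's observation that R=2 hurts at 4-5 ply point at the same idea Heinz later published as "Adaptive Null-Move Pruning": pick the reduction from the remaining depth. A minimal sketch; the thresholds here are illustrative, not Heinz's exact conditions:

```python
# Depth-adaptive null-move reduction, in the spirit of Heinz's
# "Adaptive Null-Move Pruning"; thresholds are illustrative only.

def null_move_reduction(depth):
    """Pick R from the remaining search depth."""
    if depth >= 7:
        return 3   # lots of depth left: a big reduction is fairly safe
    if depth >= 4:
        return 2
    return 1       # near the leaves, a large R overlooks too many tactics

# Inside the search, the fixed R of the earlier null-move sketch becomes:
#   score = -search(pos, depth - 1 - null_move_reduction(depth), ...)
```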
Gerd Isenberg
Posts: 2251
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Hardware vs Software

Post by Gerd Isenberg »

diep wrote: That is all irrelevant. I have a paper on Paradise from 1979 here, and you could explain that as being null-move also.

Frans Morsch used recursive null-move, became world champion with it, and said so openly, so that was very convincing proof of it, and marketing for it.
I don't totally disagree, though I believe Chrilly's 1993 article (where he already mentioned that Frans Morsch had told him about recursive null-move in Fritz) had more impact than the 1995 title. Anyway, one should credit the Kaissa team for first using null-move inside a chess program, as reported in their 1975 paper.

Gerd
Uri Blass
Posts: 10786
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

bob wrote:
BTW null-move R=2 +SUCKS+ at 4-5 ply depths. From actual experience in 1995 doing that, not from guesswork.
You can only know that null move R=2 sucked in your own program at that time.
I am unsure about other programs.