Hardware vs Software

Discussion of chess software programming and technical issues.

Moderator: Ras

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Uri Blass wrote:
bob wrote:
diep wrote:
Gerd Isenberg wrote:
bob wrote: Also remember that the inverse was true. Keeping null-move and removing LMR was also 40 Elo. So either one, by itself, is worth maybe 80, and both together are 120. I used null-move R=1 at the 1989 WCCC event. Burt Wendroff sent me two papers to look at, one published by Don Beal (null move) and one a work-in-progress by Hsu/Campbell on singular extensions. We both looked at them and concluded that null-move would be easy to add (this happened about a month before the 1989 WCCC event in Alberta), while singular extensions would likely introduce way too many bugs to be usable in such a short period of time.

So in our 1998 vs 2008 comparison, null move, which is often considered one of the most revolutionary ideas in computer chess (after minimax and then alpha/beta minimax) was already in use. Don Beal was using it in 1986 or so, so even going back 20 years the idea was present.
Adelson-Velskiy, G.M., Arlazarov, V.L., and Donskoy, M.V. (1975). Some Methods of Controlling the Tree Search in Chess Programs. Artificial Intelligence, Vol. 6, No. 4, pp. 361-371. ISSN 0004-3702. Reprinted (1988) in Computer Chess Compendium (ed. D.N.L. Levy), pp. 129-135. B.T. Batsford, London. ISBN 0-387-91331-9.
2. The Order of Move Considerations:
A less trivial idea was that sometimes an extension of the game tree by introducing a dummy move can lead to a reduction of the search tree. In positions with material advantage (with respect to limits) it was permitted to try a so-called "blank" move in which one's own pieces are not moved. Certainly in positions with "Zugzwang" (where the side to move must weaken his position) this may lead to errors.

However, the standard of the program's play is still such that such errors do not essentially lower it. (We plan to develop in the future a procedure for recognizing "Zugzwang" positions, and to forbid a blank move in them.) The reduction of search takes place in this case, because the blank move proves to be a closing one and furthermore does not generate complex forcing variations.
That is all irrelevant. I have a paper on PARADISE from 1979 here, and you could explain that as being nullmove also.

Frans Morsch used recursive nullmove, became world champion with it, and said so out loud, so that is very convincing proof of it, and marketing for it.

Bob mentioned before doing non-recursive R=1 nullmove. His claim in 2009 that he also did it recursively I'm going to ignore, as in 1994 I was checking the internet on a daily basis and saw no evidence, discussion, or claim about who invented what. Note that non-recursive nullmove was already implemented in a commercial chess program at the start of the 80s. Amazingly, that programmer has never claimed the invention, though he kicked butt with it.

As for the Elo win: 100 points for nullmove in today's software is really a lot. I call that a quantum leap forward. Back then, at the 4-ply search depths or so, it was worth even more, of course.

One extra ply was so important in those days. Getting outsearched by just 1 ply was already deadly at the start of the 90s.

There were discussions about me claiming that a good evaluation function mattered in those days. The argument some made against it was: "it might cost 2 ply, that might be too expensive".

A 2-ply difference in search depth is no big deal nowadays.

Vincent
You can ignore whatever you want. When I _started_ using null move, I used R=1, and allowed just one null-move in any path. Murray Campbell and Hsu discussed recursive (but not two successive) null-moves, using R=1, in their paper on singular extensions. They also discussed using R=2 briefly and said "this needs more testing..." I personally had completely forgotten that we used recursive null-move until I saw the code. I didn't keep a detailed changelog as I do today, so I have no idea when it was added, other than that it had to be prior to the date of the version in question. And based on version numbers, that is a 1992 or 1993 version of the program, right before we started to add singular extensions (which took over 2 years of debugging to get the serious flaws out).

The version of Cray Blitz that is on the internet uses R=1, as that was what we were doing throughout Cray Blitz, never having tested R=2 much at all. But if you look at the code, more than one null can appear in a branch, although two cannot appear in succession.

That program is 1991-era, which is getting close to 20 years old. I had forgotten about the "recursive" aspect (since CB is not recursive anyway) until I helped Carey get the code working so he could post it. Fritz didn't come close to being the first with null-move. It could be that Kaissa was the first, as I do remember discussions with Donskoy on the subject of search. And I _know_ Beal used it in 1986...

BTW null-move R=2 +SUCKS+ at 4-5 ply depths. That is from actual experience doing it in 1995, not from guesswork.
You can only know that null move sucks in your own program at that time.
I am unsure about other programs.
(a) I didn't say "null-move sucks". I said "null-move R=2 sucks". And there were lots of people who came to the same conclusion. In the late 80s, hardware was very slow except when I ran on the Cray and such. If you are doing 5 ply searches, R=2 hides so much that the program becomes almost blind to any tactics...
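
To make the mechanics concrete, here is a minimal C sketch of the scheme under discussion: recursive null move with a reduction R, never two nulls in succession, skipped when in check and in likely-zugzwang material (the guard the 1975 Kaissa paper already proposed). This is illustrative only, not Cray Blitz or Crafty source; Position, qsearch(), in_check(), has_pieces(), make_null() and unmake_null() are assumed helpers.

/* Recursive null-move pruning, sketched.  R=1 was the Cray Blitz choice;
   R=2 is the value reported above as too blind at 4-5 ply depths. */
int search(Position *pos, int alpha, int beta, int depth, int last_was_null) {
    if (depth <= 0)
        return qsearch(pos, alpha, beta, 0);

    /* Try "passing" first, unless the previous move was already a null
       (no two in succession), we are in check, or we are down to the
       endings where zugzwang makes a pass unsound. */
    if (!last_was_null && !in_check(pos) && has_pieces(pos)) {
        const int R = 2;
        make_null(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R, 1);
        unmake_null(pos);
        if (score >= beta)
            return score;   /* even a free move doesn't help the opponent */
    }

    /* ... normal move loop follows ... */
}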
Uri Blass
Posts: 10786
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

I think that things may depend on the program.
For example, I wonder how many programs searched checks at the first ply of the qsearch at that time.

Uri
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Cray Blitz did checks in the q-search. You can look at the comments in main.c for Crafty and you will notice that _it_ did checks in the q-search, since its basic search was cloned from Cray Blitz (in terms of design, not implementation details)...

A 5-ply search turns into a 2-ply search when you play a null-move at ply 3 and reduce by 2 extra plies. That blinds you to almost all tactics, especially forks and mate threats...
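
The depth arithmetic is worth spelling out: at ply 3 of a 5-ply search there are 5 - 2 = 3 plies remaining; after the null move itself and an R=2 reduction there are 3 - 1 - 2 = 0 left, so the search drops straight into the quiescence search. A rough sketch of a qsearch that tries checking moves at its first ply, so that a mate in 1 is never overlooked, might look as follows. It is illustrative only, not the actual Cray Blitz or Crafty code; MoveList, generate_captures(), generate_checks(), make_move(), unmake_move() and evaluate() are assumed helpers, and check evasions are glossed over.

/* Quiescence search with checking moves tried at the first q-ply only. */
int qsearch(Position *pos, int alpha, int beta, int qply) {
    int stand_pat = evaluate(pos);
    if (stand_pat >= beta)
        return stand_pat;               /* static score already fails high */
    if (stand_pat > alpha)
        alpha = stand_pat;

    MoveList moves;
    generate_captures(pos, &moves);
    if (qply == 0)
        generate_checks(pos, &moves);   /* catch mate-in-1 threats here */

    for (int i = 0; i < moves.count; i++) {
        make_move(pos, moves.list[i]);
        int score = -qsearch(pos, -beta, -alpha, qply + 1);
        unmake_move(pos, moves.list[i]);
        if (score >= beta)
            return score;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}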
Uri Blass
Posts: 10786
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

In the case of checks at the first ply of the qsearch, that is not correct for mate threats, because your qsearch can always detect a mate in 1, so mate-in-1 threats are never pruned away.

Uri
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Gerd Isenberg wrote:
diep wrote: That is all irrelevant. I have a paper on PARADISE from 1979 here, and you could explain that as being nullmove also.

I don't totally disagree, though I believe Chrilly's 1993 article (where he already mentioned that Frans Morsch had told him about recursive null-move in Fritz) had more impact than the 1995 title. Anyway, one should credit the Kaissa team for the first use of null-move inside a chess program, as reported in their 1975 paper.

Gerd
There are earlier references to null-move. For example, the first version of Blitz used a null-move for a different reason. I wrote about 50K lines of code (total Blitz code was about 100K lines when I rewrote it using brute-force around 1977 or so) that detected threats for the side to move. A major component of Blitz was a "causality analysis" that would wait until a problem was found (score drop, material lost, position broken, etc.) and then try to figure out why it happened (for example, you deserted a pawn and left it hanging). It was easier to discover the fix for a problem that had already occurred than to discover the problem itself first and try not to cause it in the first place.

I used null-move back then to "pass" and see if the opponent could do something damaging to me (i.e., find a problem with my position). Then the causality function could pinpoint the problem, since we had a small tree search that illustrated it, and we could more easily find a "fix".

That idea is pretty close to what we do today, except that we let the "null" move discover either that there is no problem, so we don't need to search, or that there is a problem, so we continue searching normally to see if we can fix it as we advance deeper into the tree...

I was doing that in 1973-74, where the basic idea came to me while reading Berliner's Ph.D. dissertation on "PATSOC". I don't remember the exact title, although I have it in my office. So we were all thinking along those lines back then; we just used a "null" in different (but very useful) ways...
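
The early-Blitz use of the null move as a threat detector can be sketched as follows. This is a hypothetical reconstruction of the idea, not the original Blitz code; it reuses search(), make_null() and unmake_null() from the earlier sketch, and the margin is an arbitrary illustrative parameter.

/* Null move as a threat detector: "pass", let the opponent move, and see
   whether our position falls apart.  If it does, the refutation line under
   the null move shows a causality analysis exactly what is hanging. */
int position_is_threatened(Position *pos, int score_now, int depth, int margin) {
    make_null(pos);                                        /* we pass */
    int score_after_pass = -search(pos, -32000, 32000, depth - 1, 1);
    unmake_null(pos);
    /* If doing nothing costs more than the margin, there is a threat. */
    return score_now - score_after_pass > margin;
}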
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

Uri Blass wrote:
In the case of checks at the first ply of the qsearch, that is not correct for mate threats, because your qsearch can always detect a mate in 1, so mate-in-1 threats are never pruned away.

Uri
"mate threat" does not mean "mate in 1 move threats". It means "moves which threaten mate without regard to the number of moves". And further, some programs that do q-search checks don't do them if there are no checks in the basic search, to save time. Further blinding them at very shallow depths. Fortunately nobody comes close to 5 ply searches today, even at a fraction of a second per move.
Uri Blass
Posts: 10786
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Hardware vs Software

Post by Uri Blass »

Mate-in-1 threats are clearly part of the mate threats, and if you do a 5-ply search you will probably not detect mate-in-2 threats even without null move, because of the small depth.

I think that we cannot draw conclusions about the value of null move on slow hardware from programs of the 1970s, only from programs of today, and I have not seen proof that today's programs are weaker with null move than without it, even at a 1-second-per-game time control.

Uri
rebel777

Re: Hardware vs Software

Post by rebel777 »

Tord Romstad wrote: By the way, it's amusing to see that LMR is now generally accepted as effective. Back when I started advocating it, the technique had been largely abandoned for many years, and those few programmers I managed to convince to give it a try mostly reported that it didn't work for them.
Amusing indeed. The problem with LMR is finding the right parameters, and that takes time. I was lucky to find the right values on my first try, never to find better ones in the xx improvement tries thereafter. The world upside-down, so to say :wink:

Ed
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Hardware vs Software

Post by bob »

I think the problem was caused by the original "source".

When Bruce Moreland and I first played with this idea, we didn't get anywhere with it. But we didn't spend a lot of time trying to figure out what not to reduce, and we gave up quickly, thinking that it was too dangerous, which it probably was back then.

Then along came Fruit and "history pruning", and everyone jumped on the bandwagon again, but using a badly broken approach based on history counters. I found that the history counter stuff was simply "noise": no tuning of the history threshold made any difference in how Crafty played, and (after testing on our cluster to verify that Crafty wasn't broken) it made no difference in how Fruit played either. But rather than giving up, I continued to experiment and arrived at what I do today, which does work. It is not a +100 or +200 Elo win; as the cluster results I posted a few months back showed, LMR is worth about +60 if you already have null move, and null-move is worth about +60 if you already have LMR. If you use neither, you are missing out on about +120 Elo total, which is significant. But since null-move and LMR have an "overlap" (both do reduced-depth searches), you get less from one if you already have the other...

And there is likely more left to do in selecting which branches not to reduce, such as the "sticky trans/ref table" idea used in Deep Thought / Deep Blue to make sure that once you extend a position, you continue to extend it for at least another iteration or two. The same idea could apply to reductions...
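
For concreteness, here is a bare-bones LMR fragment of the kind being debated. The selection rules (reduce only late, quiet, non-checking, non-killer moves, and re-search at full depth on a fail high) are illustrative choices, not Crafty's exact criteria; it assumes the search() from the earlier sketch plus is_capture(), gives_check() and is_killer() helpers, sitting inside the move loop of a PVS search where moves_searched counts moves already tried at this node.

/* Late move reductions, sketched.  The first few moves are never reduced. */
int reduction = 0;
if (moves_searched >= 4 && depth >= 3 &&
    !is_capture(move) && !gives_check(pos, move) && !is_killer(move))
    reduction = 1;                       /* reduce a late, quiet move */

make_move(pos, move);
int score = -search(pos, -alpha - 1, -alpha, depth - 1 - reduction, 0);
if (reduction && score > alpha)          /* better than expected: */
    score = -search(pos, -alpha - 1, -alpha, depth - 1, 0);   /* retry at full depth */
if (score > alpha && score < beta)       /* PVS re-search, full window */
    score = -search(pos, -beta, -alpha, depth - 1, 0);
unmake_move(pos, move);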
rebel777

Re: Hardware vs Software

Post by rebel777 »

It's hard to define fixed Elo jumps for new ideas, as it varies from program to program based on what's already in the program. LMR gave mine only +30 because of the various "static reductions" I do. The same applied to null-move: only +20-30, because of my own selective search stuff.

Ed