more on fixed nodes

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: more on fixed nodes

Post by Don »

delete
Last edited by Don on Thu Nov 12, 2009 10:57 pm, edited 1 time in total.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: more on fixed nodes

Post by Don »

Hi Bob,

Let's zero in on one issue at a time. First of all, I'm an ally on the cluster testing and on the opening-book methodology of using a huge number of openings. The reason I played devil's advocate is that I wanted to give you a sense of what it feels like you are doing to me: taking a simple concept that is only slightly flawed and constructing ridiculous scenarios to try to prove it is seriously broken. At least, that is how it seems to me.

So the issue I want to focus on is your contention that time-control games are perfect with respect to resource allocation. Here is what I believe:

1. Running time-control games is an extremely effective way to test, and it's very accurate.
2. Like all other methods, it is slightly flawed.

Note that I am not claiming it is more flawed than anything else in particular, nor am I claiming there is a less flawed way. In fact I think it's a great thing. But I also think judicious fixed-depth testing is extremely effective, and also slightly flawed.

So I am going to take something that is good and only slightly flawed, and try to be unreasonable about it, the same way I claim you are doing:

If you make a change to your program, then recompile it, the speed of the new program is not an exact result of the change. Compilers are strange beasts and just moving a routine slightly can affect the performance of the program. It may have nothing to do with your change.

Go to one of your machines and start up the program in single processor mode, and run a 20 ply search from the opening position. Then quit the program, start it up again and run this test again. I think you will find that the 2 times will not be exactly the same.

Now start 2 programs on the same machine and run these same timing tests. Did the instance you started first take the same amount of time as the second instance? I'll bet it didn't. This is called "processor affinity" and I seriously doubt your cluster is immune to it.

You know much more about this stuff than I do, but if you want to tell me that not a single game would ever get a changed result due to caching issues, processor affinity, or chaos in the system and that there actually is no chaos, then I would have to believe you to be a fraud (but I know that you are not, so relax.)

Do you have /dev/urandom working on these clusters or are they off? It does not matter if they are on or off, the principles that make them work are still alive in your machine. There is still chaos and entropy and no amount of hand waving will change that.

Now, do I believe that this completely invalidates the results of my testing? Of course not, otherwise I would not use this time control testing myself like you do. In fact I do most of my testing on a quad core linux machine and I utilize all 4 cores, which is certainly a chaotic system. The tester itself is efficient, but it is in fact yet another process running on the machine. There are always 8 chess programs in memory at a time as well as the tester itself and of course chess programs get killed and restarted and PGN files are getting stored on disk. These are all chaotic events, so don't try to tell me your machine is running clean when you must certainly be doing something similar.

Now we come to using fixed depth testing. It is not subject at all to these chaotic events that even the cleanest of systems exhibit. Since I do have a time adjusted feature, I am back in chaos land, but not when it comes to isolating the change from the implementation.

So is that flawed? Of course it is. Is it flawed beyond all reason and hope and only useful for basic debugging to produce a crash in a deterministic way? You think so, but you don't back it up with anything but anecdotes and what I consider superstitions.

I am afraid you or your spokesman guy who defends you will come back with an email trying to prove that time control testing works - once again trying to pretend that I disagree when I don't. It's totally ridiculous to keep trying this because I agree with those things. The same with the opening book.

And I don't care if the opening book was discussed 100 years ago when I was not here - what kind of argument is that? The way your advocate put it was offensive: that I did not have a right to talk about something chess-related because he had already "hammered it out" with others when I wasn't here. There is nothing discussed on these pages that should not be revisited, and no conclusion here is beyond being questioned by others.

I'm not addressing you on this issue, Bob, but he was offensive; he basically accused us of petty jealousy over your cluster. He said, "One thing is clear, the statistical facts proven by cluster testing are extremely disagreeable, especially to people who have little hope of obtaining access to cluster/cloud resources." Then he counseled you to stop mentioning your cluster, as it was invoking this petty jealousy in us. I hope this does not devolve into any name-calling between us, because I do have respect for you as an engineer; I just happen to disagree on this issue. Is this guy like a Bob Hyatt groupie or something? How come I don't get to have any groupies?
mhull
Posts: 13447
Joined: Wed Mar 08, 2006 9:02 pm
Location: Dallas, Texas
Full name: Matthew Hull

Re: more on fixed nodes

Post by mhull »

Don wrote:I am afraid you or your spokesman guy who defends you will come back with an email trying to prove that time control testing works - once again trying to pretend that I disagree when I don't. It's totally ridiculous to keep trying this because I agree with those things. The same with the opening book.
Nobody pretends you disagree. The issue is with skinning the same cat (accurately measuring small improvements) with another method requiring much smaller computing resources. Here you seem to disagree with Bob, sans objective data. Bob is shooting that down because he's been there already and has the T-shirt.
Don wrote: And I don't care if the opening book was discussed 100 years ago when I was not here - what kind of argument is that? The way your advocate put it was offensive: that I did not have a right to talk about something chess-related because he had already "hammered it out" with others when I wasn't here. There is nothing discussed on these pages that should not be revisited, and no conclusion here is beyond being questioned by others.


It's not an opening book, but 4000 starting positions chosen for their wide variety. This was discussed at length some time ago. IMO, you should search the forum and read those discussions. It would increase the value of subsequent contributions on the subject by not covering points already addressed.
Don wrote: I'm not addressing you on this issue, Bob, but he was offensive; he basically accused us of petty jealousy over your cluster. He said, "One thing is clear, the statistical facts proven by cluster testing are extremely disagreeable, especially to people who have little hope of obtaining access to cluster/cloud resources." Then he counseled you to stop mentioning your cluster, as it was invoking this petty jealousy in us. I hope this does not devolve into any name-calling between us, because I do have respect for you as an engineer; I just happen to disagree on this issue. Is this guy like a Bob Hyatt groupie or something? How come I don't get to have any groupies?
Hey man, I've been your groupie in the past. I bought Rex Chess back in 1990 (IIRC) for my i386 25MHz and loved it! I hope you'll be releasing another great product in the future, and I wish you the best of luck with it.
Matthew Hull
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: more on fixed nodes

Post by bob »

Don wrote:Hi Bob,

Let's zero in on one issue at a time. First of all, I'm an ally on the cluster testing and on the opening-book methodology of using a huge number of openings. The reason I played devil's advocate is that I wanted to give you a sense of what it feels like you are doing to me: taking a simple concept that is only slightly flawed and constructing ridiculous scenarios to try to prove it is seriously broken. At least, that is how it seems to me.

So the issue I want to focus on is your contention that time-control games are perfect with respect to resource allocation. Here is what I believe:

1. Running time-control games is an extremely effective way to test, and it's very accurate.
2. Like all other methods, it is slightly flawed.

Note that I am not claiming it is more flawed than anything else in particular, nor am I claiming there is a less flawed way. In fact I think it's a great thing. But I also think judicious fixed-depth testing is extremely effective, and also slightly flawed.

So I am going to take something that is good and only slightly flawed, and try to be unreasonable about it, the same way I claim you are doing:

If you make a change to your program, then recompile it, the speed of the new program is not an exact result of the change. Compilers are strange beasts and just moving a routine slightly can affect the performance of the program. It may have nothing to do with your change.

Go to one of your machines and start up the program in single processor mode, and run a 20 ply search from the opening position. Then quit the program, start it up again and run this test again. I think you will find that the 2 times will not be exactly the same.
Actually this is not very common in my Linux testing. The problem you are describing is a result of memory/cache aliasing/addressing issues, where sometimes too many memory blocks map to the same cache set and the mapping is not equidistributed across the entire cache. There are good memory managers that solve this. But for the sake of argument:

Code: Select all

log.003:              time=28.61  mat=0  n=95007683  fh=90%  nps=3.3M
log.004:              time=28.50  mat=0  n=95007683  fh=90%  nps=3.3M
log.005:              time=28.57  mat=0  n=95007683  fh=90%  nps=3.3M
log.006:              time=28.49  mat=0  n=95007683  fh=90%  nps=3.3M
So nps is the same to one decimal place. Time varies from 28.49 to 28.61, about 0.1 seconds max. And if I run that test for 2 minutes each, I still see that 0.1-second range (that is about half a scheduling quantum, and is about as accurately as one can time things based on wall-clock time). That turns into a variance of less than 0.1%. Very small. A factor of 200% (doubling) or 50% (halving) is worth about 50 Elo. So what is 0.1% worth? Less than an Elo point, at best.
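The scale argument here can be checked with a little arithmetic. Below is a minimal sketch, assuming the post's rough figure of about 50 Elo per doubling of search speed (that constant is the discussion's estimate, not a measured value):

```python
import math

# Rough Elo impact of an effective-speed ratio, assuming the post's
# figure of ~50 Elo per doubling of search speed. This constant is an
# estimate taken from the discussion, not a measured value.
ELO_PER_DOUBLING = 50.0

def elo_from_speed_ratio(ratio):
    """Approximate Elo swing for a given effective-speed ratio."""
    return ELO_PER_DOUBLING * math.log2(ratio)

# 0.1 s of timing jitter on a 2-minute search: roughly a 0.08% change.
print(round(elo_from_speed_ratio(120.1 / 120.0), 2))  # about 0.06 Elo
# Compare with a 2.5x speed difference between game phases:
print(round(elo_from_speed_ratio(2.5), 1))            # about 66 Elo
```

The point of the sketch is the three orders of magnitude between the two numbers: timing jitter is worth hundredths of an Elo point, while a phase-dependent speed bias is worth tens.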

Now, a worst case:

Code: Select all

             nps=2.2M
             nps=3.3M 
             nps=4.2M
             nps=5.2M
Those represent four positions: opening, middlegame, early endgame, late endgame. That is a factor of 2.5x from opening to endgame, or something in the range of 60 Elo. What do your fixed-node (or fixed-depth) searches do to my results? You have to pick some NPS to figure out how many nodes you are going to allow me to search. Let's say you want roughly 5 seconds per move. Do you give me 2.2M x 5 (11 million nodes), which will make me search very quickly and use about 2 seconds per move in the endgame, giving my opponent a 2.5x time advantage? Or do you give me 5.2M x 5 (26 million nodes), which will give me a 2.5x time _advantage_ in the opening?

Those are not fabricated numbers. Anyone can verify them with any version of Crafty around. And that, my friend, _does_ represent a _huge_ bias.
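To make the bias concrete, here is a minimal sketch using the per-phase NPS figures quoted above: size a fixed node budget from the opening speed, then see how much wall-clock time each phase actually gets.

```python
# Per-phase NPS figures taken from the post (Crafty-like numbers).
NPS = {
    "opening":       2.2e6,
    "middlegame":    3.3e6,
    "early endgame": 4.2e6,
    "late endgame":  5.2e6,
}

def seconds_per_move(node_budget):
    """Effective thinking time per phase under a fixed node count."""
    return {phase: node_budget / nps for phase, nps in NPS.items()}

# Budget sized for ~5 s/move in the opening: 2.2M nps * 5 s = 11M nodes.
times = seconds_per_move(11e6)
for phase, secs in times.items():
    print(f"{phase:14s} {secs:4.1f} s")
# The opening gets 5.0 s but the late endgame only ~2.1 s: a built-in
# 2.4x handicap that playing more games can never average away.
```

Choosing 26M nodes instead simply flips the handicap to the opening; no single fixed budget gives every phase the time a real clock would.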

Now suppose you do as I did 3 years ago and work on the "when to trade or not trade" code. And because you play better endgames (you believe), you encourage earlier trading of pieces (but not pawns). So I reach the endgame more frequently in the set of games where I am testing this change. Which node count did you choose for the tests? 11M? Now I play 70 Elo weaker in the endgame (2.5x slower, since you are cheating me out of 15 million nodes or so and only letting me search 11M when I could search 26M). Or you give me 26M and this change looks brilliant. But would it look brilliant if we were simply searching 5 seconds per move, where everything is fair? How can you tell without going back and doing the real search?

This is _exactly_ the situation I debugged back when we started the cluster-testing stuff, because fixed nodes certainly eliminated the variability of the results. But it added a bias that left me scratching my head when changes that appeared to be better caused us to play worse when I tested using time.

That's not made up, the numbers above are real. Not guesswork. Not hyperbole created to make a point. Just real data that shows my issue with this kind of testing. Tell me how to eliminate that, and I'll accept your idea. But do _not_ try to tell me to just pick opponents that have the same speed-up/slowdown as crafty in various phases of the game. That's not possible.

Now start 2 programs on the same machine and run these same timing tests. Did the instance you started first take the same amount of time as the second instance? I'll bet it didn't. This is called "processor affinity" and I seriously doubt your cluster is immune to it.
They are perfectly repeatable in that regard. The Linux process scheduler will lock one program onto one CPU and it won't move. If you run three, one will run on one CPU and the other two will run on the other CPU, to reduce the cache thrashing caused by switching between cores/processors and their distinct L1 (and sometimes L2) caches.
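This pinning can also be observed or forced from user space. A sketch, assuming Python's Linux-only `os.sched_setaffinity`/`os.sched_getaffinity` interface; the choice of CPU here is illustrative:

```python
import os

# Ask which CPUs the current process may run on (Linux-only API;
# pid 0 means "this process").
allowed = os.sched_getaffinity(0)
print(sorted(allowed))

# Pin this process to a single CPU, mimicking the scheduler behaviour
# described above, so repeated timing runs see the same cache state.
os.sched_setaffinity(0, {min(allowed)})
print(sorted(os.sched_getaffinity(0)))  # now a one-element set
```

Explicit pinning like this is one way to make single-engine timing tests repeatable even on a machine whose scheduler is less well behaved.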

Again, you are on the wrong scale. You are looking at 0.1% variations. I am looking at 250% variations caused by fixed node testing. They are not comparable.

You know much more about this stuff than I do, but if you want to tell me that not a single game would ever get a changed result due to caching issues, processor affinity, or chaos in the system and that there actually is no chaos, then I would have to believe you to be a fraud (but I know that you are not, so relax.)
You do realize that we have already proven this several times, verified by myself and others. Take any program you want and have it play a fixed-node game against an opponent. Now change the fixed node count by +1. Then by -1. Then by +2. Then by -2. Someone tried this with Fruit and played one hundred games. Out of those 100 games, only two were the same. This was all discussed last year and the year before, and has been rehashed once or twice this year. It is a known problem, and there's no solution. But that "jitter" is not in the form of a huge bias; it is in the form of a small random perturbation, which is a significantly different issue. Enough games wash that randomness out. More games will _never_ wash the bias away.
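The jitter-versus-bias distinction is easy to demonstrate with a toy simulation. This sketch uses made-up numbers rather than engine data: zero-mean noise shrinks as games accumulate, while a constant offset survives any sample size.

```python
import random

random.seed(42)  # reproducible demo

TRUE_STRENGTH = 0.0
BIAS = -0.05       # e.g. a node budget that shortchanges one game phase
GAMES = 100_000

def mean_result(offset):
    """Average per-game result: Gaussian noise plus a fixed offset."""
    total = sum(TRUE_STRENGTH + offset + random.gauss(0, 1)
                for _ in range(GAMES))
    return total / GAMES

noisy = mean_result(0.0)     # jitter only: the mean heads toward 0
biased = mean_result(BIAS)   # jitter plus bias: converges to -0.05
print(round(noisy, 3), round(biased, 3))
```

The standard error of the mean falls as 1/sqrt(games), so the noisy estimate tightens around the true value, but the biased one tightens around the wrong value, which is exactly the failure mode being described.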

Do you have /dev/urandom working on these clusters or are they off? It does not matter if they are on or off, the principles that make them work are still alive in your machine. There is still chaos and entropy and no amount of hand waving will change that.
And again. Scale. Think scale. I don't care about 0.1% variations. I care about 200%+ biases.

Now, do I believe that this completely invalidates the results of my testing? Of course not, otherwise I would not use this time control testing myself like you do. In fact I do most of my testing on a quad core linux machine and I utilize all 4 cores, which is certainly a chaotic system. The tester itself is efficient, but it is in fact yet another process running on the machine. There are always 8 chess programs in memory at a time as well as the tester itself and of course chess programs get killed and restarted and PGN files are getting stored on disk. These are all chaotic events, so don't try to tell me your machine is running clean when you must certainly be doing something similar.
Never claimed anything like that. "Very clean"? Absolutely. "Perfectly clean"? No way to ensure that. But for a 0.1% variance, I am not concerned. It is both random and tiny, and the statistical analysis done in BayesElo takes care of that.


Now we come to using fixed depth testing. It is not subject at all to these chaotic events that even the cleanest of systems exhibit. Since I do have a time adjusted feature, I am back in chaos land, but not when it comes to isolating the change from the implementation.
Again, tell me how you can adjust for the above examples. And I didn't try to pick extreme examples, just opening, middlegame, and finally late endgame. There are certain kinds of positions that speed up or slow down even more, and different programs behave differently. Ferret was the most pronounced I saw, speeding up by a factor of 4+ in the endgame. 2.5x is bad enough to wreck this train.


So is that flawed? Of course it is. Is it flawed beyond all reason and hope and only useful for basic debugging to produce a crash in a deterministic way? You think so, but you don't back it up with anything but anecdotes and what I consider superstitions.
Only because you missed all the previous discussions would you say that. I gave data above that is anything but "anecdotal". Let's see how you try to work your way out of the maze I presented before we continue debating correctness of fixed nodes. There is much work for you to do to solve the issues I raised. And I raised them with _real_ data from my program. Which just happens to be the only program I care about testing.


I am afraid you or your spokesman guy who defends you will come back with an email trying to prove that time control testing works - once again trying to pretend that I disagree when I don't. It's totally ridiculous to keep trying this because I agree with those things. The same with the opening book.

And I don't care if the opening book was discussed 100 years ago when I was not here - what kind of argument is that? The way your advocate put it was offensive: that I did not have a right to talk about something chess-related because he had already "hammered it out" with others when I wasn't here. There is nothing discussed on these pages that should not be revisited, and no conclusion here is beyond being questioned by others.

I'm not addressing you on this issue, Bob, but he was offensive; he basically accused us of petty jealousy over your cluster. He said, "One thing is clear, the statistical facts proven by cluster testing are extremely disagreeable, especially to people who have little hope of obtaining access to cluster/cloud resources." Then he counseled you to stop mentioning your cluster, as it was invoking this petty jealousy in us. I hope this does not devolve into any name-calling between us, because I do have respect for you as an engineer; I just happen to disagree on this issue. Is this guy like a Bob Hyatt groupie or something? How come I don't get to have any groupies?
I think he felt the same way I did, in that this has been discussed many times in the past. Sort of like jumping into a nuclear physics discussion and then asking someone to explain what this E = MC^2 crap is all about. Book positions vs. random starting positions has been discussed ad nauseam. And I've repeatedly explained why I chose to do it with random known book positions rather than positions from my tournament book. I could have produced either kind with no difficulty, but I want to see overall improvement, not improvement in selected openings only.

I don't think anybody was trying to be offensive. It's just that this subject has been discussed so many times in the past, and it has now become the center of yet another argument.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: more on fixed nodes

Post by Don »

Bob, you are doing the exact same thing yet again. Repeat after me and read my lips: I am not claiming that any of these minor variations makes any real difference. In other words, I AGREE WITH YOU ON THIS.

Did you hear that? Should I say it again?

Go back very carefully and read what I actually said, about how I was going to illustrate how you can take something insignificant and blow it completely out of proportion. That's what I was doing! Anybody home out there?

It doesn't surprise me one bit that we keep going around in circles about this, since you seem to be concerned only with what you are going to say next and don't read anything anyone else says.




bob wrote:
Don wrote:Hi Bob,

Let's zero in on one issue at a time. First of all, I'm an ally on the cluster testing and on the opening-book methodology of using a huge number of openings. The reason I played devil's advocate is that I wanted to give you a sense of what it feels like you are doing to me: taking a simple concept that is only slightly flawed and constructing ridiculous scenarios to try to prove it is seriously broken. At least, that is how it seems to me.

So the issue I want to focus on is your contention that time-control games are perfect with respect to resource allocation. Here is what I believe:

1. Running time-control games is an extremely effective way to test, and it's very accurate.
2. Like all other methods, it is slightly flawed.

Note that I am not claiming it is more flawed than anything else in particular, nor am I claiming there is a less flawed way. In fact I think it's a great thing. But I also think judicious fixed-depth testing is extremely effective, and also slightly flawed.

So I am going to take something that is good and only slightly flawed, and try to be unreasonable about it, the same way I claim you are doing:

If you make a change to your program, then recompile it, the speed of the new program is not an exact result of the change. Compilers are strange beasts and just moving a routine slightly can affect the performance of the program. It may have nothing to do with your change.

Go to one of your machines and start up the program in single processor mode, and run a 20 ply search from the opening position. Then quit the program, start it up again and run this test again. I think you will find that the 2 times will not be exactly the same.
Actually this is not very common in my Linux testing. The problem you are describing is a result of memory/cache aliasing/addressing issues, where sometimes too many memory blocks map to the same cache set and the mapping is not equidistributed across the entire cache. There are good memory managers that solve this. But for the sake of argument:

Code: Select all

log.003:              time=28.61  mat=0  n=95007683  fh=90%  nps=3.3M
log.004:              time=28.50  mat=0  n=95007683  fh=90%  nps=3.3M
log.005:              time=28.57  mat=0  n=95007683  fh=90%  nps=3.3M
log.006:              time=28.49  mat=0  n=95007683  fh=90%  nps=3.3M
So nps is the same to one decimal place. Time varies from 28.49 to 28.61, about 0.1 seconds max. And if I run that test for 2 minutes each, I still see that 0.1-second range (that is about half a scheduling quantum, and is about as accurately as one can time things based on wall-clock time). That turns into a variance of less than 0.1%. Very small. A factor of 200% (doubling) or 50% (halving) is worth about 50 Elo. So what is 0.1% worth? Less than an Elo point, at best.

Now, a worst case:

Code: Select all

             nps=2.2M
             nps=3.3M 
             nps=4.2M
             nps=5.2M
Those represent four positions: opening, middlegame, early endgame, late endgame. That is a factor of 2.5x from opening to endgame, or something in the range of 60 Elo. What do your fixed-node (or fixed-depth) searches do to my results? You have to pick some NPS to figure out how many nodes you are going to allow me to search. Let's say you want roughly 5 seconds per move. Do you give me 2.2M x 5 (11 million nodes), which will make me search very quickly and use about 2 seconds per move in the endgame, giving my opponent a 2.5x time advantage? Or do you give me 5.2M x 5 (26 million nodes), which will give me a 2.5x time _advantage_ in the opening?

Those are not fabricated numbers. Anyone can verify them with any version of Crafty around. And that, my friend, _does_ represent a _huge_ bias.

Now suppose you do as I did 3 years ago and work on the "when to trade or not trade" code. And because you play better endgames (you believe), you encourage earlier trading of pieces (but not pawns). So I reach the endgame more frequently in the set of games where I am testing this change. Which node count did you choose for the tests? 11M? Now I play 70 Elo weaker in the endgame (2.5x slower, since you are cheating me out of 15 million nodes or so and only letting me search 11M when I could search 26M). Or you give me 26M and this change looks brilliant. But would it look brilliant if we were simply searching 5 seconds per move, where everything is fair? How can you tell without going back and doing the real search?

This is _exactly_ the situation I debugged back when we started the cluster-testing stuff, because fixed nodes certainly eliminated the variability of the results. But it added a bias that left me scratching my head when changes that appeared to be better caused us to play worse when I tested using time.

That's not made up, the numbers above are real. Not guesswork. Not hyperbole created to make a point. Just real data that shows my issue with this kind of testing. Tell me how to eliminate that, and I'll accept your idea. But do _not_ try to tell me to just pick opponents that have the same speed-up/slowdown as crafty in various phases of the game. That's not possible.

Now start 2 programs on the same machine and run these same timing tests. Did the instance you started first take the same amount of time as the second instance? I'll bet it didn't. This is called "processor affinity" and I seriously doubt your cluster is immune to it.
They are perfectly repeatable in that regard. The Linux process scheduler will lock one program onto one CPU and it won't move. If you run three, one will run on one CPU and the other two will run on the other CPU, to reduce the cache thrashing caused by switching between cores/processors and their distinct L1 (and sometimes L2) caches.

Again, you are on the wrong scale. You are looking at 0.1% variations. I am looking at 250% variations caused by fixed node testing. They are not comparable.

You know much more about this stuff than I do, but if you want to tell me that not a single game would ever get a changed result due to caching issues, processor affinity, or chaos in the system and that there actually is no chaos, then I would have to believe you to be a fraud (but I know that you are not, so relax.)
You do realize that we have already proven this several times, verified by myself and others. Take any program you want and have it play a fixed-node game against an opponent. Now change the fixed node count by +1. Then by -1. Then by +2. Then by -2. Someone tried this with Fruit and played one hundred games. Out of those 100 games, only two were the same. This was all discussed last year and the year before, and has been rehashed once or twice this year. It is a known problem, and there's no solution. But that "jitter" is not in the form of a huge bias; it is in the form of a small random perturbation, which is a significantly different issue. Enough games wash that randomness out. More games will _never_ wash the bias away.

Do you have /dev/urandom working on these clusters or are they off? It does not matter if they are on or off, the principles that make them work are still alive in your machine. There is still chaos and entropy and no amount of hand waving will change that.
And again. Scale. Think scale. I don't care about 0.1% variations. I care about 200%+ biases.

Now, do I believe that this completely invalidates the results of my testing? Of course not, otherwise I would not use this time control testing myself like you do. In fact I do most of my testing on a quad core linux machine and I utilize all 4 cores, which is certainly a chaotic system. The tester itself is efficient, but it is in fact yet another process running on the machine. There are always 8 chess programs in memory at a time as well as the tester itself and of course chess programs get killed and restarted and PGN files are getting stored on disk. These are all chaotic events, so don't try to tell me your machine is running clean when you must certainly be doing something similar.
Never claimed anything like that. "Very clean"? Absolutely. "Perfectly clean"? No way to ensure that. But for a 0.1% variance, I am not concerned. It is both random and tiny, and the statistical analysis done in BayesElo takes care of that.


Now we come to using fixed depth testing. It is not subject at all to these chaotic events that even the cleanest of systems exhibit. Since I do have a time adjusted feature, I am back in chaos land, but not when it comes to isolating the change from the implementation.
Again, tell me how you can adjust for the above examples. And I didn't try to pick extreme examples, just opening, middlegame, and finally late endgame. There are certain kinds of positions that speed up or slow down even more, and different programs behave differently. Ferret was the most pronounced I saw, speeding up by a factor of 4+ in the endgame. 2.5x is bad enough to wreck this train.


So is that flawed? Of course it is. Is it flawed beyond all reason and hope and only useful for basic debugging to produce a crash in a deterministic way? You think so, but you don't back it up with anything but anecdotes and what I consider superstitions.
Only because you missed all the previous discussions would you say that. I gave data above that is anything but "anecdotal". Let's see how you try to work your way out of the maze I presented before we continue debating correctness of fixed nodes. There is much work for you to do to solve the issues I raised. And I raised them with _real_ data from my program. Which just happens to be the only program I care about testing.


I am afraid you or your spokesman guy who defends you will come back with an email trying to prove that time control testing works - once again trying to pretend that I disagree when I don't. It's totally ridiculous to keep trying this because I agree with those things. The same with the opening book.

And I don't care if the opening book was discussed 100 years ago when I was not here - what kind of argument is that? The way your advocate put it was offensive: that I did not have a right to talk about something chess-related because he had already "hammered it out" with others when I wasn't here. There is nothing discussed on these pages that should not be revisited, and no conclusion here is beyond being questioned by others.

I'm not addressing you on this issue, Bob, but he was offensive; he basically accused us of petty jealousy over your cluster. He said, "One thing is clear, the statistical facts proven by cluster testing are extremely disagreeable, especially to people who have little hope of obtaining access to cluster/cloud resources." Then he counseled you to stop mentioning your cluster, as it was invoking this petty jealousy in us. I hope this does not devolve into any name-calling between us, because I do have respect for you as an engineer; I just happen to disagree on this issue. Is this guy like a Bob Hyatt groupie or something? How come I don't get to have any groupies?
I think he felt the same way I did, in that this has been discussed many times in the past. Sort of like jumping into a nuclear physics discussion and then asking someone to explain what this E = MC^2 crap is all about. Book positions vs. random starting positions has been discussed ad nauseam. And I've repeatedly explained why I chose to do it with random known book positions rather than positions from my tournament book. I could have produced either kind with no difficulty, but I want to see overall improvement, not improvement in selected openings only.

I don't think anybody was trying to be offensive. It's just that this subject has been discussed so many times in the past, and then becomes the center of yet another argument.
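The point about wanting overall improvement rather than improvement in selected openings is commonly handled by playing every test position twice with colors swapped, so any bias in the position itself cancels out of the head-to-head score. A minimal sketch of that pairing idea (the position names and the helper function are hypothetical, not from any engine's actual test harness):

```python
# Hypothetical position set; in practice these would be FEN strings
# sampled from a large, varied collection of known openings,
# not from one engine's tournament book.
positions = [f"position_{i}" for i in range(8)]

def make_schedule(positions, engine_a="A", engine_b="B"):
    """Pair the games: each position is played twice with colors
    swapped, so a one-sided opening hurts both engines equally and
    drops out of the score difference."""
    schedule = []
    for pos in positions:
        schedule.append((pos, engine_a, engine_b))  # A plays White
        schedule.append((pos, engine_b, engine_a))  # colors reversed
    return schedule

games = make_schedule(positions)
# Every position appears exactly twice, once with each color assignment.
```

With thousands of such paired games, the measured difference reflects the engines rather than the openings, which is the "overall improvement" goal described above.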
Eizenhammer

Re: more on fixed nodes

Post by Eizenhammer »

Don wrote:Bob, ...
It doesn't surprise me one bit that we keep going around in circles about this since you seem to only be concerned about what you are going to say next and don't read anything anyone else says.
This is his attitude and his procedure. To hope for anything else is just a waste of time and brain. If God came here and explained some tricks, Bob would say that he does not care, because he has been there and done it 40 years ago, and everything else is a no-no ...

Re: more on fixed nodes

Post by Don »

mhull wrote:
Don wrote: And I don't care if the opening book was discussed 100 years ago when I was not here - what kind of argument is that? The way your advocate put it was offensive: that I did not have a right to talk about something chess related because he had already "hammered it out" with others when I wasn't here. There is nothing discussed on these pages that should not be re-visited, and no conclusion that should not be questioned by others.


It's not an opening book, but 4000 starting positions chosen for their wide variety. Discussed at length some time ago. IMO, you should search the forum and read those discussions. It would increase the value of subsequent contributions on the subject by not covering points already addressed.
Now why would I want to go to so much trouble when this discussion has nothing to do with the openings and I am in complete agreement with Bob anyway on this?
Don wrote: I'm not addressing you on this issue, Bob, but he was offensive; he basically accused us of petty jealousy over your cluster. He said, "One thing is clear, the statistical facts proven by cluster testing are extremely disagreeable, especially to people who have little hope of obtaining access to cluster/cloud resources." Then he counseled you to stop mentioning your cluster, as it was invoking this petty jealousy in us. I hope this does not involve any name calling between us, because I do have respect for you as an engineer; I just happen to disagree on this issue. Is this guy like a Bob Hyatt groupie or something? How come I don't get to have any groupies?
Hey man, I've been your groupie in the past. I bought Rex Chess back in 1990, (IIRC) for my i386 25Mhz and loved it! I hope you'll be releasing another great product in the future and wish you the best of luck with it.
Ok, I forgive you now. Are you going to be a loyal groupie or will you be fickle? What's it going to be, me or Bob?
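On the "statistical facts proven by cluster testing" point quoted above: the reason a cluster helps is that the confidence interval on an Elo difference shrinks only with the square root of the number of games, so small improvements need huge samples. A minimal sketch of that arithmetic (the win/draw/loss counts below are made up for illustration, not results from this thread):

```python
import math

def elo(score):
    """Convert an expected score in (0, 1) to an Elo difference."""
    return -400.0 * math.log10(1.0 / score - 1.0)

def elo_interval(wins, draws, losses, z=1.96):
    """Point estimate and ~95% confidence interval for the Elo
    difference implied by a win/draw/loss record."""
    n = wins + draws + losses
    s = (wins + 0.5 * draws) / n              # mean score per game
    var = (wins * (1.0 - s) ** 2
           + draws * (0.5 - s) ** 2
           + losses * (0.0 - s) ** 2) / n     # per-game score variance
    margin = z * math.sqrt(var / n)           # error of the mean score
    return elo(s), elo(s - margin), elo(s + margin)

# Illustrative numbers only: a 52% score over 1000 games gives an
# interval that still includes zero, i.e. not yet a proven gain.
est, lo, hi = elo_interval(416, 208, 376)
```

Doubling the precision requires roughly four times the games, which is why a few hundred games at long time control settle much less than tens of thousands on a cluster.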

Re: more on fixed nodes

Post by Don »

I think he felt the same way I did, in that this has been discussed many times in the past. Sort of like jumping into a nuclear physics discussion and then asking someone to explain what this E = MC^2 crap is all about. Book positions vs random starting positions has been discussed ad nauseam. And I've repeatedly explained why I chose to do it with random known book positions, rather than positions from my tournament book. I could have produced either kind with no difficulty, but I want to see overall improvement, not improvement in selected openings only.
Except that I was not the least bit interested in talking about the openings.

Anyway, he is my groupie now, I stole him from you.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: more on fixed nodes

Post by bob »

Why did you ignore the _important_ part of my post? You raised the question about variability. I simply pointed out how small this was, so that I could show the nps variability and show how _large_ that is. It is the "large" stuff that wrecks tests, not the small stuff.
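A minimal sketch of measuring the run-to-run NPS spread described above (all numbers are hypothetical; real timings would come from rerunning the same fixed-depth search on one core, as suggested earlier in the thread):

```python
import statistics

# Hypothetical timings (seconds) for the *same* fixed-depth search
# repeated on one core. The node count is identical every run, yet
# wall-clock time, and therefore NPS, varies between runs.
nodes = 48_000_000                       # identical node count each run
times = [41.8, 42.3, 41.6, 42.9, 42.1]   # made-up values, for illustration

nps_samples = [nodes / t for t in times]
mean_nps = statistics.mean(nps_samples)
rel_spread = statistics.stdev(nps_samples) / mean_nps  # coefficient of variation
```

Whether that relative spread is "small stuff" or "large stuff" in the sense argued above is exactly what repeating the measurement lets you decide.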

Re: more on fixed nodes

Post by bob »

Eizenhammer wrote:
Don wrote:Bob, ...
It doesn't surprise me one bit that we keep going around in circles about this since you seem to only be concerned about what you are going to say next and don't read anything anyone else says.
This is his attitude and his procedure. To hope for anything else is just a waste of time and brain. If God came here and explained some tricks, Bob would say that he does not care, because he has been there and done it 40 years ago, and everything else is a no-no ...
If God tried to convince me that something works when I have tried it and found that it did not, then you are correct. I don't assume "perfection" from any entity. That way I am _never_ disappointed.