hash collisions

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

hgm
Posts: 27851
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: hash collisions

Post by hgm »

OK, I stand corrected. But that was long after you started dumping on me, so it can hardly be called an excuse for appearing here.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hash collisions

Post by bob »

chrisw wrote: Fri Feb 21, 2020 1:11 pm
bob wrote: Fri Feb 21, 2020 3:46 am Now that was pretty rambling and uninformative. What bad thing has happened in this thread? There is zero doubt that no software is bug-free; too many giants in computer science harp on this. There is little disagreement that bugs that produce execution issues (crashes, hangs, bad/illegal moves, etc.) should be fixed. There is some debate about other small kinds of accepted bugs. One simple example is pseudo-legal move generation: you may search one illegal position for every pseudo-legal move that turns out not to be legal. Is that seriously different from what HGM wrote? NEITHER causes any bad move to be displayed, nor any crash, etc.

Why we have to argue about actual facts that are well-known is food for thought. But of course, feel free to show me any computer science authority who says large programs can be written 100% bug-free. THEN there would be room for debate. I've not seen any, but that doesn't mean there are none. Just as not seeing any evidence of bugs doesn't mean there are none. (Sound familiar?)
A partial search of a haystack for needles proves neither that there are no more needles nor that there are more needles.

You are using only one half of the haystack/needle logic because of a BELIEF that there are inevitably more needles. In other words, a circular argument: the premise is the conclusion.

I say two, er three or four, things:

1. It is possible, in a defined system such as chess, to design sufficient destructive tests, including internal self-verification, and, after fifty years' history of the hobby, to find everything that could go wrong. And, for those things that are inclined to go wrong, to build in robust defences.

2. Of extreme importance is the design team's or programmer's ATTITUDE to bugs. Programmers who think bugs are inevitable make a lot less effort at being bug-free and tend to leave fixing (or, often, arguing it's somebody else's fault) to a case-by-case basis in the future.

3. The belief in the inevitable, infinite nature of bugs in computer chess is also predicated on the notion that computer chess is highly complex. I argue that this notion is self-serving: it suits us to believe we are struggling with and conquering complex domains. Hey, some people even build their identities around this notion of themselves and the degree of the complexity. I posit the opposite: computer chess programming is fairly straightforward, bounded, and made easier year on year.
I will repeat this again: "do some reading". I taught software engineering for YEARS. Good texts cover all the bases: various design methodologies, team organization and who does what, verification/testing, on and on and on. And EVERY book points out that NO methodology will produce bug-free code. They COULD, given sufficient time, but sufficient time would be measured in decades, not the weeks or months required in real software development. So (a) a bug-free program could theoretically be written; (b) significant testing could eliminate ALL bugs in a program; (c) neither of these has any significant probability of happening. Unfortunately. What do you do when you design a parallel algorithm, it works flawlessly for years, then suddenly starts to fail with high regularity? You investigate, and discover that your dependence on sequential writes to memory is now violated by out-of-order writes on modern processors. Should you have thought of that when designing/coding? That is just one tiny example (which did bite some of us, by the way). There are many bugs that are not caused directly by broken programming, but instead by unexpected behavior as hardware changes. If it crashes or fails, it is a bug. Cause is irrelevant.

One can check input exhaustively (here "exhaustively" means with much effort, not testing every possible input). And if you are careful, you can guarantee that the input won't cause your program to crash or misbehave. But such code is very small. What about a search, which is recursive, with all kinds of special cases based on pruning (LMR, LMP, singular extensions, etc.), where a small bug can be introduced that testing doesn't reveal? I just posted one such bug in Crafty that did not fail any testing approach I used. I just happened to spot it looking at the code while trying to track down a different bug.

Debugging difficulty is directly related to the number of different pathways through the program, i.e. the reason for the "path coverage" paradigm. If you have 16 if() statements, you have 2^16 paths. I just checked this for Crafty's search code only (leaving out all the ancillary stuff like book code, initialization, benchmarking, parsing, command processing and such). There are 523 if() statements alone. This does not include hidden ifs like the ? operator. 2^523 is a pretty big set of inputs for the program. And in reality, even that is not enough. If you have if(c1 && c2 && c3), you need 8 test cases so that each of those three conditions takes on its 0/1 result in all combinations with the others. That is one reason I say testing won't solve the problem. And again, "testing will show the presence of bugs but not their absence" is based on exactly that problem. You might have the above if, where a good programmer will put the most likely false condition first to avoid the other two tests most of the time. By the time you get through the first tests, of which 99.999999999% of the time one or the other is false, the if is pretty much false. UNTIL the first two are true, and the last condition is malformed due to a bug. But testing doesn't catch it.
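
That if(c1 && c2 && c3) scenario is easy to sketch concretely. A minimal illustration (the condition names and depth thresholds are hypothetical, not Crafty's actual code), showing that only one of the eight truth-value combinations exposes a bug hiding in the last, rarely evaluated condition:

```python
from itertools import product

# Hypothetical pruning guard. Intended: prune quiet moves at depth > 2.
# Buggy version: the last condition was mistyped as depth > 20.
# Short-circuit evaluation means the depth test only ever runs when the
# first two conditions are already true.
def prune_intended(in_check, is_capture, depth):
    return (not in_check) and (not is_capture) and depth > 2

def prune_buggy(in_check, is_capture, depth):
    return (not in_check) and (not is_capture) and depth > 20  # the bug

# Enumerate all 8 combinations of the three conditions' truth values.
exposing = []
for in_check, is_capture, deep in product([True, False], repeat=3):
    depth = 10 if deep else 1  # depth=10: intended test true, buggy test false
    if prune_intended(in_check, is_capture, depth) != prune_buggy(in_check, is_capture, depth):
        exposing.append((in_check, is_capture, deep))

print(len(exposing))  # 1 -- only one of the 8 cases reveals the bug
```

A test suite that never happens to combine "not in check", "not a capture", and a middling depth in the same node passes cleanly, which is exactly the "testing doesn't catch it" point above.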

If you say something doesn't exist, you have an impossible time of proving that. At any point you MIGHT find something that disproves it, but until you search all possible cases, you can't say you have proved there are no bugs. If you say something does exist, you only need one example and you are done. You don't have to keep testing forever. Hence the "prove the existence of bugs, but not that there are none".

We can go round and round with this, but I am confident in all the people writing books and papers on software engineering, and feel their conclusions are spot on since they ALL agree - no program can be shown to be bug-free except for toy programs. If you believe you have written bug-free code (say > 1000 lines of fairly complex code) then that's your belief. If you believe testing can expose ALL bugs, then that is your belief. But a lot of technical folks disagree. You are the only person I have encountered that says that (a) bug-free code can be written and (b) testing can prove a program to be bug-free...

If you are happy in that rarefied atmosphere, good. I have spent too many years fixing bugs in every program I have written, and in every program I had to work on that I did not write. Nothing insulting here, just simple facts that are well-known in computer science / software engineering.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hash collisions

Post by bob »

Rebel wrote: Fri Feb 21, 2020 8:46 am It's all about the demand.

Compilers can't afford to produce buggy executables.

Financial programs can't afford flawed numbers.

NASA can't afford buggy software when they launch people into space.

Boeing can't afford buggy software and sometimes it happens anyway with devastating results.

Does bug-free software exist? Of course. Because that's the demand. But only after tireless testing.

Bug-free chess programs are not a good example because there is no demand. Nobody gets hurt by a bug.
Let's take a few of those.

I had a couple of students who worked at NASA. On the first space shuttle launch, they got down almost to liftoff but could not get the main pumps to turn on to "light the fire". The problem turned out to be in a parallel program: four separate computers had to initialize, run their sanity checks, and then the results were compared and had to match. They had matched thousands of times in testing. On the day of the first launch, they did not, and they would not through several resets. They found the bug, but it took some time. And if you know NASA, it had been tested almost exhaustively before "showtime". On Apollo 11, the flight control computer on the LEM crashed repeatedly during the descent to the moon. Neil Armstrong had the daunting task of landing with no nav computer data to help. Fortunately, he was up to the task. So NASA could not produce bug-free code.

GCC has produced buggy code many times. As has Apple's Clang, as has Microsoft's Visual C++. There are plenty of publicly available bug reports for all three. I found/fixed some in gcc when I started to use long long.

My local bank, a very large chain, had a software bug that hid certain transactions from the end user. They found/fixed it. There have been a few of those with credit cards, etc. Three weeks ago, my wife stopped by the bank to withdraw cash from the ATM. The thing took her card, processed the transaction, gave her a receipt and her card back. No cash. She checked the account from her phone and it showed the money had been withdrawn. She went inside; after a couple of hours in which they went out and counted the cash in the machine, they verified their total was off by the amount of her withdrawal, and it was fixed. No bugs?

We all know about Boeing here of late. They have found bug after bug after bug in their 737 MAX software. Which DID cause crashes. Which killed over 300 people. No bugs? Same in the US F-35 fighter jet. Hardly a good example you chose here.

This discussion has been about the claim that bug-free complex code is doable within a reasonable time-frame (i.e. maybe 1 month to 5 years, not 5,000 years). It doesn't matter how the bug will affect end users. For example, I would hope that heart pacemakers are thoroughly tested, since they use computers, Bluetooth, etc. Fortunately the code is actually small enough that it can be tested enough to be ALMOST 100% safe. That chess programs don't cause anybody to be hurt doesn't matter. The people who use one for analysis might disagree if the program gives them a bad move in a correspondence game and they lose because of it.

So again, all of your examples are bad because all are incorrect, per the examples given above. So back to square one. You keep claiming compilers are 100% correct. As someone who has worked on them many times, that is not just wrong, it is incredibly wrong. Bugzilla might open your eyes.
chrisw
Posts: 4346
Joined: Tue Apr 03, 2012 4:28 pm

Re: hash collisions

Post by chrisw »

bob wrote: Fri Feb 21, 2020 9:38 pm
Rebel wrote: Fri Feb 21, 2020 8:46 am It's all about the demand.

Compilers can't afford to produce buggy executables.

Financial programs can't afford flawed numbers.

NASA can't afford buggy software when they launch people into space.

Boeing can't afford buggy software and sometimes it happens anyway with devastating results.

Does bug-free software exist? Of course. Because that's the demand. But only after tireless testing.

Bug-free chess programs are not a good example because there is no demand. Nobody gets hurt by a bug.
Let's take a few of those.

I had a couple of students who worked at NASA. On the first space shuttle launch, they got down almost to liftoff but could not get the main pumps to turn on to "light the fire". The problem turned out to be in a parallel program: four separate computers had to initialize, run their sanity checks, and then the results were compared and had to match. They had matched thousands of times in testing. On the day of the first launch, they did not, and they would not through several resets. They found the bug, but it took some time. And if you know NASA, it had been tested almost exhaustively before "showtime". On Apollo 11, the flight control computer on the LEM crashed repeatedly during the descent to the moon. Neil Armstrong had the daunting task of landing with no nav computer data to help. Fortunately, he was up to the task. So NASA could not produce bug-free code.

GCC has produced buggy code many times. As has Apple's Clang, as has Microsoft's Visual C++. There are plenty of publicly available bug reports for all three. I found/fixed some in gcc when I started to use long long.

My local bank, a very large chain, had a software bug that hid certain transactions from the end user. They found/fixed it. There have been a few of those with credit cards, etc. Three weeks ago, my wife stopped by the bank to withdraw cash from the ATM. The thing took her card, processed the transaction, gave her a receipt and her card back. No cash. She checked the account from her phone and it showed the money had been withdrawn. She went inside; after a couple of hours in which they went out and counted the cash in the machine, they verified their total was off by the amount of her withdrawal, and it was fixed. No bugs?

We all know about Boeing here of late. They have found bug after bug after bug in their 737 MAX software. Which DID cause crashes. Which killed over 300 people. No bugs? Same in the US F-35 fighter jet. Hardly a good example you chose here.

This discussion has been about the claim that bug-free complex code is doable within a reasonable time-frame (i.e. maybe 1 month to 5 years, not 5,000 years). It doesn't matter how the bug will affect end users. For example, I would hope that heart pacemakers are thoroughly tested, since they use computers, Bluetooth, etc. Fortunately the code is actually small enough that it can be tested enough to be ALMOST 100% safe. That chess programs don't cause anybody to be hurt doesn't matter. The people who use one for analysis might disagree if the program gives them a bad move in a correspondence game and they lose because of it.

So again, all of your examples are bad because all are incorrect, per the examples given above. So back to square one. You keep claiming compilers are 100% correct. As someone who has worked on them many times, that is not just wrong, it is incredibly wrong. Bugzilla might open your eyes.
Those anecdotes prove nothing. One anecdote about your wife at an ATM doesn’t scale to all financial software products. Maybe there was a paper jam. Maybe the bank staff thought it better not to have an argument, so they shelled out. I got shelled out once by my bank when I told them my cash withdrawal was 20 GBP down; they didn’t count anything, they just said okay, we believe you, and gave me 20.
Banks actually seem pretty robust. Clearly there are risks in the operations of capital markets, and risk is built into fractional-reserve banking, but adding up all the electrons seems the least of their problems.
Appeals to authority don’t prove anything either. Authority is often wrong, and step progress wouldn’t be possible if authority were always right, as Einstein showed, and Newton before him, just for starters.
What you haven’t done is demonstrate that chess engines have crossed over into a level of complexity where there will always be bugs. Squaring the number of if statements or whatever it was and generating some large number smacks of desperation, I have to say. Chess is a small bounded system with fixed rules. You can test everything; it can be done. Didn’t think I’d be teaching an American about can-do.
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: hash collisions

Post by Ovyron »

Haha, someone challenged me to find the fastest mate in a position and I've been working on that for weeks.

One thing I would say is that ensuring a program is bug-free is like ensuring that in a position you have the fastest mate. It's always possible that some attacking move you haven't checked yet mates faster, so you can't be sure you have it until you exhaust all attacks, just as you can't be sure of bug-freedom.
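
The analogy can be made concrete with a toy search over made-up candidate moves (the move names and mate distances below are invented for illustration): stopping at the first mate found proves a mate exists, but only exhausting every move certifies the fastest one.

```python
# Candidate moves in some position, mapped to plies-to-mate (None = no mate).
# Hypothetical data: the fastest mate happens to sit behind the last move tried.
CANDIDATES = {"Qh5+": 3, "Rg1": None, "Nf7#": 1}

def first_mate_found(moves):
    """Stop at the first mating line -- cheap, but proves only existence."""
    for move, plies in moves.items():
        if plies is not None:
            return move, plies
    return None

def fastest_mate(moves):
    """Certifying the *fastest* mate requires examining every candidate."""
    mates = {m: p for m, p in moves.items() if p is not None}
    return min(mates.items(), key=lambda kv: kv[1])

print(first_mate_found(CANDIDATES))  # ('Qh5+', 3)
print(fastest_mate(CANDIDATES))      # ('Nf7#', 1)
```

Swapping "mate" for "bug" gives the same shape: one counterexample settles existence, while a claim of absence (or of minimality) stands only after the whole space is examined.
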
Rebel
Posts: 7025
Joined: Thu Aug 18, 2011 12:04 pm
Full name: Ed Schröder

Re: hash collisions

Post by Rebel »

1. Waving your credentials around among your peers is comparable to Neil deGrasse Tyson (to name someone) doing so in the presence of giants like Richard Dawkins, Lawrence Krauss, etc. Pointless and patronizing.

2. Unless I misunderstand, you claim it's impossible to write 100% bug-free code. I think you are wrong.

3. The compiler I use has a few glitches in the interface; it has never produced faulty code (executables).
90% of coding is debugging, the other 10% is writing bugs.
chrisw
Posts: 4346
Joined: Tue Apr 03, 2012 4:28 pm

Re: hash collisions

Post by chrisw »

bob wrote: Fri Feb 21, 2020 9:27 pm
chrisw wrote: Fri Feb 21, 2020 1:11 pm
bob wrote: Fri Feb 21, 2020 3:46 am Now that was pretty rambling and uninformative. What bad thing has happened in this thread? There is zero doubt that no software is bug-free; too many giants in computer science harp on this. There is little disagreement that bugs that produce execution issues (crashes, hangs, bad/illegal moves, etc.) should be fixed. There is some debate about other small kinds of accepted bugs. One simple example is pseudo-legal move generation: you may search one illegal position for every pseudo-legal move that turns out not to be legal. Is that seriously different from what HGM wrote? NEITHER causes any bad move to be displayed, nor any crash, etc.

Why we have to argue about actual facts that are well-known is food for thought. But of course, feel free to show me any computer science authority who says large programs can be written 100% bug-free. THEN there would be room for debate. I've not seen any, but that doesn't mean there are none. Just as not seeing any evidence of bugs doesn't mean there are none. (Sound familiar?)
A partial search of a haystack for needles proves neither that there are no more needles nor that there are more needles.

You are using only one half of the haystack/needle logic because of a BELIEF that there are inevitably more needles. In other words, a circular argument: the premise is the conclusion.

I say two, er three or four, things:

1. It is possible, in a defined system such as chess, to design sufficient destructive tests, including internal self-verification, and, after fifty years' history of the hobby, to find everything that could go wrong. And, for those things that are inclined to go wrong, to build in robust defences.

2. Of extreme importance is the design team's or programmer's ATTITUDE to bugs. Programmers who think bugs are inevitable make a lot less effort at being bug-free and tend to leave fixing (or, often, arguing it's somebody else's fault) to a case-by-case basis in the future.

3. The belief in the inevitable, infinite nature of bugs in computer chess is also predicated on the notion that computer chess is highly complex. I argue that this notion is self-serving: it suits us to believe we are struggling with and conquering complex domains. Hey, some people even build their identities around this notion of themselves and the degree of the complexity. I posit the opposite: computer chess programming is fairly straightforward, bounded, and made easier year on year.
I will repeat this again: "do some reading". I taught software engineering for YEARS.
Well, I’ve been supervising programmers for years, getting them to do things they argued were impossible, again and again. I started programming in 1968, which was to help academics get up and running on the maths department's newly acquired ancient computer. I’ve done a lot of other things too: written more programs on more systems across the range of languages than I can imagine, got out more products than I can imagine, seen more than I can imagine from more angles than I can imagine, and I still realise that’s only 0.00001% of what there is or what I need, and probably much of that is wrong. And brought up five children who are all already way past me, several outstandingly so, taught, done business, kept farm animals, done a bit of politics, been elected (unbelievably), and done life stuff and and and. You’re not the only expert; so are many others, and you know something about experts? They all disagree with each other whilst all believing that what they know is the only truth, so allow me to take your pontifications with some grains of salt. And allow me also to inform you that you only know about 0.000001%, and much of that is going to be wrong too. You see, you subscribe to the inevitable-bug theory and assume you’re right about everything. I subscribe to the theory that bugs can be eliminated in systems of low complexity like chess, and assume everybody is wrong about everything, me included. You’re just wronger than me, mainly because you’re so ridiculously convinced you’re right about everything. One more point, then I’ll shut up, since this is all kind of fruitless counting of angels on the head of a pin.

Good texts cover all the bases: various design methodologies, team organization and who does what, verification/testing, on and on and on. And EVERY book points out that NO methodology will produce bug-free code. They COULD, given sufficient time, but sufficient time would be measured in decades, not the weeks or months required in real software development. So (a) a bug-free program could theoretically be written; (b) significant testing could eliminate ALL bugs in a program; (c) neither of these has any significant probability of happening. Unfortunately. What do you do when you design a parallel algorithm, it works flawlessly for years, then suddenly starts to fail with high regularity? You investigate, and discover that your dependence on sequential writes to memory is now violated by out-of-order writes on modern processors.
That would be like saying my old British car had a bug when I brought it to France, because the steering wheel was on the wrong side.
It’s not a bug if the hardware develops/changes under you. Before the problem, you were bug-free. Like my car.

Enough. You are way too deeply personally invested in the idea that a chess program is way more complex than it actually is.

Should you have thought of that when designing/coding? That is just one tiny example (which did bite some of us, by the way). There are many bugs that are not caused directly by broken programming, but instead by unexpected behavior as hardware changes. If it crashes or fails, it is a bug. Cause is irrelevant.

One can check input exhaustively (here "exhaustively" means with much effort, not testing every possible input). And if you are careful, you can guarantee that the input won't cause your program to crash or misbehave. But such code is very small. What about a search, which is recursive, with all kinds of special cases based on pruning (LMR, LMP, singular extensions, etc.), where a small bug can be introduced that testing doesn't reveal? I just posted one such bug in Crafty that did not fail any testing approach I used. I just happened to spot it looking at the code while trying to track down a different bug.

Debugging difficulty is directly related to the number of different pathways through the program, i.e. the reason for the "path coverage" paradigm. If you have 16 if() statements, you have 2^16 paths. I just checked this for Crafty's search code only (leaving out all the ancillary stuff like book code, initialization, benchmarking, parsing, command processing and such). There are 523 if() statements alone. This does not include hidden ifs like the ? operator. 2^523 is a pretty big set of inputs for the program. And in reality, even that is not enough. If you have if(c1 && c2 && c3), you need 8 test cases so that each of those three conditions takes on its 0/1 result in all combinations with the others. That is one reason I say testing won't solve the problem. And again, "testing will show the presence of bugs but not their absence" is based on exactly that problem. You might have the above if, where a good programmer will put the most likely false condition first to avoid the other two tests most of the time. By the time you get through the first tests, of which 99.999999999% of the time one or the other is false, the if is pretty much false. UNTIL the first two are true, and the last condition is malformed due to a bug. But testing doesn't catch it.

If you say something doesn't exist, you have an impossible time of proving that. At any point you MIGHT find something that disproves it, but until you search all possible cases, you can't say you have proved there are no bugs. If you say something does exist, you only need one example and you are done. You don't have to keep testing forever. Hence the "prove the existence of bugs, but not that there are none".

We can go round and round with this, but I am confident in all the people writing books and papers on software engineering, and feel their conclusions are spot on since they ALL agree - no program can be shown to be bug-free except for toy programs. If you believe you have written bug-free code (say > 1000 lines of fairly complex code) then that's your belief. If you believe testing can expose ALL bugs, then that is your belief. But a lot of technical folks disagree. You are the only person I have encountered that says that (a) bug-free code can be written and (b) testing can prove a program to be bug-free...

If you are happy in that rarefied atmosphere, good. I have spent too many years fixing bugs in every program I have written, and in every program I had to work on that I did not write. Nothing insulting here, just simple facts that are well-known in computer science / software engineering.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hash collisions

Post by bob »

chrisw wrote: Fri Feb 21, 2020 10:44 pm
bob wrote: Fri Feb 21, 2020 9:38 pm
Rebel wrote: Fri Feb 21, 2020 8:46 am It's all about the demand.

Compilers can't afford to produce buggy executables.

Financial programs can't afford flawed numbers.

NASA can't afford buggy software when they launch people into space.

Boeing can't afford buggy software and sometimes it happens anyway with devastating results.

Does bug-free software exist? Of course. Because that's the demand. But only after tireless testing.

Bug-free chess programs are not a good example because there is no demand. Nobody gets hurt by a bug.
Let's take a few of those.

I had a couple of students who worked at NASA. On the first space shuttle launch, they got down almost to liftoff but could not get the main pumps to turn on to "light the fire". The problem turned out to be in a parallel program: four separate computers had to initialize, run their sanity checks, and then the results were compared and had to match. They had matched thousands of times in testing. On the day of the first launch, they did not, and they would not through several resets. They found the bug, but it took some time. And if you know NASA, it had been tested almost exhaustively before "showtime". On Apollo 11, the flight control computer on the LEM crashed repeatedly during the descent to the moon. Neil Armstrong had the daunting task of landing with no nav computer data to help. Fortunately, he was up to the task. So NASA could not produce bug-free code.

GCC has produced buggy code many times. As has Apple's Clang, as has Microsoft's Visual C++. There are plenty of publicly available bug reports for all three. I found/fixed some in gcc when I started to use long long.

My local bank, a very large chain, had a software bug that hid certain transactions from the end user. They found/fixed it. There have been a few of those with credit cards, etc. Three weeks ago, my wife stopped by the bank to withdraw cash from the ATM. The thing took her card, processed the transaction, gave her a receipt and her card back. No cash. She checked the account from her phone and it showed the money had been withdrawn. She went inside; after a couple of hours in which they went out and counted the cash in the machine, they verified their total was off by the amount of her withdrawal, and it was fixed. No bugs?

We all know about Boeing here of late. They have found bug after bug after bug in their 737 MAX software. Which DID cause crashes. Which killed over 300 people. No bugs? Same in the US F-35 fighter jet. Hardly a good example you chose here.

This discussion has been about the claim that bug-free complex code is doable within a reasonable time-frame (i.e. maybe 1 month to 5 years, not 5,000 years). It doesn't matter how the bug will affect end users. For example, I would hope that heart pacemakers are thoroughly tested, since they use computers, Bluetooth, etc. Fortunately the code is actually small enough that it can be tested enough to be ALMOST 100% safe. That chess programs don't cause anybody to be hurt doesn't matter. The people who use one for analysis might disagree if the program gives them a bad move in a correspondence game and they lose because of it.

So again, all of your examples are bad because all are incorrect, per the examples given above. So back to square one. You keep claiming compilers are 100% correct. As someone who has worked on them many times, that is not just wrong, it is incredibly wrong. Bugzilla might open your eyes.
Those anecdotes prove nothing. One anecdote about your wife at an ATM doesn’t scale to all financial software products. Maybe there was a paper jam. Maybe the bank staff thought it better not to have an argument, so they shelled out. I got shelled out once by my bank when I told them my cash withdrawal was 20 GBP down; they didn’t count anything, they just said okay, we believe you, and gave me 20.
Banks actually seem pretty robust. Clearly there are risks in the operations of capital markets, and risk is built into fractional-reserve banking, but adding up all the electrons seems the least of their problems.
Appeals to authority don’t prove anything either. Authority is often wrong, and step progress wouldn’t be possible if authority were always right, as Einstein showed, and Newton before him, just for starters.
What you haven’t done is demonstrate that chess engines have crossed over into a level of complexity where there will always be bugs. Squaring the number of if statements or whatever it was and generating some large number smacks of desperation, I have to say. Chess is a small bounded system with fixed rules. You can test everything; it can be done. Didn’t think I’d be teaching an American about can-do.
Again you miss the point. Someone says "there is no example of this... (whatever this is)." ALL it takes is one example where it does exist to make that statement false. I gave an example of a serious failure for every specific software package he mentioned. Hence NONE of the examples were valid, because ALL of them had exhibited failures. How many banks have had their systems hacked, account numbers and credit card numbers copied? Isn't writing software that doesn't provide the level of security claimed a software bug? Certainly seems so to me.

Why is it always MY responsibility to prove something I claim when YOU provide zero proof of any kind? In the case of Crafty, with 2^523 possible paths through the main engine, I would certainly claim THAT can not be adequately tested. Software engineering 101. BTW that is not "squaring the number of if statements." This is simple math. If you have one if, there are two paths, one with c1 true and one with c1 false. If you have two if statements, you have 4 possibilities: c1 false and c2 false; c1 false, c2 true... obviously 2 ^ (number of if statements). And again, this is something discussed in ANY software engineering book. So how is quoting a specific and well-known number "desperation"? If we were discussing the area of a circle, would mentioning Pi be desperation? That statement makes absolutely zero sense.

BTW, you are not "teaching an American about can-do." You just had a software failure over there: look up "Heathrow software glitch" for details. NO software is immune to bugs. None at all.

Finally "anecdote": a short amusing or interesting story about a real incident or person. The things I quoted were ALL "real events" although I doubt any victim families of the boeing software glitch would find any of this amusing.

This is a simple concept. To prove something exists only requires ONE example that shows it. No more. To prove something doesn't exist requires that ALL possible things are examined and no example is found. A BIG difference in effort. Hence testing finds bugs, but says nothing about their absence.

Is this REALLY that hard to understand? If the two of you give examples that I can easily refute, am I the one that is at fault? It certainly seems so.

I gave a formula: E = 2 ^ (number of if statements). I don't have any particular number for the lower bound on E that guarantees no bugs, but it would not be a very large number. Just 32 if-tests give 4 billion possible paths, and 4 billion test cases is already at the limit of what is doable.

The last SE book I used gave a pretty interesting example. IBM used Rational Rose (a software development environment) to develop a program that had an amazingly low number of bugs detected after distribution to end users. And they had stacked the deck by looking at ALL IBM employees and choosing the best of the best to work on the project. So the very best programmers, using a high-quality software development environment, could NOT produce a bug-free program. If they could not, who could? The programmers you were using? :) Please.

If there were no bugs in chess programs, all we would see would be version 10, then version 11, etc. No 10.1, 10.1.1, 10.1.2. On my Apple Watch we are up to 6.1.3; on my iPhone, we are up to version 13.3.1; on my MacBook, we are up to macOS version 10.15.3. The very idea of no bugs is refuted by evidence everywhere you look. Just because software doesn't crash does not mean it has no bugs. Just because it doesn't produce an illegal move doesn't mean it has no bugs. Only one thing will prove the absence of bugs: complete testing of all paths with all possible data values. That is computationally intractable, and getting worse each year as software progresses.

And even software reuse, partially the result of object-oriented programming, has produced its own set of amusing quotes by famous people: "For software reuse to work, the software must be usable first." I've given up on that idea. I still occasionally find a glitch in the C library, which is a sort of old version of reuse. I don't want to inherit bugs in code that I didn't write, because those bugs are much harder to find and fix if I don't understand the code I didn't write.

Need I say more? Ed gives examples; I give each example of his a refutation that anyone can confirm. I gave you a formula for path testing, which is the approach that comes closest to eliminating all bugs. You want some magic number N which says software below N can be bug-free and software above N cannot. Not exactly easy to define N, and nobody has tried. We could probably test 4 billion test sets realistically. That covers a program with just 32 if statements, which doesn't sound very realistic. Just my Search() code has 84 if statements, and 2^84 is not possible. Evaluate() has almost 200 if statements; 2^200 is way out there.

A single day has 86,400 seconds. A year has about 32M seconds. 64 if conditions give basically 10^19 pathways. Doing 1 test per second would take 576 billion years; 1,000 tests per second cuts that to 576 million years; one test per microsecond cuts it to 576 thousand years. And this is just for 64 if statements. So, how many if statements does it take for you to say "this is complex code"? 64? Sounds pretty simple to me, but from the above, completely impossible to exhaustively test to prove the absence of bugs. To get a reasonable time, the number of if statements probably needs to drop down to 32 or so, which gives us 4 billion test cases, which is doable. I am not going to try to pick a number N here because it seems completely irrelevant. Anything > 32 is computationally challenging. Anything > 64 is computationally intractable. Anything > 128 is computationally impossible for all time.
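The arithmetic in the paragraph above checks out; here is a quick sanity-check sketch (the function name is my own, and it assumes the post's round figure of roughly 32 million seconds per year):

```python
# Back-of-envelope check of the testing-time figures from the post.
SECONDS_PER_YEAR = 32_000_000  # the post's round approximation

def years_to_exhaust(n_ifs, tests_per_second):
    """Years needed to run one test per path through n_ifs if statements."""
    return 2 ** n_ifs / (tests_per_second * SECONDS_PER_YEAR)

print(years_to_exhaust(64, 1))          # ~5.8e11 years (576 billion)
print(years_to_exhaust(64, 1_000))      # ~5.8e8 years  (576 million)
print(years_to_exhaust(64, 1_000_000))  # ~5.8e5 years  (576 thousand)
print(years_to_exhaust(32, 1_000_000))  # well under a year: doable
```

At 32 if statements and a million tests per second, exhaustive path coverage takes about an hour of wall-clock seconds; at 64, even a microsecond per test leaves you hundreds of thousands of years short.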

All you have on your side is a feeling. On my side I have thousands of people who understand software development / engineering saying that bug-free is a concept that will never be reached, backed up by billions of lines of code with plenty of bugs. A commonly used number is one bug for every 1K lines of code in RELEASED software, i.e. software that has already been tested to death. And the fly in that ointment is that this is the DETECTED number of bugs, not the actual total. A serious problem exists. And it can't be waved away with "a foreign manager that waved his hands and eliminated all bugs."
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hash collisions

Post by bob »

Aw hell. Bad example. My "pacemaker" comment suggested they might be close to being bug-free. Then I found this: Mar 19, 2019 - Medtronic Recalls Over 150,000 Pacemakers for Software Problem. And this: Aug 31, 2017 - Almost half a million pacemakers have been recalled by the US Food and Drug ... we're proactively addressing common topics to further advance the security. That was, therefore, a sucky example. But it simply bolsters my claim about bug-free software. We have airplanes crashing due to software bugs and people dying from faulty pacemakers. There is no "clean" field of software engineering that I can identify.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: hash collisions

Post by bob »

Rebel wrote: Fri Feb 21, 2020 11:04 pm 1. Waving your credentials among your peers is comparable to Neil deGrasse Tyson (to name someone) doing so in the presence of giants like Richard Dawkins, Lawrence Krauss, etc. Pointless and patronizing.

2. Unless I misunderstand, you pretend it's impossible to write 100% bug-free code. I think you are wrong.

3. The compiler I use has a few glitches in the interface, it never produced faulty code (executables).

1) I am not waving MY credentials. I am waving the credentials of folks including one from your own backyard, Dijkstra.

2) You understand correctly, except that (a) I don't pretend — I know, because I actually read/study what software engineers discover and write about, and I believe what they say since they ALL say the same thing; and (b) it is not "I" but all of those active in software engineering who are making this claim.

3) So you use little of the C compiler (i.e., no OOP, no templates, nothing like a function that calls a function that returns a pointer to a function). That is, you use the simplest parts of the compiler, which WERE probably tested pretty well, and claim that shows there are no bugs in the compiler you are using? Think about it. I don't use anywhere near all of the basic C language, yet I have encountered a number of compiler bugs where a production compiler produced a broken executable.