bob wrote: I am just following standard software development practice, which says to spend time optimizing the parts of the program that consume the most compute cycles, _not_ the ones that consume fractions of a percentage point. This has always been, and will always be, the sane way to optimize programs...
That is only a relevant criterion insofar as the differences in execution time are caused by the various parts being executed a different number of times. Not if they are executed equally often, and the time difference is caused by one code section being much larger.
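(E.g. two routines each called a million times in a one-second search: one costing 50 ns per call shows up as ~5% in the profile, one costing 500 ns per call as ~50%, purely because the latter does ten times more work per call, not because it is called more often. The numbers are only illustrative, of course.)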
I have no idea why you don't get that.
Could the fact that it is not true perhaps have something to do with it?
Fortunately, most of the rest of the programming world does. Feel free to show me a quote from any programming book that would suggest rewriting and debugging a procedure that takes 0.5% of the total execution time, when there are other procedures that take 30% and up...
Ah, so for you wisdom comes from books? For me it flows from brains. That might also explain the difference.
The "regardless of the investment" does make one assumption, that you want to improve your program the most, for the least effort.
Seems to me that that is exactly the assumption it does _not_ make. If you don't equate 'investment' to 'effort', then what does 'investment' mean for you? "least effort" puts conditions on the effort, while "regardless of effort" dismisses any such conditions there might be. But feel free to explain how your understanding of the English language differs from mine...
If that is not your goal, then the conversation has taken a twist I am not interested in. Chances are good that a couple of hours spent rewriting and debugging a 0.5% piece of code would pay off far more if spent modifying a 50% piece of code instead. It is just common sense (to me at least)...
"Chances are"... But prudent people don't leave such things to chance. They apply their effort where the _know_ it will pay off most! If that is not common, too bad. But it definitely makes more sense.
It does not require a calculus degree to see that starting with a near-infinitely massive job, just because it gives slightly bigger returns than an easy job, is a self-defeating strategy, which slows down your engine development immensely. So I am not going to argue this point any further.
I would hope not, as the advice I gave has been given for years now in books discussing optimization principles. You can go off into never-never-land whenever you want, ...
The more relevant question is: can you get out of it?
but advising anyone to spend time working on a 1/2 percent usage module over one that is much larger is just silly. And poor advice. No matter how you cut it. Everyone understands Amdahl's Law. And almost everyone uses it in making their decisions on what to optimize and what to leave as is. If you don't want to follow that, that's fine. But it is the _right_ way to improve a program's speed, rather than trying to speed up parts that won't affect the final time at all...
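Just to put the formula on the table (the numbers are only illustrative): if a part takes fraction p of the total time and you speed that part up by a factor s, the whole program speeds up by

    speedup = 1 / ((1 - p) + p / s)

    p = 0.005, s -> infinity:  1 / 0.995          ~ 1.005  (at best ~0.5% faster overall)
    p = 0.50,  s = 1.25:       1 / (0.50 + 0.40)  ~ 1.11   (~11% faster overall)

That is all Amdahl's Law says, and it is why the big consumers come first.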
It of course depends a bit on your definition of 'right'. Apparently you consider it 'right' because it is in books. But I judge it merely on the basis of how much speed improvement I get per man-hour of invested programming work. And by that standard it seems very _wrong_...
Even if you speed it up an infinite amount, at a cost of only one hour total, that is still an hour that was wasted as you can't measure a 0.5% speed improvement in terms of increased playing strength, when compared to a small improvement in a 50% module (my evaluation).
Yes, but that of course is due to the fact that your evaluation is not quantitative, and little more than gut feeling. If the 'small improvement' in the 50% module reduced its profile time from 50% to 49.5%, and it had taken you 2 hours to implement it, you would just have wasted an hour compared to the one who went for the infinite improvement of the 0.5%. Only the ratio speedGained / timeSpent determines what is smarter to do. In themselves the quantities mean _nothing_.
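To put numbers on your own example (using the same Amdahl formula quoted above), with T the total search time before either change:

    0.5% module sped up 'infinitely' (1 hour of work):  new time = 0.995 T  ->  ~0.5% gained  ->  ~0.50% per hour
    50% module trimmed to 49.5%      (2 hours of work):  new time = 0.995 T  ->  ~0.5% gained  ->  ~0.25% per hour

Both buy exactly the same total speedup, but the 'silly' 0.5% job buys it at twice the rate per hour of work. (Illustrative numbers, of course.)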
Why do you try to take indefensible positions like this and carry on an argument that is foolish to anyone that actually writes software???
The question is more why _you_ do that...
As for the SMP stuff: Joker is not SMP, but this discussion came up here before. The conclusion then was that copying data structures on thread start-up was not really a competitive method, and that it is more efficient to have each thread update its own data structures based on a short list of moves passed to it. And then the issue becomes just the same as for updating during the normal search.
They are not the same at all. I only need to copy a small fraction of the repetition list. I need to duplicate the entire repetition hash table for each thread that searches at any split point. It isn't free. And since it is hardly clear that the hashing approach is any faster at all, why waste time rewriting something that will hurt when the SMP search is developed, and there is no tangible return in terms of speed either???
Why do you need to duplicate the entire repetition hash table at every split point? As explained above, I certainly would not do that. I would only duplicate it at the root (together with the board and piece list), and then differentially update the lot by only entering into it the positions on the branch between the root and the split point. There are plenty of cases where this would involve fewer entries than you would have to copy. (Namely when the branch has no irreversible moves in it, and the game history also ended with a large number of reversible moves.)
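Roughly what I have in mind, as an untested, purely illustrative C sketch. The names (RepTable, PathEntry, rep_attach, rep_detach, REP_SIZE, ...) are made up for the example and are not Joker's or Crafty's actual code. Each helper thread owns one private table, filled from the game history once at thread start-up; attaching to a split point then only enters the positions played since the last irreversible move on the root-to-split branch, and detaching removes them again, so the table is never copied wholesale per split:

    #include <stdint.h>

    #define REP_BITS 12
    #define REP_SIZE (1 << REP_BITS)

    /* 0 = empty slot (assumes no position ever hashes to key 0) */
    typedef struct { uint64_t key[REP_SIZE]; } RepTable;

    typedef struct {
        uint64_t posKey;        /* Zobrist key after the move                  */
        int      irreversible;  /* capture or pawn move: resets repetition run */
    } PathEntry;

    static void rep_enter(RepTable *t, uint64_t key)
    {
        unsigned i = (unsigned)key & (REP_SIZE - 1);
        while (t->key[i]) i = (i + 1) & (REP_SIZE - 1);   /* linear probing */
        t->key[i] = key;
    }

    /* Removing only the most recently entered keys never breaks the probe
       chains of the older (game-history) entries, so a plain clear is safe. */
    static void rep_remove(RepTable *t, uint64_t key)
    {
        unsigned i = (unsigned)key & (REP_SIZE - 1);
        while (t->key[i] != key) i = (i + 1) & (REP_SIZE - 1);
        t->key[i] = 0;
    }

    static int rep_seen(const RepTable *t, uint64_t key)  /* repetition? */
    {
        unsigned i = (unsigned)key & (REP_SIZE - 1);
        while (t->key[i]) {
            if (t->key[i] == key) return 1;
            i = (i + 1) & (REP_SIZE - 1);
        }
        return 0;
    }

    /* Index where the reversible tail of the root->split branch starts. */
    static int rep_tail(const PathEntry *path, int len)
    {
        int i, start = 0;
        for (i = 0; i < len; i++)
            if (path[i].irreversible) start = i + 1;
        return start;
    }

    /* Differential update when a helper thread attaches to a split point:
       enter only the positions played since the last irreversible move.   */
    static void rep_attach(RepTable *t, const PathEntry *path, int len)
    {
        int i;
        for (i = rep_tail(path, len); i < len; i++)
            rep_enter(t, path[i].posKey);
    }

    /* Undo on detaching, leaving only the game-history entries behind. */
    static void rep_detach(RepTable *t, const PathEntry *path, int len)
    {
        int i;
        for (i = rep_tail(path, len); i < len; i++)
            rep_remove(t, path[i].posKey);
    }

So the per-split cost is only the length of the reversible tail of the branch, which is often shorter than the fraction of a linear repetition list you would copy instead.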