Don wrote:
Daniel Shawul wrote:
Don wrote:
Daniel Shawul wrote:
I think in computer chess a lot of different things were re-invented and tried by various people. If you have a "new" idea of your own, it is likely somebody else had it before, but of course there is the matter of implementation nuances and how things interact. It is often difficult to credit whoever first published something in a paper or forum post.
In Rexchess we "invented" pruning based on beta, similar to null move pruning, but we obtained the bound by looking at what was directly attacked. We never told anyone about this. We did 2 or 3 plies of this at the end of the search and it was our secret weapon. However, I think others may already have done something similar, and null move pruning was being discovered around this time, although it was not generally known about and we did not know about it then.
There are a bunch of ideas that are common knowledge or have been done for years, but nobody believes in them or takes them seriously until they show up in a really strong program.
I agree with you that it is difficult. Re-inventions are usually mentioned in the literature when they spark the popularity
of an existing algorithm with some modifications. The case we have at hand, between Levy's LMR and the original HP, has
significant differences IMO: the use of history data, and the absence (or weak use) of the lateness-of-move property in the
original HP implementation, make it a different algorithm, or at least earn it re-invention status, because
Levy's LMR was not working in the past. Shannon type B
You'll never know exactly what commercial programs do, since their programmers rarely publish concrete information. That is what is remarkable about the Levy et al. 1989 paper: a contribution from commercial programmers.
It is too bad they don't publish often, but we should also not assume there are too many new ideas
in their programs. In the other physical sciences I know of, the software developers for simulation
usually rely heavily on basic research done elsewhere. Good programmers do the programming better than anyone else.
This applies both to capturing the physics of the problem and to inventing the algorithms for simulation.
I believe that means you used SEE to look for threats at a possible fail high node.
No, we did not use SEE because it is not global. We looked for ALL threats, and we did not do a detailed analysis beyond a superficial check: a rook attacking a defended pawn was not considered a threat, but we only covered a few superficial cases that were fast and cheap. What we were interested in knowing was whether ANY piece might be profitably attacked and what that piece was. We did not try to analyze the exact swap-off value of the attacked piece because that is unreliable anyway. I think we took 5/8 of the value of the attacked piece, since there is almost always a way to recover SOME of your material. For example, if a queen is attacked by a bishop, the queen will very likely go down kicking and screaming and get something in return (certainly it will get the bishop attacking it).
This is stronger than using eval() alone like FHR does, but weaker than using qsearch or
reduced depth search like null move does.
We have found that this is superior to null move if it's limited to the last 2 plies of the search. At 3 plies and above, null move is superior. The routine in Komodo does see(), but on every piece that might be attacked. It's like a really cheap null move, and superior to just using big margins.
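A minimal sketch of the kind of margin test Don describes, in C. All names and the centipawn values are my own assumptions, not Rexchess's or Komodo's actual code: take 5/8 of the most valuable attacked piece and prune in the last 2 plies only if the static eval still clears beta after conceding that margin.

```c
#include <stdbool.h>

/* 5/8 of the attacked piece's value: the side to move usually recovers
   SOME material when a big piece is chased, so conceding the full value
   would be too pessimistic a margin. */
static int threat_margin(int attacked_piece_value) {
    return (attacked_piece_value * 5) / 8;
}

/* Prune only in the last 2 plies; deeper than that, null move wins.
   biggest_attacked_value would come from a cheap global threat scan
   (or a per-piece see(), as Don says Komodo does). */
static bool prune_on_threats(int depth, int static_eval, int beta,
                             int biggest_attacked_value) {
    if (depth > 2)
        return false;
    return static_eval - threat_margin(biggest_attacked_value) >= beta;
}
```

With a queen (900 here) hanging, the margin is 562 centipawns, so only a large eval surplus survives the test; with nothing attacked, the margin is zero and the test degenerates to plain static-eval pruning.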
Many variations of these prunings/reductions are
possible by combining items from the list I made in my other post.
This is also a variation of the null move observation (NMO), as most of the pruning done
at fail-high nodes is. Fail-high reductions, standard null move, and stand pat in qsearch all use the NMO.
I think (but am not sure) that nobody understood this until null move pruning made it embarrassingly obvious
by doing a null move literally.
Daniel
That is also what I had in mind: a global board swap. I tried it about 7 years ago, when
I had attack tables computed in eval, as described on Ed Schröder's page. Using those, SEE
can be pre-computed and used cheaply in a lot of places. So I used: if eval() minus a factored value of our highest en prise piece is
greater than beta in the last 2 plies, then prune. It worked back then.
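That condition might look like the following sketch, assuming the attack tables have already flagged which of our pieces are en prise (all names are hypothetical, and the num/den factor is left as a parameter since the actual factor would be tuned):

```c
#include <stdbool.h>

/* Pick out the most valuable of our pieces that the precomputed attack
   tables have flagged as en prise (hanging[i] is assumed precomputed,
   as in the Schröder-style attack-table scheme). */
static int highest_en_prise(const int *values, const bool *hanging, int n) {
    int best = 0;
    for (int i = 0; i < n; i++)
        if (hanging[i] && values[i] > best)
            best = values[i];
    return best;
}

/* The frontier condition: prune in the last 2 plies when eval minus a
   factored value of the biggest en prise piece still exceeds beta. */
static bool en_prise_prune(int depth, int eval, int beta,
                           int en_prise_value, int num, int den) {
    if (depth > 2)
        return false;
    return eval - (en_prise_value * num) / den > beta;
}
```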
2nd option:
I remember Glaurung also used to do a global board swap, not in eval, for that purpose. That seems expensive, though, if it is used only in the last 2 plies.
I had it as a side effect of using attack tables, but with the associated cost of a slower eval.
3rd option:
I also tried to be inventive and use standard null move in the last 2 plies, but with _lazy eval_ only. This is like doing
a safer board swap, since only material was considered in eval. I thought that since my eval was bulky, it would be a good compromise to use a lighter-weight eval in the last two plies and
be safer by using null move. The result from null move in the last two plies was not stored in the TT, to avoid contaminating it with lazy-eval scores.
This worked for me while I had a bulky eval. Now I do a full eval there too, since I removed those bulky attack tables.
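A rough sketch of that compromise, with toy types of my own invention (the material/positional split and all names are assumptions, not Daniel's code). At the frontier the null-move search bottoms out in the static eval anyway, so substituting the cheap material-only term there gives a fast but still null-move-safe test:

```c
#include <stdbool.h>

typedef struct {
    int material;    /* cheap, incrementally updated term */
    int positional;  /* the bulky part of the evaluation */
} ToyEval;

static int full_eval(const ToyEval *e) { return e->material + e->positional; }
static int lazy_eval(const ToyEval *e) { return e->material; }

/* Frontier-only null move test with the light eval: after "passing",
   the reduced search collapses to a static score, approximated here by
   the material-only term. In Daniel's scheme a fail high from this test
   is deliberately NOT stored in the TT, so the lazy score never leaks
   into deeper searches. */
static bool lazy_null_prune(int depth, const ToyEval *e, int beta) {
    if (depth > 2)
        return false;                /* use the full null move deeper */
    return lazy_eval(e) >= beta;     /* pass, then judge by material only */
}
```

The point of the lazy term is that it ignores the positional bonuses a bulky eval would add, so a fail high here is conservative: if material alone beats beta after passing, the full eval almost certainly would too.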
Edit: Were you also aware of the null move observation (NMO) at that time? I can't seem to find enough information about it.
Daniel