Rodolfo Leoni wrote:carldaman wrote:Rodolfo Leoni wrote:hgm wrote:The problem that scores of a variation get lost when you go deeper in a variation, and then return to (before) a previously analyzed one, is not related to persistent hash. The latter only makes it possible to transfer analysis between different sessions with the engine.
What you complain about (and it also bugged me when I started doing thorough analysis) is caused by a bad replacement strategy for hash entries. Most engines number the searches they do, and record in the hash table which search created a stored entry, or which search last actually used the stored information. If they then stumble on entries that were not used in any recent search (sometimes even just the current search), they assume the positions these refer to are no longer reachable from the current root position, and thus useless. Such entries are treated as empty and immediately overwritten.
This works very well in games, where no take-backs are done, such as the games used to determine the rating of the engine. For interactive analysis, however, it is fatal: as soon as you go into an alternative variation and play an irreversible move, all your previous analysis will be purged from the hash table, as those positions are no longer reachable. But of course they remain reachable, through the take-back you will sooner or later do to back-propagate the score of the newly analyzed variation.
The solution is quite simple: engines should realize that an interactive analysis session is in fact a single search, where the low plies are governed by the user, and the root positions of the various analysis searches are not root positions at all, but just internal nodes of the tree the user is constructing. As such, they should not increment the search number during analysis. They are just not different searches, but part of the larger one.
In my engines, incrementing the search number only before searches intended to produce a move in a game ('thinking'), and leaving it unchanged for analysis searches, did solve the problem.
Well, you say what I said before, you just say it better.
BTW, are you still developing Joker? I've been away these last years and I need some updates on which projects are still in progress.
As for a technical solution, the only reference I have is Critter's example. Richard Vida built Critter's session file as a separate, sizeable hash table. With this solution he avoided complicated hash handling and, most of all, the file is updated every time a move is made on the board. That means you can resume an analysis whenever you want, as all of your work is saved. It uses both forward and backward (two plies) propagation when a score drop is detected. Not a perfect system, but the best I've ever seen.
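The core of such a "session file" is just a second hash table that gets serialized to disk after every board move, so a later session can reload it. The sketch below is purely illustrative, with a hypothetical entry layout and function names; it is not Critter's actual file format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical on-disk entry for a persistent analysis hash.
struct SessionEntry {
    uint64_t key   = 0;  // Zobrist key of the position
    int16_t  score = 0;  // best score found so far
    uint8_t  depth = 0;  // depth at which the score was obtained
    uint8_t  pad   = 0;
};

// Write the whole table after each move made on the board.
bool saveSession(const char* path, const std::vector<SessionEntry>& table) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    size_t n = std::fwrite(table.data(), sizeof(SessionEntry), table.size(), f);
    std::fclose(f);
    return n == table.size();
}

// Reload the table when the user resumes the analysis session.
bool loadSession(const char* path, std::vector<SessionEntry>& table) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    size_t n = std::fread(table.data(), sizeof(SessionEntry), table.size(), f);
    std::fclose(f);
    return n == table.size();
}
```

Keeping the session table separate from the engine's main hash is what avoids the "complicated hash handlings" mentioned above: the engine's normal replacement scheme never touches the saved analysis.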
The problem is, Critter could be 200 Elo weaker than Komodo and the Stockfishes... That's why I'm asking whether some top engine programmer is interested in developing this feature.
You'd think this [correct score propagation in the hash] would be a top priority for commercial engine developers. I mean, so many people buy an engine for analysis purposes, so why wouldn't they provide an analysis-friendly engine? It makes no sense, other than to surmise that game-playing Elo is mostly everything to them. No wonder we're still in the dark ages when it comes to good analysis features. [I do like IDeA, but it's a resource-intensive hog of a program, and I prefer to interact with the analysis as much as possible]
As seen above, HGM is aware of the problem, and he even proposed an elegant solution; Richard Vida offered a good solution with his *free* (but now old) engine. If freeware programmers can offer improved analysis 'tools', why won't the top engine developers do the same or better, especially when they charge for their engines and they should 'owe' it to their customers?
Just wondering 'out loud'...
Regards,
CL
Hi Carl,
I agree 100% with your clear and direct opinion. The choice is between self-oriented engines and user-oriented ones. It seems the direction everybody has taken is the fight to gain a few Elo per year, and that's absolutely self-oriented. But if we expect a human being to use the engine, we should consider what's most important for him.
Becoming user-oriented could be the next step in engine evolution...
Hi Rodolfo,
you've hit the nail on the head with your last reply, so to speak.
I have already mentioned the idea of a learning capability for Komodo, over the last couple of years, in this forum, and the authors said they'd consider it. I even brought up the Critter and Stockfish implementations as examples. No sign of such a useful feature, yet, unfortunately.
Now, I get it that adding any new feature would take resources away from improving the engine's base strength, but such new features could be bundled into a 'Pro' version of Komodo, perhaps more tweakable, that could be sold at a premium. Then the extra effort would pay off and many customers would be very satisfied.
Houdini, another commercial program, used to have a 'Learning' feature, but in Houdini 5 it was taken OUT (!) How do you figure that? This is the face of 'progress' in computer chess nowadays: expect very little else other than strict Elo gains - Elo is everything!
CL