Delaying Extensions Idea (does anyone do this)?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Delaying Extensions Idea (does anyone do this)?

Post by mjlef »

OK, here is my next screwy idea: try to order the moves you search based not just on the likelihood that they will be found best, but also on how big a subtree they will generate.

In cutoff nodes, many moves can often lead to a cutoff. So instead of, say, searching a capture move that might cause an extension (like a recapture or check), delay searching those moves. I think for this to work you still need good move ordering, for example searching winning captures first. But why search a capture that is also a check (or extended for other reasons) if a non-extended move might fail high just as well?
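A minimal sketch of what this ordering might look like, assuming a toy move representation (`see_score`, `gives_check`, and `is_recapture` are illustrative names, not from any particular engine): among moves with equal capture gain, the ones that would trigger an extension are tried last.

```python
# Hypothetical sketch of the idea: among moves with equal "goodness"
# scores, search moves that would NOT trigger an extension first.

def order_moves(moves):
    """Sort by capture gain (descending), breaking ties so that moves
    expected to be extended (checks, recaptures) come last."""
    def key(m):
        would_extend = m["gives_check"] or m["is_recapture"]
        # Primary: higher SEE score first; secondary: non-extended first.
        return (-m["see_score"], would_extend)
    return sorted(moves, key=key)

moves = [
    {"name": "QxR+", "see_score": 5, "gives_check": True,  "is_recapture": False},
    {"name": "NxR",  "see_score": 5, "gives_check": False, "is_recapture": False},
    {"name": "PxN",  "see_score": 3, "gives_check": False, "is_recapture": True},
]
# NxR is tried before the equally winning, but extended, QxR+.
print([m["name"] for m in order_moves(moves)])  # -> ['NxR', 'QxR+', 'PxN']
```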

Has anyone played with this idea? I will start some tests and see if it works.

I think an efficient search need not search the best moves. An efficient search will search "good enough" moves with small subtrees. What do you think?

Mark
mambofish

Re: Delaying Extensions Idea (does anyone do this)?

Post by mambofish »

There is an idea in here which was the subject of a paper by Donald Michie I think. (Unfortunately, I lent my copy of David Levy's 'Computers in Chess' to somebody and never got it back, but the paper was reproduced in there. If anybody in the forum has a copy they could check the authorship and cite the reference).

As I recall, the idea was to steer the program (when playing a human) into playing moves generating bushy subtrees where the human has much more chance of going wrong, even if a "stronger" but more obvious move was possible. Depending on the strength of the human opponent a factor would be introduced that favoured the complexity of the resulting position over small material gains - or simplifying moves such as a sequence of captures and recaptures.

I don't know if anybody ever tried to implement such a scheme, and I imagine it would be pretty useless against other computers.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Delaying Extensions Idea (does anyone do this)?

Post by mjlef »

mambofish wrote:There is an idea in here which was the subject of a paper by Donald Michie I think. (Unfortunately, I lent my copy of David Levy's 'Computers in Chess' to somebody and never got it back, but the paper was reproduced in there. If anybody in the forum has a copy they could check the authorship and cite the reference).

As I recall, the idea was to steer the program (when playing a human) into playing moves generating bushy subtrees where the human has much more chance of going wrong, even if a "stronger" but more obvious move was possible. Depending on the strength of the human opponent a factor would be introduced that favoured the complexity of the resulting position over small material gains - or simplifying moves such as a sequence of captures and recaptures.

I don't know if anybody ever tried to implement such a scheme, and I imagine it would be pretty useless against other computers.
I understand... but my idea is not to create complex positions against humans. It is to search first the fail-high moves with smaller subtrees, making the search more efficient.

Let me give an example. Let's say you are at an expected fail-high node, where you expect at least one move to return from its subtree with a score above beta. In many, many positions, more than one move can fail high. Let's say you had three "winning" capture moves. One happens to also be a checking move, and in most programs would get extended, producing a bigger subtree. Another might be a recapture, which might also cause an extension. The final capture is just a regular capture, with no extensions. If the expected gain from each capture appears to be enough to cause a fail high, doesn't it make sense to search the capture that does not cause an extension first, and get a smaller subtree?

Typically, each extension increases the size of the subtree by your branching factor (now down to about 2 for LMR programs). So this idea might save a lot of searching.

Anyway, that is the idea. If anyone implements this, let me know. I think an efficient implementation would probably need to keep roughly the same move ordering (grouping winning captures first, then killers, then non-captures, then losing captures). But within each group, moves that are not going to be extended could be put first (for example, among the non-captures, delay searching checking moves; among the losing captures, delay the capture-checks to the very end of the list).
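The grouped ordering described here might be sketched as follows. The group numbers and the per-move "will be extended" flag are hypothetical; a real engine would derive both from its own move scoring.

```python
# Illustrative sketch: keep the usual move-ordering groups, but within
# each group try moves that will NOT be extended first.

WINNING_CAPTURE, KILLER, QUIET, LOSING_CAPTURE = range(4)

def sort_key(move):
    name, group, will_be_extended = move
    # Same group order as before; non-extended moves first inside a group.
    return (group, will_be_extended)

moves = [
    ("quiet check",     QUIET,           True),
    ("quiet",           QUIET,           False),
    ("QxP+ (losing)",   LOSING_CAPTURE,  True),
    ("RxN (winning)",   WINNING_CAPTURE, False),
    ("killer",          KILLER,          False),
]
ordered = [m[0] for m in sorted(moves, key=sort_key)]
print(ordered)
# -> ['RxN (winning)', 'killer', 'quiet', 'quiet check', 'QxP+ (losing)']
```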

Mark
Jose Carlos
Posts: 151
Joined: Wed Mar 08, 2006 10:09 pm
Location: Murcia (Spain)

Re: Delaying Extensions Idea (does anyone do this)?

Post by Jose Carlos »

mjlef wrote: Anyway, that is the idea. If anyone implements this, let me know. I think an efficient implementation would probably need to keep roughly the same move ordering (grouping winning captures first, then killers, then non-captures, then losing captures). But within each group, moves that are not going to be extended could be put first (for example, among the non-captures, delay searching checking moves; among the losing captures, delay the capture-checks to the very end of the list).
Mark
What about storing in the hash table, along with the usual information, the number of nodes of the last search of this move?
Well, not exactly of the move, but of the position. Then you'd need to calculate the hash key for every successor of the current position in order to sort the moves.
Anyway, it looks easy to implement. If I have the time (most probably I won't :() I'll try it and report the results.
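A toy sketch of this suggestion, with plain strings standing in for Zobrist hash keys and every name hypothetical: each move's successor position is looked up in the table, and moves whose last subtree was smaller are tried first (unknown positions sort last, on the assumption that an unmeasured subtree may be large).

```python
# hash key -> node count of the last search of that position
tt = {}

def store(key, nodes):
    tt[key] = nodes

def order_by_subtree(moves, child_key):
    """child_key(move) -> hash key of the position after the move.
    Moves with no stored subtree size sort last."""
    return sorted(moves, key=lambda m: tt.get(child_key(m), float("inf")))

# Toy usage: string keys stand in for real Zobrist hashes.
store("after_a", 1500)
store("after_b", 90)
print(order_by_subtree(["a", "b", "c"], lambda m: "after_" + m))
# -> ['b', 'a', 'c']
```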
__________________________
José Carlos Martínez Galán
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Delaying Extensions Idea (does anyone do this)?

Post by mjlef »

José Carlos wrote:
mjlef wrote: Anyway, that is the idea. If anyone implements this, let me know. I think an efficient implementation would probably need to keep roughly the same move ordering (grouping winning captures first, then killers, then non-captures, then losing captures). But within each group, moves that are not going to be extended could be put first (for example, among the non-captures, delay searching checking moves; among the losing captures, delay the capture-checks to the very end of the list).
Mark
What about storing in the hash table, along with the usual information, the number of nodes of the last search of this move?
Well, not exactly of the move, but of the position. Then you'd need to calculate the hash key for every successor of the current position in order to sort the moves.
Anyway, it looks easy to implement. If I have the time (most probably I won't :() I'll try it and report the results.
I looked at that idea a few months ago. I did not want to store the number of nodes, because it could be large and take up a lot of room in the hash table. I figured storing the position of the highest bit of the number of nodes might be good enough (a factor of 2, so a 130-node subtree would get a 7 stored (2^7 = 128), a 1345-node tree a 10 (2^10 = 1024), etc.). But it must be adjusted for the remaining depth of the search. My plan was to scale (divide) that by something like 2^depth. The idea is that the lower the number, the more efficient a move is at causing a cutoff. Only cutoff nodes would get stored in this index; PV and fail-low nodes would get a 256 or something big. Then when ordering moves, you just use the index (smaller means a smaller tree in this case). It so happens my hash scheme has an unused byte, so this would fit well.
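A sketch of that single-byte index under the assumptions above. All names are illustrative, and since a byte tops out at 255, the sketch uses 255 rather than 256 as the "big" sentinel for PV and fail-low nodes.

```python
def subtree_index(nodes, depth, caused_cutoff):
    """Smaller index = historically cheaper cutoff; fits in one byte."""
    if not caused_cutoff:
        return 255                            # PV / fail-low: treat as "big"
    scaled = max(1, nodes >> depth)           # divide by 2^depth (remaining depth)
    return min(255, scaled.bit_length() - 1)  # position of the highest set bit

print(subtree_index(130, 0, True))    # 2^7 = 128  -> 7, as in the post
print(subtree_index(1345, 0, True))   # 2^10 = 1024 -> 10
print(subtree_index(1345, 4, True))   # 1345 >> 4 = 84 -> 6
```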

Mark
Jose Carlos
Posts: 151
Joined: Wed Mar 08, 2006 10:09 pm
Location: Murcia (Spain)

Re: Delaying Extensions Idea (does anyone do this)?

Post by Jose Carlos »

Another possibility would be to calculate the branching factor under this node, thus getting a small, depth-independent value. But I'm not sure it'd be a good idea to ignore depth this way. The bigger the draft of the hash node, the more reliable the information it provides.
Anyway, testing should have the last word, as usual.
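The branching-factor alternative reduces to a single expression; a hedged sketch (the function name is made up for illustration):

```python
def effective_branching_factor(nodes, depth):
    """nodes ** (1/depth): a small, roughly depth-independent measure."""
    return nodes ** (1.0 / depth)

# A depth-10 subtree of 1024 nodes and a depth-5 subtree of 32 nodes
# both come out to an effective branching factor of about 2.
print(effective_branching_factor(1024, 10))
print(effective_branching_factor(32, 5))
```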
__________________________
José Carlos Martínez Galán
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Delaying Extensions Idea (does anyone do this)?

Post by mjlef »

José Carlos wrote:Another possibility would be to calculate the branching factor under this node, thus getting a small, depth-independent value. But I'm not sure it'd be a good idea to ignore depth this way. The bigger the draft of the hash node, the more reliable the information it provides.
Anyway, testing should have the last word, as usual.
That is exactly why I was dividing by 2^depth, which is a rough estimate of the average subtree size. Actual nodes / 2^depth would be a good estimate of the relative size of a subtree, scaled to be independent of depth. That value could be put into something like a history table, only in this case moves would be picked based on the relative "smallness" of the subtrees of past moves that failed high. As part of this table, real depth could serve as a replacement criterion, keeping values from deeper searches in preference to shallower ones.
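One possible shape for such a table, with the replacement rule preferring deeper entries. Indexing by `(from, to)` squares is an assumption for the sketch; the post does not say how the table would be keyed.

```python
# (from_sq, to_sq) -> (scaled subtree size, depth it was measured at)
table = {}

def record_cutoff(from_sq, to_sq, nodes, depth):
    scaled = nodes >> depth              # nodes / 2^depth: depth-independent size
    old = table.get((from_sq, to_sq))
    if old is None or depth >= old[1]:   # replacement: prefer deeper searches
        table[(from_sq, to_sq)] = (scaled, depth)

def smallness(from_sq, to_sq):
    """Lower = historically cheaper cutoff; unknown moves sort last."""
    entry = table.get((from_sq, to_sq))
    return entry[0] if entry else float("inf")

record_cutoff(12, 28, 5000, 6)   # 5000 >> 6 = 78
record_cutoff(12, 28, 800, 3)    # shallower observation: ignored
print(smallness(12, 28))         # -> 78
```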

Whether this would lead to overall smaller trees is hard to say, but it is simple enough to be worth a try. I can do some test runs tonight.

Mark
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Delaying Extensions Idea (does anyone do this)?

Post by mjlef »

I just tried this formula:

score := constant1 - constant2 * (subtreenodes) shr depth;
(shr is Pascal's version of >>)

And used this score instead of the history values for move ordering, and it did pretty well in fixed-depth searches, generally saving at least 10% of nodes depending on the position. Note I only used this on non-capture moves, since that was very easy to try in my current program. Probably capture moves would benefit more. But maybe not... I suspect most capture subtrees are smaller, since a capture takes at least one piece off the board. Also, I used this down to depth = 1, which might be bad since the subtree then has only captures and some checks. I will keep experimenting and let you know what I find.
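For readers without Pascal, the formula might translate to something like this. The constant values are arbitrary placeholders; note that Pascal evaluates `*` and `shr` at the same precedence level, left to right, so the sketch parenthesizes accordingly.

```python
CONSTANT1 = 10000   # placeholder tuning constants, not from the post
CONSTANT2 = 1

def ordering_score(subtree_nodes, depth):
    # score := constant1 - constant2 * (subtreenodes) shr depth
    # (Pascal applies * then shr, left to right.)
    return CONSTANT1 - ((CONSTANT2 * subtree_nodes) >> depth)

# A move whose last subtree was small scores higher and is tried first.
print(ordering_score(200, 4))    # 200 >> 4 = 12   -> 9988
print(ordering_score(6400, 4))   # 6400 >> 4 = 400 -> 9600
```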

Mark
Jose Carlos
Posts: 151
Joined: Wed Mar 08, 2006 10:09 pm
Location: Murcia (Spain)

Re: Delaying Extensions Idea (does anyone do this)?

Post by Jose Carlos »

mjlef wrote:I just tried this formula:

score := constant1 - constant2 * (subtreenodes) shr depth;
(shr is Pascal's version of >>)

And used this score instead of the history values for move ordering, and it did pretty well in fixed-depth searches, generally saving at least 10% of nodes depending on the position. Note I only used this on non-capture moves, since that was very easy to try in my current program. Probably capture moves would benefit more. But maybe not... I suspect most capture subtrees are smaller, since a capture takes at least one piece off the board. Also, I used this down to depth = 1, which might be bad since the subtree then has only captures and some checks. I will keep experimenting and let you know what I find.

Mark
It sounds promising and has the advantage of being simple. I'll also test it in my program when I have time.
__________________________
José Carlos Martínez Galán
User avatar
Daniel Mehrmann
Posts: 858
Joined: Wed Mar 08, 2006 9:24 pm
Location: Germany
Full name: Daniel Mehrmann

Re: Delaying Extensions Idea (does anyone do this)?

Post by Daniel Mehrmann »

mjlef wrote:OK, here is my next screwy idea: try to order the moves you search based not just on the likelihood that they will be found best, but also on how big a subtree they will generate.

In cutoff nodes, many moves can often lead to a cutoff. So instead of, say, searching a capture move that might cause an extension (like a recapture or check), delay searching those moves. I think for this to work you still need good move ordering, for example searching winning captures first. But why search a capture that is also a check (or extended for other reasons) if a non-extended move might fail high just as well?

Has anyone played with this idea? I will start some tests and see if it works.

I think an efficient search need not search the best moves. An efficient search will search "good enough" moves with small subtrees. What do you think?

Mark
Howdy Mark :)

Well, I haven't played with your idea, and I think it is already outdated with "modern" PVS searches. Let me explain my point of view.
"Modern" searches, such as those based on LMR/PHR, do a lot of re-searches while the search is running, or at least they should. :lol:

However, if we enter a node again, we already have a hash move from the previous search, which should lead to a cutoff.

Looking back to the good old times, your idea might have given a nice improvement in special cases of threats and pins.

Best,
Daniel