Improving history tables

yoshiharu
Posts: 56
Joined: Sat Nov 11, 2006 11:14 pm

Re: Improving history tables

Post by yoshiharu »

hgm wrote: But I think the usefulness of history comes about by the fact that cut moves that are the result of specific mistakes are different all the time (depending on what the mistake was) and thus never collect a high history value. Those moves that are good against almost any move of the opponent do collect far higher history values.

The power of the history heuristic is that you don't have to worry about the underlying reason for the cutoff. Dumb counting plus statistics do the job.
Yesterday's discussion triggered my thoughts as well as Michael's.
In a PVS framework, couldn't collecting history scores at every node be a source of randomness? After all, when searching with a null window the moves can only fail (either high or low), without necessarily being particularly deserving (or bad). Wouldn't it be reasonable to update history scores only in the part of the tree made up of PV nodes?
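Just to make the question concrete, here is a rough sketch of what I mean (all names are made up, not from any real engine):

/* Rough sketch: only bump the history counter when the node is a PV node,
   so null-window fail highs/lows do not pollute the table.
   All names here are illustrative. */
enum { CUT_NODE, ALL_NODE, PV_NODE };

static int history[16][64];                  /* [piece_type][to_square] */

void update_history(int node_type, int piece, int to, int depth)
{
    if (node_type != PV_NODE)                /* skip null-window nodes  */
        return;
    history[piece][to] += depth * depth;     /* depth-weighted bump     */
}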
hgm wrote: Moves that are generally bad, so that they are not able to give a cutoff in most positions (e.g. because they give up control of a square of vital tactical importance) never collect a high history score, so the history heuristic also protects you from inadvertently trying such a move early.
But how many of those moves gain a good history score just because the previous move was a horrible blunder, or because the position at that node is (like the majority of the positions in a search tree) totally pointless and semi-random?

This is a question to which I find it hard to give a quantitative answer.

Cheers, Mauro
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Improving history tables

Post by bob »

I think that is the crux of the problem. Fail highs are very "local" in nature most of the time, as in refuting a blunder. So counting them leads to large numbers in the history table, yet those numbers don't mean anything except in the local area of the tree where they were set.

On the other hand, counting only PV nodes (in a PVS framework) has exactly the opposite problem. There are so few PV nodes, particularly when things are going well with move ordering, that there would be no useful information in the history counters either.

Hence my decision to stop using them. I even kept track of when a supposedly good move didn't fail high, but that did not seem to help much either.
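In sketch form (illustrative names only, not the actual engine code), that bookkeeping amounts to something like this:

/* Sketch only: count both cutoffs and attempts, so a move that fails high
   once against a blunder but otherwise never cuts does not keep a big
   score. All names are illustrative. */
static unsigned hist_cut[16][64];            /* fail-high count [piece][to] */
static unsigned hist_try[16][64];            /* times searched  [piece][to] */

void history_cutoff(int piece, int to)    { hist_cut[piece][to]++; hist_try[piece][to]++; }
void history_no_cutoff(int piece, int to) { hist_try[piece][to]++; }

int history_score(int piece, int to)         /* success rate, scaled */
{
    unsigned tries = hist_try[piece][to];
    return tries ? (int)(1024u * hist_cut[piece][to] / tries) : 0;
}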

Avoiding reducing killer moves gets the same flavor into the search, but with data that is considerably more "local" and therefore probably more accurate.
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Improving history tables

Post by mjlef »

I think one problem I have with history information based only on failing high or low is that it does not capture information about the score. At one point in the tree a move might fail high with a score of, let's say, -500. Is this useful in another part of the tree that needs a +300 to fail high? Perhaps a moving average of the fail-high or fail-low score could be kept to give an estimate of the average score a move yielded. If a move typically gives, say, a -300 score, it is less likely to be useful in causing a fail high that needs to be +300. Has anyone tried this approach? Just storing an average score returned from a search? Maybe it would be less random. I might try this tonight, and if it works great, I promise never to tell anyone! :-)
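Something along these lines is what I have in mind (just a sketch; the 1/8 smoothing factor and all names are arbitrary):

/* Sketch: keep an exponential moving average of the fail-soft score a
   move returned, instead of a bare counter. The 1/8 factor and all names
   are arbitrary choices for illustration. */
static int avg_score[16][64];                /* running average [piece][to] */

void update_avg(int piece, int to, int returned_score)
{
    /* new = old + (sample - old) / 8 */
    avg_score[piece][to] += (returned_score - avg_score[piece][to]) / 8;
}

/* Ordering could then prefer moves whose average already clears the
   current beta, e.g.  if (avg_score[piece][to] >= beta) ...              */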

Mark
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Improving history tables

Post by Michael Sherwin »

Given what everyone has said here, how about only storing those fail highs that are within a small margin of beta? I guess that fail-soft alpha-beta would need to be used?
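Something like this, as a rough sketch (the 50-point margin and all names are made up):

/* Rough sketch, assuming fail-soft so the returned score can exceed beta.
   The margin and all names are made up for illustration. */
#define MARGIN 50                            /* centipawns */

static int history[16][64];

void on_fail_high(int piece, int to, int score, int beta, int depth)
{
    if (score - beta <= MARGIN)              /* only cutoffs close to beta */
        history[piece][to] += depth * depth;
}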
mjlef
Posts: 1494
Joined: Thu Mar 30, 2006 2:08 pm

Re: Improving history tables

Post by mjlef »

Actually, I am trying to figure out how to extract the most info I can from the search of a move. Of course, most results will just be bounds and not tell you the true score. But if a move fails above beta, you can probably assume there is a decent probability the same move might again have a score of at least beta in the future. And if it is below alpha, maybe it is safe to assume it would again return a score at least close to alpha. Also, moves leading to mate scores should probably be handled differently. They might again lead to mate, and so should probably not be reduced for a while. I am still experimenting with various ways to use this info. It seems to me that if you have a fail-soft score, it must be more useful than just knowing it is >beta (one bit of info). Extracting the usefulness might be tricky, but I am trying some simple things, like a moving average of the returned value of a search. Maybe that will be better than simple counting.
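For the mate-score part, something as simple as this is what I am thinking of (a sketch; the bound and the names are made up):

/* Sketch: flag moves whose fail-soft score was a mate bound so they are
   not reduced for a while. MATE_BOUND and no_reduce[] are made-up names. */
#define MATE_BOUND 30000                     /* scores beyond this imply mate */

static unsigned char no_reduce[16][64];

void record_result(int piece, int to, int score)
{
    if (score >= MATE_BOUND)                 /* the move led to a mate score */
        no_reduce[piece][to] = 1;
}

int may_reduce(int piece, int to) { return !no_reduce[piece][to]; }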
YL84

Re: Improving history tables

Post by YL84 »

Hi,
in my chess programme I store in "history tables":
- "scores" that improve the best_score found so far (the score is stored);
- "scores" that improve beta (the score is stored);
- "scores" that improve beta but are not a capture (score+300 is stored)
(this one is a kind of killer move in the history table)
Note that my history tables depend on the depth, so they are not of the classical type.
It is the best combination I have found by trial-and-error experiments. It works for my programme, and might not work for others (for instance I have no transposition table).
I have also tried many things like taking into account "bad moves" (storing score minus something) but never succeeded in finding anything good.
These methods are not perfect, since most of the time we don't know the exact score, so there should be some room for improvement.
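In rough code, the update looks something like this (a simplified sketch; only the +300 bonus is a real figure, the table sizes and names are illustrative):

/* Simplified sketch of the scheme described above: an always-replace,
   depth-dependent table. Only the +300 bonus is a real figure. */
#define MAX_DEPTH 64

static int hist_score[MAX_DEPTH][16][64];    /* [depth][piece][to] */

void update(int depth, int piece, int to,
            int score, int best_score, int beta, int is_capture)
{
    if (score > beta && !is_capture)
        hist_score[depth][piece][to] = score + 300;   /* killer-like case */
    else if (score > beta)
        hist_score[depth][piece][to] = score;
    else if (score > best_score)
        hist_score[depth][piece][to] = score;
}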

Yves
Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Improving history tables

Post by Michael Sherwin »

You are using the history tables as a kind of super killer table. It sounds like you have an always-replace scheme. By using the score entry as a score accumulator and adding a count variable (if you do not already have one), you would be doing what Mark has suggested and keeping a running average for each move. I wonder if you have tried that. I wonder how differentiated the averages would remain.
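In other words, something like this minimal sketch (names made up):

/* Minimal sketch of the accumulator-plus-count idea: a straight arithmetic
   mean of all scores a move has returned. All names are made up. */
static long long score_sum[16][64];
static unsigned  score_cnt[16][64];

void accumulate(int piece, int to, int score)
{
    score_sum[piece][to] += score;
    score_cnt[piece][to]++;
}

int mean_score(int piece, int to)
{
    unsigned n = score_cnt[piece][to];
    return n ? (int)(score_sum[piece][to] / n) : 0;
}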
YL84

Re: Improving history tables

Post by YL84 »

"Super killer tables" seems to be a good description of what is inside my code, since my tables are depth-dependent. There is a counter for each move, which is also depth-dependent, so the mean "score" for each move is known at any time (even if it's not an accurate score most of the time).
This mean score is used for move ordering in all cases (good captures, normal moves, ...). I found some time ago that doing this was best in my particular case (forgive me, guys :) ).
The question is: is it better than normal history tables? For my case the answer is yes, but I suspect it is not a general rule. The reason could be that I have no transposition table and no killer moves either.
Yves
YL84

Re: Improving history tables

Post by YL84 »

Also, I tried adding constant scores depending on the distance from beta and/or alpha, but found no improvement (I should revisit this possibility); the funny thing is that I have no idea how the "mean scores" evolve during the search, or how they differ from one another (I should look at it). Also, I have to check that the "total score" does not get close to the limit of a 32-bit integer (using a 64-bit integer to avoid this test did not help); if the score gets too high or too low, it is kept constant.
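The saturation test amounts to something like this (a sketch; the half-range limits are an arbitrary choice):

/* Sketch of the saturation test: stop accumulating once the total gets
   anywhere near the 32-bit limits, so it simply stays constant. */
#include <limits.h>

#define HIST_MAX (INT_MAX / 2)
#define HIST_MIN (INT_MIN / 2)

void add_saturated(int *total, int score)
{
    if (*total < HIST_MAX && *total > HIST_MIN)
        *total += score;                     /* otherwise keep it constant */
}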
Yves
ed

Re: Improving history tables

Post by ed »

Here are my 2 cents on the topic...

I am using a [piece_type][TO square] history table since I added history reductions to my engine.

I have been experimenting with the (classic?) [piece_type][FROM square][TO square] and it is obviously inferior to the above formula.

The strange thing about formula-2 is that it makes my engine a lot faster (28%), and in test sets it behaves as well as formula-1, but in real life (games) it scores considerably worse, -3%.
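For clarity, the two indexings side by side (an illustrative sketch, not my actual code):

/* The two table layouts compared above, with a generic depth-weighted
   update. Sketch only; the names are illustrative. */
static int hist_to[16][64];                  /* [piece_type][to]        */
static int hist_from_to[16][64][64];         /* [piece_type][from][to]  */

void update_tables(int piece, int from, int to, int depth)
{
    hist_to[piece][to]            += depth * depth;
    hist_from_to[piece][from][to] += depth * depth;
}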

So maybe some of you would like to try the [piece_type][TO square] approach.

Ed