
Re: LMR other conditions

Posted: Tue Jul 24, 2007 12:02 am
by bob
frankp wrote:I still find the history table reduces the tree size for a fixed endpoint - but maybe random numbers would......
Read Don Beal's paper about random evaluation. It produced surprising results that actually make sense when you think about how random evaluations favor positions with more mobility.

But there has to be a better way...

Re: LMR other conditions

Posted: Tue Jul 24, 2007 5:13 am
by Michael Sherwin
You make a very good point. In a 2-billion-node, 18-ply search, after 6 to 9 moves by each side, all the moves the history tables say are good may be bad, and vice versa. The history tables may do more harm than good in that instance.

Then what about a table for every 4 ply? That means 5 tables for a 20-ply search. When the search reaches the 5th ply, it zeros out the second table, and all 5th- to 8th-ply moves update that table. Same when the 9th ply is reached, except that the 3rd table is zeroed, etc. Two moves by each side would seem to change the average situation very little, and the history info would tend to be much more accurate. Or would it?
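Roughly, something like this (untested sketch; the names and the [piece][to-square] indexing are just for illustration, not actual Romichess code):

Code:

#include <string.h>

#define MAX_PLY 64
#define BANDS   (MAX_PLY / 4)

/* One history table per 4-ply band.  A band's table is zeroed the first
   time the search enters that band; all moves searched at those plies
   then update it. */
static int history[BANDS][12][64];   /* [band][piece][to-square] */
static int band_cleared[BANDS];      /* reset to 0 at the start of each search */

static void history_hit(int ply, int piece, int to, int depth)
{
    int band = ply / 4;
    if (!band_cleared[band]) {                     /* first entry into this band */
        memset(history[band], 0, sizeof history[band]);
        band_cleared[band] = 1;
    }
    history[band][piece][to] += depth * depth;     /* usual history bonus */
}

static int history_score(int ply, int piece, int to)
{
    /* move ordering / LMR decisions read only the table for their own band */
    return history[ply / 4][piece][to];
}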

Re: LMR other conditions

Posted: Tue Jul 24, 2007 7:16 am
by bob
Michael Sherwin wrote:You make a very good point. In a 2-billion-node, 18-ply search, after 6 to 9 moves by each side, all the moves the history tables say are good may be bad, and vice versa. The history tables may do more harm than good in that instance.

Then what about a table for every 4 ply? That means 5 tables for a 20-ply search. When the search reaches the 5th ply, it zeros out the second table, and all 5th- to 8th-ply moves update that table. Same when the 9th ply is reached, except that the 3rd table is zeroed, etc. Two moves by each side would seem to change the average situation very little, and the history info would tend to be much more accurate. Or would it?
I don't have an answer to that; I've never tried multiple tables beyond one for black and one for white. The question is, is "4" the right number, or is there something that would be better?

It's an interesting question, but not one I am going to tackle anytime soon; I have more than enough other problems to look at. :)

Re: LMR other conditions

Posted: Tue Jul 24, 2007 7:48 am
by Michael Sherwin
I have already started the experiment! I have ripped out all the history table stuff and created ten new history tables. To conserve memory, and to keep them compatible with what I do with the eval function, I have made them [fig][ts] ([12][64]) only. So far, there is a 6% drop in node rate if the search ends on an iteration that is evenly divisible by 4. That is due to zeroing the table at the last ply of normal search. Despite this drop in speed, it is finishing the search sooner! I will post some results when I know more.

Re: LMR other conditions

Posted: Tue Jul 24, 2007 9:35 am
by Michael Sherwin
Michael Sherwin wrote:I have already started the experiment! I have ripped out all the history table stuff and created ten new history tables. To conserve memory, and to keep them compatible with what I do with the eval function, I have made them [fig][ts] ([12][64]) only. So far, there is a 6% drop in node rate if the search ends on an iteration that is evenly divisible by 4. That is due to zeroing the table at the last ply of normal search. Despite this drop in speed, it is finishing the search sooner! I will post some results when I know more.
Well, I know enough now. IMO, any history table that is not indexed by [fig][fs][ts] ([12][64][64]) is not as good for move ordering or LMR. I would even venture to say that they just are not good, period. I get far superior results with the more specific tables. Unfortunately, the larger tables are just too expensive to zero out during the search.

The only idea left (that I have) is to record into the tables only shallow ply info of about 8 ply or less, and hope ...
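For scale, here is the rough arithmetic behind "too expensive to zero out" (my own back-of-the-envelope numbers, assuming 4-byte counters, not measurements from Romichess):

Code:

/* Rough sizes only, assuming 4-byte counters: */
int hist_to[12][64];            /* [piece][to]:       12*64*4    =   3 KB per table */
int hist_fromto[12][64][64];    /* [piece][from][to]: 12*64*64*4 = 192 KB per table */

/* Ten banked copies: 10 * 3 KB = 30 KB, versus 10 * 192 KB = 1.9 MB.
   A ~192 KB memset every time a band has to be re-cleared is what makes
   the bigger, more specific table painful to zero during the search. */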

Re: LMR other conditions

Posted: Tue Jul 24, 2007 11:55 am
by frankp
bob wrote:
frankp wrote:I still find the history table reduces the tree size for a fixed endpoint - but maybe random numbers would......
Read Don Beal's paper about random evaluation. It produced surprising results that actually make sense when you think about how random evaluations favor positions with more mobility.

But there has to be a better way...
Failed to find this, unfortunately, although it is referenced many times on the net. I would guess that maximising the number of moves (mobility) would increase your chances, up to a point.

Re: LMR other conditions

Posted: Tue Jul 24, 2007 5:31 pm
by bob
frankp wrote:
bob wrote:
frankp wrote:I still find the history table reduces the tree size for a fixed endpoint - but maybe random numbers would......
Read Don Beal's paper about random evaluation. It produced surprising results that actually make sense when you think about how random evaluations favor positions with more mobility.

But there has to be a better way...
Failed to find this, unfortunately, although it is referenced many times on the net. I would guess that maximising the number of moves (mobility) would increase your chances, up to a point.
That was it. A position where _you_ have more mobility gives you the opportunity to generate more random numbers and grab a larger one. Reduced-mobility positions give you fewer opportunities to generate a larger value...

It was amazing when I first saw it, but then it began to make some sense, in a twisted way...
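A throwaway simulation (my own toy code, not from Beal's paper) makes the effect plain: with N legal moves you get N random numbers to pick the best from, and the expected best of N uniform draws is N/(N+1), which climbs with mobility.

Code:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    srand(12345);
    for (int moves = 1; moves <= 32; moves *= 2) {
        double sum = 0.0;
        for (int trial = 0; trial < 100000; trial++) {
            double best = 0.0;
            for (int m = 0; m < moves; m++) {
                /* one random "evaluation" per legal move; keep the best */
                double r = (double)rand() / RAND_MAX;
                if (r > best) best = r;
            }
            sum += best;
        }
        printf("%2d moves: average best score %.3f (theory %.3f)\n",
               moves, sum / 100000.0, (double)moves / (moves + 1));
    }
    return 0;
}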

Re: LMR other conditions

Posted: Tue Jul 24, 2007 11:38 pm
by frankp
bob wrote:
frankp wrote:
bob wrote:
frankp wrote:I still find the history table reduces the tree size for a fixed endpoint - but maybe random numbers would......

That was it. A position where _you_ have more mobility gives you the opportunity to generate more random numbers and grab a larger one. Reduced-mobility positions give you fewer opportunities to generate a larger value...

It was amazing when I first saw it, but then it began to make some sense, in a twisted way...
Reminds me of Bruce's searchlight analogy for fast, dumb searchers: the bigger the angle you can sweep the searchlight through, the more chance you have of finding something.