Equidistributed draft

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Cardoso
Posts: 293
Joined: Thu Mar 16, 2006 6:39 pm

Equidistributed draft

Post by Cardoso » Fri Jul 14, 2017 6:11 pm

Hi,
I have been experimenting with the equidistributed draft idea from H.G.Muller.
Unfortunately I couldn't make it work efficiently in my checkers engine.
It always gives much higher node counts than the classic implementation of depth-preferred plus always-replace.
At regular intervals I print the hash_histogram[] array, just to make sure the different depths get approximately even counts. At first the lower depths have much higher counts, but over time they gradually even out, so this shows the equidistributed idea is in fact implemented correctly. But the node counts are much higher, sometimes 2x or even 3x.

I have a bucket of 4 entries.
The first entry is used for the depth preferred criteria.
The remaining 3 entries are used for the equidistributed draft criteria.

What I'm doing exactly is this:
1 - Scan the 4 entries of the bucket for an empty entry; replace it if found.
2 - Check only the first entry of the bucket: if it is aged (from a previous search) OR its draft < current depth, replace it.
3 - Scan the last 3 entries of the bucket and replace the entry with the highest draft_histogram[entry draft] count.
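In code, the selection I'm describing looks roughly like this (just a sketch; HashEntry, select_slot, draft_histogram and current_age are made-up names for illustration, not my actual engine code):

```c
#include <stdint.h>

#define BUCKET_SIZE 4
#define MAX_DRAFT   64

typedef struct {
    uint64_t key;    /* 0 = empty */
    int8_t   draft;
    uint8_t  age;    /* search counter at the time of storing */
} HashEntry;

/* hypothetical globals, for illustration only */
static uint32_t draft_histogram[MAX_DRAFT];
static uint8_t  current_age;

/* Pick the slot to overwrite in a bucket, following the three rules above. */
int select_slot(const HashEntry *bucket, int new_draft)
{
    int i, victim, max_count;

    /* 1: any empty entry */
    for (i = 0; i < BUCKET_SIZE; i++)
        if (bucket[i].key == 0)
            return i;

    /* 2: slot 0 is depth-preferred: replace if aged or shallower */
    if (bucket[0].age != current_age || bucket[0].draft < new_draft)
        return 0;

    /* 3: among slots 1..3, replace the entry whose draft is currently
       over-represented (largest histogram count) */
    victim = 1;
    max_count = draft_histogram[bucket[1].draft];
    for (i = 2; i < BUCKET_SIZE; i++) {
        if (draft_histogram[bucket[i].draft] > max_count) {
            max_count = draft_histogram[bucket[i].draft];
            victim = i;
        }
    }
    return victim;
}
```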

I tested using 2 very low hash sizes to force a high hash load factor:
1 - only 1 MB of hash (16 bytes per entry gives 64k entries).
2 - 16 MB = 1 million entries.

My test positions usually go over 100 million nodes searched, sometimes over 1 billion, and I do not hash the qsearch.

Any ideas on what I'm doing wrong?

Thanks,
Alvaro

Evert
Posts: 2923
Joined: Fri Jan 21, 2011 11:42 pm
Location: NL

Re: Equidistributed draft

Post by Evert » Sat Jul 15, 2017 7:11 am

Cardoso wrote:Hi,
I have been experimenting with the equidistributed draft idea from H.G.Muller.
Unfortunately I couldn't make it work efficiently in my checkers engine.
It always gives much higher node counts than the classic implementation of depth-preferred plus always-replace.
At regular intervals I print the hash_histogram[] array, just to make sure the different depths get approximately even counts. At first the lower depths have much higher counts, but over time they gradually even out, so this shows the equidistributed idea is in fact implemented correctly. But the node counts are much higher, sometimes 2x or even 3x.

I have a bucket of 4 entries.
The first entry is used for the depth preferred criteria.
The remaining 3 entries are used for the equidistributed draft criteria.

What I'm doing exactly is this:
1 - Scan the 4 entries of the bucket for an empty entry; replace it if found.
2 - Check only the first entry of the bucket: if it is aged (from a previous search) OR its draft < current depth, replace it.
3 - Scan the last 3 entries of the bucket and replace the entry with the highest draft_histogram[entry draft] count.

I tested using 2 very low hash sizes to force a high hash load factor:
1 - only 1 MB of hash (16 bytes per entry gives 64k entries).
2 - 16 MB = 1 million entries.

My test positions usually go over 100 million nodes searched, sometimes over 1 billion, and I do not hash the qsearch.

Any ideas on what I'm doing wrong?
I wonder if nodes/sec is the correct metric. The answer is probably obvious, but I'll ask anyway: what about time-to-depth?

EDIT: another question is whether you can expect a similar efficiency for Chess and Draughts. I never looked at it, but I could imagine that tree behaviour is a bit different.

Cardoso
Posts: 293
Joined: Thu Mar 16, 2006 6:39 pm

Re: Equidistributed draft

Post by Cardoso » Sat Jul 15, 2017 5:10 pm

Evert wrote:I wonder if nodes/sec is the correct metric. The answer is probably obvious, but I'll ask anyway: what about time-to-depth?

EDIT: another question is whether you can expect a similar efficiency for Chess and Draughts. I never looked at it, but I could imagine that tree behaviour is a bit different.
When I said higher node counts I didn't mean nodes per second; I meant the total node count after the search was done for each position.
About tree behaviour in checkers, you could be right about that.

Pio
Posts: 119
Joined: Sat Feb 25, 2012 9:42 pm
Location: Stockholm

Re: Equidistributed draft

Post by Pio » Sat Jul 15, 2017 5:21 pm

I sent you a pm

/Pio

hgm
Posts: 23713
Joined: Fri Mar 10, 2006 9:06 am
Location: Amsterdam
Full name: H G Muller

Re: Equidistributed draft

Post by hgm » Sun Jul 16, 2017 11:55 am

This is a bit different from how I did it, and I don't fully fathom how it would work. My hash tables usually have a separate always-replace slot, because I often have odd numbers of slots in a bucket (like 5, sometimes even 7). With 5 slots, the primary hit (determined by the key) goes to one of the first four (or six), which are organized as two (or three) pairs. The new entry only replaces the lowest-draft member of its pair if that member is obsolete, empty or over-represented, or if the new entry has better depth; otherwise it goes to the always-replace slot. The draft stats ignore the always-replace slot.
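A sketch of that slot selection in C (all names here are made up for illustration; over-representation is tested against the lowest hashed draft, which is an assumption on my part):

```c
#include <stdint.h>

#define NUM_SLOTS    5    /* two depth-preferred pairs + one always-replace */
#define MAX_DRAFT    64
#define LOWEST_DRAFT 1    /* assumed lowest draft stored in the table */

typedef struct {
    uint64_t key;         /* 0 = empty */
    int8_t   draft;
    uint8_t  age;
} HashEntry;

static uint32_t draft_histogram[MAX_DRAFT];
static uint8_t  current_age;

/* Pick the slot to overwrite in a 5-slot bucket.  Spare key bits choose
   the primary pair; the lowest-draft member of that pair is replaced
   when it is empty, obsolete, over-represented, or shallower than the
   new entry.  Otherwise the store falls through to slot 4, the
   always-replace slot. */
int select_slot(const HashEntry *bucket, uint64_t key, int new_draft)
{
    int pair = (int)((key >> 48) & 1);
    int a = 2 * pair, b = a + 1;
    int cand = (bucket[a].draft <= bucket[b].draft) ? a : b;

    if (bucket[cand].key == 0 ||                              /* empty */
        bucket[cand].age != current_age ||                    /* obsolete */
        draft_histogram[bucket[cand].draft] >
            draft_histogram[LOWEST_DRAFT] ||                  /* over-represented */
        new_draft >= bucket[cand].draft)                      /* better depth */
        return cand;

    return 4;  /* always-replace slot, ignored by the draft stats */
}
```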

The entries in the always-replace slots will always be dominated very much by the lowest drafts, as the search produces those most. This is even exaggerated by the fact that the depth-preferred entries try to filter out the higher drafts. In Chess, 85% of the nodes can be QS nodes (d=0).

In your scheme the last-written entry in each bucket has the role of the always-replace slot; it will likewise be dominated by the low depths. This makes it likely that the very lowest draft will always dominate the table. There is just no way to get it below its natural frequency by favoring its replacement. So it will almost always be the lowest draft in the bucket that also has the largest occurrence, and thus will be replaced. This means hardly anything is done to equalize the occurrence of the higher drafts.

What I did was ignore the always-replace slot when calculating the draft histogram, and treat an entry as 'aged' (even if it was from the same search) when the occurrence of its draft was larger than that of the lowest draft. If you start with an empty table, it will initially be filled with each draft in proportion to its frequency in the search, i.e. highly dominated by the lowest draft. But then depth-preferred replacement starts to eat away at that lowest draft in favor of higher drafts. As soon as the higher drafts overtake it, they become the preferred replacement candidates, trimming them back to the size of the lowest draft.
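The two pieces that matter here can be sketched as follows (again with made-up names; the point is that the histogram only counts depth-preferred slots, and that over-representation relative to the lowest draft makes an entry freely replaceable):

```c
#include <stdint.h>

#define MAX_DRAFT    64
#define LOWEST_DRAFT 1    /* assumed lowest draft hashed (QS not hashed) */

typedef struct {
    uint64_t key;         /* 0 = empty */
    int8_t   draft;
    uint8_t  age;
} HashEntry;

static uint32_t draft_histogram[MAX_DRAFT];
static uint8_t  current_age;

/* An entry counts as 'aged' (freely replaceable) not only when it is
   empty or from an old search, but also when its draft currently
   occurs more often in the table than the lowest draft does. */
int replaceable(const HashEntry *e)
{
    return e->key == 0
        || e->age != current_age
        || draft_histogram[e->draft] > draft_histogram[LOWEST_DRAFT];
}

/* Keep the histogram consistent on every store into a depth-preferred
   slot; stores into the always-replace slot bypass this entirely. */
void store_depth_preferred(HashEntry *slot, uint64_t key, int draft)
{
    if (slot->key != 0)
        draft_histogram[slot->draft]--;   /* evicted entry leaves the stats */
    draft_histogram[draft]++;             /* new entry enters them */
    slot->key   = key;
    slot->draft = (int8_t)draft;
    slot->age   = current_age;
}
```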
