Programming issue: engine speed slowing over time

Discussion of chess software programming and technical issues.

Moderator: Ras

flok
Posts: 558
Joined: Tue Jul 03, 2018 10:19 am
Full name: Folkert van Heusden

Re: Programming issue: engine speed slowing over time

Post by flok »

Hai wrote: Fri Jul 01, 2022 4:28 pm
flok wrote: Fri Jul 01, 2022 8:27 am
dangi12012 wrote: Thu Jun 30, 2022 9:06 pm It's 100% not a Windows issue. Same as it's 100% not a compiler bug. It's everything else and the sun before any of those become true.
How do you know that?
That’s simple.
It's 100% not a Windows issue because I'm using a MacBook.
Then it can still be a hardware issue.
Or not necessarily an issue, but working as designed (e.g. throttling the speed when the CPU gets too hot).
Chessnut1071
Posts: 313
Joined: Tue Aug 03, 2021 2:41 pm
Full name: Bill Beame

Re: Programming issue: engine speed slowing over time

Post by Chessnut1071 »

dangi12012 wrote: Fri Jul 01, 2022 12:17 am
Chessnut1071 wrote: Thu Jun 30, 2022 10:00 pm
20-ply hangs

First, why aren't 16-ply and under affected? No degradation at all under 18-ply. Everything appears to be constant until 18-ply and 72,000,000 nodes visited.
If you send me the source I could take a look.
daniel.infuehr@live.de
The consensus seems to be that it's the garbage collector, which kicks in around 60 - 70,000,000 nodes into the engine solution. After 60,000,000 nodes the engine speed gradually decreases from 250,000 nodes/sec to under 300/sec about 20,000,000 nodes later. The mystery is that the degradation doesn't seem to happen with the other bitboard engine, only with the PEXT engine, and they're both using the same methods and static arrays.

I tried to disable the GC; however, it's part of the CLR and not under the program's direct control. I notice that many developers on other projects are having similar issues, but none of the C++ developers seem to complain about this. If you are working on a very long solution and using C# on .NET 6, are you experiencing similar degradation? Also, do you know of any tricks to keep the GC from rearing its ugly head?
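For what it's worth, the CLR does expose a couple of knobs from code: GCSettings.LatencyMode and the no-GC-region APIs can postpone or suspend collections, though they are no substitute for removing the allocations themselves. A minimal sketch (the 64 MB budget is only an illustrative number):

using System;
using System.Runtime;

static class GcControl
{
    // Runs a search delegate with the GC reined in as far as the CLR allows.
    public static void RunWithMinimalGc(Action search)
    {
        // Ask the runtime to avoid blocking gen-2 collections while we search.
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        // Try to suspend GC entirely for a fixed allocation budget (64 MB here is
        // only an example; an oversized budget throws, and exceeding the budget
        // during the search ends the region).
        bool noGc = GC.TryStartNoGCRegion(64L * 1024 * 1024);
        try
        {
            search();
        }
        finally
        {
            // Only end the region if it is still in effect.
            if (noGc && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
    }
}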
Ras
Posts: 2696
Joined: Tue Aug 30, 2016 8:19 pm
Full name: Rasmus Althoff

Re: Programming issue: engine speed slowing over time

Post by Ras »

Chessnut1071 wrote: Fri Jul 01, 2022 10:09 pm Also, do you know of any tricks to keep the GC from rearing its ugly head?
Easy: avoid any dynamic memory allocation (i.e. on the heap) within the search tree.
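In C# that mostly means allocating every search buffer once, up front, and reusing it at every node. A rough sketch of the idea, with placeholder type and method names rather than anyone's actual engine code:

struct Move { public byte From, To, Flags; }   // a value type: stored inline, never on the GC heap by itself

class Board { /* position state omitted in this sketch */ }

class Search
{
    const int MaxPly = 64;
    const int MaxMoves = 256;

    // Allocated once at startup and reused for the whole game,
    // so the garbage collector has nothing to clean up during search.
    readonly Move[][] _moves = new Move[MaxPly][];

    public Search()
    {
        for (int ply = 0; ply < MaxPly; ply++)
            _moves[ply] = new Move[MaxMoves];
    }

    public int Negamax(Board board, int depth, int ply, int alpha, int beta)
    {
        if (depth == 0) return Evaluate(board);

        // Generate into the preallocated per-ply buffer: no 'new', no List<Move>,
        // no LINQ - nothing here touches the managed heap.
        Move[] moves = _moves[ply];
        int count = GenerateMoves(board, moves);

        for (int i = 0; i < count; i++)
        {
            MakeMove(board, moves[i]);
            int score = -Negamax(board, depth - 1, ply + 1, -beta, -alpha);
            UnmakeMove(board, moves[i]);
            if (score >= beta) return beta;
            if (score > alpha) alpha = score;
        }
        return alpha;
    }

    // Stubs so the sketch is self-contained; a real engine supplies these.
    int Evaluate(Board b) => 0;
    int GenerateMoves(Board b, Move[] buffer) => 0;
    void MakeMove(Board b, Move m) { }
    void UnmakeMove(Board b, Move m) { }
}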
Rasmus Althoff
https://www.ct800.net
dangi12012
Posts: 1062
Joined: Tue Apr 28, 2020 10:03 pm
Full name: Daniel Infuehr

Re: Programming issue: engine speed slowing over time

Post by dangi12012 »

Chessnut1071 wrote: Fri Jul 01, 2022 10:09 pm I tried to disable the GC; however, it's part of the CLR and not under the program's direct control. I notice that many developers on other projects are having similar issues, but none of the C++ developers seem to complain about this. If you are working on a very long solution and using C# on .NET 6, are you experiencing similar degradation? Also, do you know of any tricks to keep the GC from rearing its ugly head?
If you are using new at all during search, that's the culprit: you need to have NO dynamic allocations happening inside your search.
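One cheap way to verify that, rather than guess, is to compare the runtime's collection counters before and after a search; any non-zero difference means something in the tree still allocates. A small helper using the standard GC.CollectionCount API:

using System;

static class GcCheck
{
    // Run the search and report whether any garbage collections happened meanwhile.
    public static void VerifyNoGc(Action search)
    {
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        search();

        Console.WriteLine($"gen0 collections during search: {GC.CollectionCount(0) - gen0}");
        Console.WriteLine($"gen1 collections during search: {GC.CollectionCount(1) - gen1}");
        Console.WriteLine($"gen2 collections during search: {GC.CollectionCount(2) - gen2}");
        // Any non-zero number means something inside the search tree still calls 'new'.
    }
}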
Worlds-fastest-Bitboard-Chess-Movegenerator
Daniel Inführ - Software Developer
Chessnut1071
Posts: 313
Joined: Tue Aug 03, 2021 2:41 pm
Full name: Bill Beame

Re: Programming issue: engine speed slowing over time

Post by Chessnut1071 »

Ras wrote: Fri Jul 01, 2022 10:45 pm
Chessnut1071 wrote: Fri Jul 01, 2022 10:09 pm Also, do you know of any tricks to keep the GC from rearing its ugly head?
Easy: avoid any dynamic memory allocation (i.e. on the heap) within the search tree.
Thank you, thank you, thank you, that was the problem. Now I feel stupid. I didn't realize the hash was growing that fast and thought I had a limit on it. If only I had more memory. My kingdom for another terabyte! Thx again, I haven't slept for 36 hours.
lithander
Posts: 915
Joined: Sun Dec 27, 2020 2:40 am
Location: Bremen, Germany
Full name: Thomas Jahn

Re: Programming issue: engine speed slowing over time

Post by lithander »

so in the end it was what I was suggesting in the very first reply to this thread? :?
Minimal Chess (simple, open source, C#) - Youtube & Github
Leorik (competitive, in active development, C#) - Github & Lichess
Chessnut1071
Posts: 313
Joined: Tue Aug 03, 2021 2:41 pm
Full name: Bill Beame

Re: Programming issue: engine speed slowing over time

Post by Chessnut1071 »

lithander wrote: Sat Jul 02, 2022 2:44 am so in the end it was what I was suggesting in the very first reply to this thread? :?
It was related to the Zobrist hash table, but that table wasn't the issue. When I got to 18-ply and higher, my gigantic TT had a huge number of collisions on the same key, which made me search through thousands of keys in the bucket. There weren't that many hits under 18-ply, so I wasn't aware of the problem. Now I zero out the key bucket when it gets over 150 collisions.

So, you suggest 10 MB for the TT size? I'm currently using an array of 240,000,000 8-byte ulongs and it seems too small. How do you do it in 10 MB?
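A rough back-of-the-envelope sketch of how that adds up (the 16-byte entry size is an assumption, not any particular engine's layout): a small table is indexed by reducing the key to a slot and relies on replacement, not on storing every position.

using System;

// Back-of-the-envelope only; the 16-byte entry is an assumption.
const int entryBytes = 16;                      // e.g. 8-byte Zobrist key + 8 bytes of depth/score/move
const long tableBytes = 10L * 1024 * 1024;      // a 10 MB table
long entryCount = tableBytes / entryBytes;      // about 655,000 entries

// For comparison: 240,000,000 ulongs * 8 bytes is roughly 1.9 GB.
// Collisions are resolved by overwriting, not by scanning a bucket of keys.
ulong zobristKey = 0x9E3779B97F4A7C15;          // an arbitrary example key
long slot = (long)(zobristKey % (ulong)entryCount);

Console.WriteLine($"{entryCount} entries, example key maps to slot {slot}");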

thx to everyone who helped, this was driving me bananas.
hgm
Posts: 28353
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Programming issue: engine speed slowing over time

Post by hgm »

So it was what I suggested: inadequate design of the hash-table replacement. Buckets of thousands of entries are absurd. Even buckets of 1 (i.e. always-replace) would be far better.
Chessnut1071
Posts: 313
Joined: Tue Aug 03, 2021 2:41 pm
Full name: Bill Beame

Re: Programming issue: engine speed slowing over time

Post by Chessnut1071 »

hgm wrote: Sat Jul 02, 2022 8:06 am So it was what I suggested: inadequate design of the hash-table replacement. Buckets of thousands of entries are absurd. Even buckets of 1 (i.e. always-replace) would be far better.
Are you sure we're referring to the same thing? There's not enough memory for a hash table that can accommodate all positions without collisions. So some keys have to share duty, and that list grows the deeper the search goes. There are special cases where replacement will bury the solution, like certain double moves or en passant. When you have a dozen different positions all using the same key, how do you just replace instead of keeping a stack? Are you referring to a Zobrist hash table? I'd be interested to see how you handle collisions.
hgm
Posts: 28353
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Programming issue: engine speed slowing over time

Post by hgm »

Since, as you say, the hash table cannot be made large enough to hold every position in a long search, one keeps only the positions that are most important and overwrites those that are less important to make room for them. It turns out that the most important positions are recently first-visited positions, because these have the highest chance of being visited again in the very near future. A secondary priority is to keep entries resulting from a deep search, as these save a great deal of effort when they are visited again by supplying the result of that search immediately.

With a large table the 'always-replace' scheme already works satisfactorily: you resolve any collision by just overwriting what is already there. So never rehash, never stack. You only probe one entry in the table; it either has the same key (a hit), or it has a different one and you overwrite it with the new result. This tends to keep the most recent results in the table.
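As a concrete illustration of that always-replace scheme, a minimal sketch (the entry layout and names are made up for the example, not taken from any engine):

// A single-probe, always-replace table.
struct TTEntry
{
    public ulong Key;        // the full Zobrist key, so a probe can tell a hit from a collision
    public int Score;
    public int Depth;
    public ushort BestMove;
}

class TranspositionTable
{
    readonly TTEntry[] _entries;
    readonly ulong _mask;

    public TranspositionTable(int size)      // size must be a power of two
    {
        _entries = new TTEntry[size];
        _mask = (ulong)size - 1;
    }

    public bool Probe(ulong key, out TTEntry entry)
    {
        entry = _entries[key & _mask];       // exactly one slot is ever inspected
        return entry.Key == key;             // a different key simply counts as a miss
    }

    public void Store(ulong key, TTEntry entry)
    {
        entry.Key = key;
        _entries[key & _mask] = entry;       // always overwrite: the newest result wins
    }
}

A probe costs a single array access, and the replacement policy is simply "newest wins".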

A slightly refined scheme tries to give some protection to entries from a high-depth search, so they are not as easily overwritten as those from a d=0 or d=1 search. In this scheme you have buckets of 2, so you can resolve the first collision without overwriting anything. When a third position collides, you overwrite the entry with the lowest depth of those already there (no matter whether that depth is larger than that of the new entry). This way the highest depth that ever mapped into that bucket is preserved forever.
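And a sketch of that bucket-of-2, depth-protecting variant, reusing the TTEntry struct from the sketch above (again purely illustrative):

class TwoSlotTable
{
    readonly TTEntry[] _entries;   // stored as pairs: bucket i owns slots 2*i and 2*i+1
    readonly ulong _bucketMask;

    public TwoSlotTable(int buckets)          // buckets must be a power of two
    {
        _entries = new TTEntry[buckets * 2];
        _bucketMask = (ulong)buckets - 1;
    }

    public void Store(ulong key, TTEntry entry)
    {
        entry.Key = key;
        long slot0 = (long)(key & _bucketMask) * 2;
        long slot1 = slot0 + 1;

        // If this position is already present, just update it in place.
        if (_entries[slot0].Key == key) { _entries[slot0] = entry; return; }
        if (_entries[slot1].Key == key) { _entries[slot1] = entry; return; }

        // Otherwise evict whichever occupant has the lower depth, so the deepest
        // result that ever mapped into this bucket survives.
        long victim = _entries[slot0].Depth <= _entries[slot1].Depth ? slot0 : slot1;
        _entries[victim] = entry;
    }
}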

Note that it is no big loss if two entries of high depth occasionally collide. This happens very rarely (as such entries are themselves already rare). When it does happen, one of the two gets 'quickly' lost by being overwritten with a d=0 or d=1 entry. Say you lose the result of a d=10 search that way. Then during the search you don't get a hash hit that you would otherwise have gotten, and have to repeat the d=10 search. But most likely all of its d=9 daughters are still in the hash table and will now give hash hits, so in fact you are only doing a d=1 search.