The Grand Chess Tree

Discussion of chess software programming and technical issues.

Moderator: Ras

Sapling
Posts: 9
Joined: Sun Oct 13, 2024 7:31 pm
Full name: Tim Jones

The Grand Chess Tree

Post by Sapling »

I wanted to share with you all the latest project I've been working on.

It's called The Grand Chess Tree, and it's a public, distributed perft. My initial goal is to fill in the perft results table on the Chess Programming Wiki to at least depth 13 with the current CPU-bound algorithm, before switching to a GPU approach and seeing how far I can push it.
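For readers new to the term, perft(d) simply counts the leaf nodes of the legal-move tree to depth d. A minimal sketch of the recursion (with the usual bulk-counting shortcut at the horizon); the `ToyBoard` below is a made-up stand-in so the snippet runs without a real move generator — an actual engine would use bitboards:

```python
def perft(board, depth):
    """Count leaf nodes of the legal-move tree to the given depth."""
    if depth == 0:
        return 1
    moves = board.legal_moves()
    if depth == 1:
        return len(moves)   # bulk counting: no need to recurse at the horizon
    total = 0
    for move in moves:
        board.make(move)
        total += perft(board, depth - 1)
        board.unmake(move)
    return total

# Made-up stand-in so the sketch runs without a chess move generator:
# every "position" has exactly `branching` legal moves.
class ToyBoard:
    def __init__(self, branching):
        self.branching = branching
    def legal_moves(self):
        return list(range(self.branching))
    def make(self, move):
        pass
    def unmake(self, move):
        pass

print(perft(ToyBoard(20), 3))  # 20**3 = 8000 (real chess: perft(3) = 8902)
```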

It's all open source on GitHub, and I'm hoping to find others interested in this project who want to collaborate or contribute compute resources to the search.

I'm also considering additional chess-related tasks that can be distributed: things like proof games, tablebases and more!
jefk
Posts: 886
Joined: Sun Jul 25, 2010 10:07 pm
Location: the Netherlands
Full name: Jef Kaan

Re: The Grand Chess Tree

Post by jefk »

This looks interesting, for me at least (and some others I guess, e.g. in the 'Tromp' thread). When inspecting the number of mate positions vs. the total number of positions, it's not so clear anymore that the total number of positions is increasing much faster than the number of mate positions; that would IMO be a strong indication of the drawish nature of the game (compare with e.g. Connect 4 (7x6), where the number of winning possibilities is relatively increasing much faster, as I discovered/demonstrated some time ago).

Further exploration might possibly reveal a pattern. Note that if there were a (forced) win for White, it would have to occur within the maximum number of moves for a chess game, which nowadays is estimated at around six thousand (my opinion of course remains unchanged, namely that it's extremely unlikely this can occur). Note that a forced win would mean that the number of mating positions has to become a significant portion of the total number of positions, otherwise it would *not* be a forced win. This is quite unlikely of course; even if there were some 'boa constrictor' type of plan for White, achieving winning positions in the end via zugzwang, then IMO there are still enough degrees of freedom for Black to avoid such plans in the beginning(*), thus maintaining a draw.

(*) We can see this already in e.g. current opening theory (e.g. when inspecting the 'modernized' opening books of Thinkers Publishing, and/or when looking at the 'Chinese database', chessdb.cn).
Ajedrecista
Posts: 2056
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: The Grand Chess Tree.

Post by Ajedrecista »

Hello Tim:
Sapling wrote: Mon Feb 03, 2025 8:36 pm I wanted to share with you all the latest project I've been working on.

It's called The Grand Chess Tree, and it's a public, distributed perft. My initial goal is to fill in the perft results table on the Chess Programming Wiki to at least depth 13 with the current CPU-bound algorithm, before switching to a GPU approach and seeing how far I can push it.

It's all open source on Github and I'm hoping to find others interested in this project who want to collaborate / contribute compute resources to the search.

I'm also considering additional chess related tasks that can be distributed, things like proof games, table bases and more!
Excellent site! I wish you good luck and correct results.

I refreshed the site each minute, and the remaining times look a little chaotic to me:

Code: Select all

Updating the web each minute:

Nodes / Second   Tasks / Minute   Completed Tasks   Time Remaining
    27.9b             27               70801            18h 47m
    23.2b             26               70826            19h 29m
    14.7b             15               70841              1d
    20.9b             18               70859              1d
    19.7b             23               70882            21h 59m
    20.1b             24               70906            21h 3m
Are the tasks of equal or similar size? If so, may I suggest a linear, smoother way to estimate the remaining time? Like here:

Re: Help us make Cute Chess GUI better.

Code: Select all

n: number of tasks of the whole level.
x: number of completed tasks at time t.
t: current elapsed time after x completed tasks.
T: estimated total time.
T - t: estimated remaining time.

T = (n/x)*t; T - t = (n/x - 1)*t

-----------------------

Example: suppose that the current elapsed time was 2880 minutes = 48 hours at 70801 completed tasks. Then:

Completed Tasks   Time Remaining
     70801        (101240/70801-1)*2880 ~ 1238.18 minutes ~ 20h 38' 11"
     70826        (101240/70826-1)*2881 ~ 1237.15 minutes ~ 20h 37' 09"
     70841        (101240/70841-1)*2882 ~ 1236.71 minutes ~ 20h 36' 43"
     70859        (101240/70859-1)*2883 ~ 1236.09 minutes ~ 20h 36' 06"
     70882        (101240/70882-1)*2884 ~ 1235.19 minutes ~ 20h 35' 11"
     70906        (101240/70906-1)*2885 ~ 1234.22 minutes ~ 20h 34' 13"
One can expect big jumps with few finished tasks, but when the progress is around 70%, as it is at the moment I write, one should not expect those big jumps. If all the workers disconnect, the completed-task count will stall, but the elapsed time will keep rising, hence the remaining-time estimate will rise, taking care of the full disconnect. The same reasoning works with increasing or decreasing flows of workers, tasks per minute or nodes per second. The condition is that the tasks are of equal or almost equal size, precisely to avoid estimates based on instantaneous speeds, such as downloads of big files in the Chrome web browser, where one can see estimates of 3 hours, followed by 50 minutes, followed by 6 hours, all within a few seconds. That is, estimate with the whole known history instead of just the last minute or whatever is currently used.
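For illustration, the whole-history formula above in a few lines of Python, reproducing the first row of the example table (the 2880-minute elapsed time is the assumed figure from the example, not a measured one):

```python
def remaining_minutes(n_total, x_done, t_elapsed):
    """Whole-history linear estimate: T - t = (n/x - 1) * t.

    Assumes tasks are of equal or near-equal size, so progress is
    proportional to the completed-task count."""
    if x_done == 0:
        return float("inf")   # no progress yet, no estimate possible
    return (n_total / x_done - 1) * t_elapsed

# First row of the example table: 70801 of 101240 tasks done after 2880 minutes.
print(round(remaining_minutes(101240, 70801, 2880), 2))  # ~1238.18 minutes
```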

Improvement: the contributors list shows right now 1691.0e+12 nodes for 71.8k tasks (an average of circa 2.36e+10 nodes/task), 5.0e+12 nodes for 302 tasks (circa 1.66e+10 nodes/task) and 2.3e+12 nodes for 61 tasks (circa 3.77e+10 nodes/task). I think that the formula from above still holds with n and x being nodes instead of tasks, which gives more realistic estimates, since we are counting nodes after all.

Regards from Spain.

Ajedrecista.
Sapling
Posts: 9
Joined: Sun Oct 13, 2024 7:31 pm
Full name: Tim Jones

Re: The Grand Chess Tree

Post by Sapling »

Thank you both very much for taking an interest!

@jefk that's really interesting, I'm just reading up on the Tromp thread - fascinating stuff. Though one can only dream of perft(6k).

I suppose the concept is controversial in the chess community, considering the next reply from Ajedrecista has 'Chess will never be solved.' in the footer.

@Ajedrecista you're spot on, the equation for estimating the time is extremely naive. It's basically just: (total_tasks - completed_tasks) / tasks_per_minute
And tasks_per_minute just looks at the number of tasks that finished in the last minute. I knew it was a terrible method from the start; I just put it in as a placeholder for the UI. I do have a background with time-series databases, which I will integrate to give much more accurate analytics. My original idea was to take an exponential moving average of the task duration (over the past few hours), then divide the total remaining tasks by it.
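A sketch of that exponential-moving-average idea (the smoothing factor `alpha` and the sample durations are made-up illustration values; with several workers finishing tasks in parallel, the result would also be divided by the worker count):

```python
def ema(samples, alpha=0.1):
    """Exponential moving average: each new sample gets weight alpha."""
    avg = None
    for s in samples:
        avg = s if avg is None else alpha * s + (1 - alpha) * avg
    return avg

def estimate_remaining_seconds(task_durations, tasks_left, alpha=0.1):
    """ETA = remaining tasks x smoothed per-task duration (seconds)."""
    return tasks_left * ema(task_durations, alpha)

# Made-up durations drifting from 60 s toward 80 s per task:
print(estimate_remaining_seconds([60, 62, 65, 70, 80], tasks_left=1000))
```

Unlike the last-minute snapshot, the EMA reacts smoothly to speed drift while still forgetting ancient history.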

I really appreciate your suggestions / insights, I'll get a better estimate in when calculating depth 12.

Once I get to depth 13, if I'm still working on the GPU-accelerated implementation I'll put it to work on other positions such as Kiwipete. But I'd really love to hear if the community has any other experiments to run that require a large amount of compute.
phhnguyen
Posts: 1507
Joined: Wed Apr 21, 2010 4:58 am
Location: Australia
Full name: Nguyen Hong Pham

Re: The Grand Chess Tree

Post by phhnguyen »

Sapling wrote: Mon Feb 03, 2025 8:36 pm to at least depth 13 with the current CPU bound algorithm
Nice work. I will support you by updating CPW with your results.

Can you tell us more about your program, such as the programming language and the board representation (bitboard?)? What are your special codes/algorithms for Perft (using hash tables…)?

I have a quick estimate that with your current hardware and software, and based on the speed of 20B NPS I have seen on your GitHub, you may need 3 years to complete Perft(13). That is so long, so boring and tiring, and expensive (dedicating at least one good computer for that period, electricity bills, hardware fixes…). Are you aware of and ready for those challenges?
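That three-year figure is easy to reproduce as a back-of-envelope check: Perft(13) from the start position is the known count 1,981,066,775,000,396,239 nodes (per the Chessprogramming Wiki), so at a sustained 20B NPS:

```python
# Known node count for Perft(13) from the start position (Chessprogramming Wiki):
PERFT_13 = 1_981_066_775_000_396_239
NPS = 20e9   # the sustained ~20 billion nodes/second mentioned above

seconds = PERFT_13 / NPS
years = seconds / (365.25 * 24 * 3600)
print(round(years, 2))  # ~3.14 years at a steady 20B NPS
```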
https://banksiagui.com
The most feature-rich chess GUI, based on the open-source Banksia chess tournament manager
jefk
Posts: 886
Joined: Sun Jul 25, 2010 10:07 pm
Location: the Netherlands
Full name: Jef Kaan

Re: The Grand Chess Tree

Post by jefk »

Sapling wrote
I suppose the concept is controversial in the chess community
Well, only some of my comments may be 'controversial' (for some), not the pure research (e.g. calculating trees) itself, which you are doing. Anyway, the comment by Ajedrecista that chess will 'never be solved' can only be applicable to a so-called 'strong solution'. Such concepts (game solving) shouldn't be controversial, because they are a valid topic for discussion and research, e.g. within the ICGA.
https://www.chessprogramming.org/ICGA
Note that the latest ICGA president (vd Herik, after Schaeffer, who solved checkers) expects that chess will be 'weakly solved' in a few decades (and probably in 15 years or so), so it's not relevant what a certain 'Ajedrecista' here thinks; in fact IMO he should refrain from posting such a provocative (and partly nonsensical) signature here (but that's something for the moderator).

In general, indeed, I've noticed a few times that I can invoke some harsh, aggressive (and sometimes irrational) reactions in this forum... (which is why I did some of my brainstorming (*) in the KG area, not in the general topics themselves :wink: ).

(*) Only regarding an 'ultraweak solution'; this can hardly be controversial. There have been some discussions (e.g. with syzygy) about the definition, but the issue does not seem to be 100 pct -fundamentally- resolved (yet) :roll: (although the indications that it's a draw with perfect play are overwhelming, e.g. in top correspondence chess).

Anyway, talkchess doesn't seem the right forum for an academic discussion about such topics (game 'solving', as earlier done for checkers, and now a research area for chess), and in general I refrain from posting ideas, research, or new findings in this forum; but I may possibly write about it (either in academic wording, or more popularly) somewhere else.
PS: whether it's 'controversial' (for some chess players or programmers) would not be relevant; there are a lot of things that are controversial in science, e.g. physics, notably the validity of string theory, the interpretation of quantum mechanics, and much more. But a topic being 'controversial' shouldn't be a hindrance to writing about it; on the contrary, maybe... :idea:
Ajedrecista
Posts: 2056
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: The Grand Chess Tree.

Post by Ajedrecista »

Hello:
phhnguyen wrote: Wed Feb 05, 2025 11:54 am
Sapling wrote: Mon Feb 03, 2025 8:36 pm to at least depth 13 with the current CPU bound algorithm
Nice work. I will support you by updating CPW with your results.

Can you tell us more about your program, such as the programming language and the board representation (bitboard?)? What are your special codes/algorithms for Perft (using hash tables…)?

I have a quick estimate that with your current hardware and software, and based on the speed of 20B NPS I have seen on your GitHub, you may need 3 years to complete Perft(13). That is so long, so boring and tiring, and expensive (dedicating at least one good computer for that period, electricity bills, hardware fixes…). Are you aware of and ready for those challenges?
I agree, something like one month for Perft(12) and three years for Perft(13) at the current speed. Answering Tim's reply:
Sapling wrote: Tue Feb 04, 2025 11:25 pm[...]

[...] But I'd really love to hear if the community has any other experiments to run that require a large amount of compute.
I remember an attempted distributed project to compute raw, divided Perft(14) —without counting how many nodes were captures, en passant, checks, ...—. It was not successful:

Perft helpers
Distributed Perft(14) Calculation

There were previous trial runs with Perft(11) and Perft(12) to set up the platform:

Distributed Perft(11) Calculation
Distributed Perft(12) Calculation

Given the extent of the project, I think that computing the full statistics of Perft(12) is feasible [four to six weeks, keeping the computing level of Perft(11)], always keeping an eye on electricity bills, as said before by phhnguyen. OTOH, I consider the next level unfeasible due to the low number of contributors; do not expect something like the Stockfish Testing Framework. Remembering again: dollars, dollars, dollars, or your currency.

By the way, I cannot access your GitHub repository right now, nor the site with the full statistics of Perft(11), where all tasks were completed.

Regards from Spain.

Ajedrecista.
Sapling
Posts: 9
Joined: Sun Oct 13, 2024 7:31 pm
Full name: Tim Jones

Re: The Grand Chess Tree

Post by Sapling »

Thank you @phhnguyen! I've just started writing up the technical documentation here:
https://timmoth.github.io/grandchesstree/technical/

It's a bit bare at the moment, but if you let me know any additional questions you might have, it'll help guide me to fill out the important / missing bits.

WRT the current speed, I've just added another layer of client-side caching, which has increased the rate on three of my machines to 30bnps. I haven't tested it yet on depth 13, and in the ~20 days it's going to take to process depth 12 I'll hopefully have time to implement a few other ideas to improve caching. Even if I throw all my compute at the problem, it'll still take a long time to get to depth 13 on my own. I'm hoping I'll be able to crowd-source additional compute from other interested parties, so depth 13 with the CPU implementation is largely down to others' involvement. I will also be attempting a GPU implementation, which should speed things up dramatically.
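For context, this kind of caching is typically a transposition table keyed by (position hash, depth): identical subtrees reached via different move orders are counted once. A generic sketch, not Tim's actual implementation — the `ToyBoard` is a made-up stand-in whose integer "positions" transpose (1+2 reaches the same state as 2+1), and a real engine would key on a 64-bit Zobrist hash:

```python
def perft_cached(board, depth, cache):
    """Perft with a transposition table: identical (position, depth)
    pairs are computed once and reused."""
    if depth == 0:
        return 1
    key = (board.hash(), depth)   # real engines use a 64-bit Zobrist key
    if key in cache:
        return cache[key]
    total = 0
    for move in board.legal_moves():
        board.make(move)
        total += perft_cached(board, depth - 1, cache)
        board.unmake(move)
    cache[key] = total
    return total

# Made-up stand-in: the "position" is just an integer, moves add 1 or 2,
# so different move orders transpose to the same position.
class ToyBoard:
    def __init__(self):
        self.state = 0
    def hash(self):
        return self.state
    def legal_moves(self):
        return [1, 2]
    def make(self, move):
        self.state += move
    def unmake(self, move):
        self.state -= move

print(perft_cached(ToyBoard(), 12, {}))  # 2**12 = 4096
```

The win comes from the transpositions: without the cache this toy walks all 4096 paths, while with it each distinct (state, depth) pair is expanded only once.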

@jefk do you have a link to where you post ideas / research? I'd be very interested to hear!

@Ajedrecista thanks for those resources, yeah, perft(13) onwards would certainly be a huge feat.
I started perft 12 this morning (I'll update the UI to reflect it after I put my daughter to bed - and also start work on the improved time estimates!). But from my initial calculations it should take around 20 days or so.

For perft 13, certainly some large optimisations will be required, at least a rewrite in Rust or C++ (currently using dotnet), but hopefully a CUDA implementation should make it doable even with my hardware alone.

I'm not sure why you wouldn't be able to access my GitHub or grandchesstree.com; neither has had any changes to permissions or hosting. Could you try again and let me know if it's still an issue?

WebApp: https://grandchesstree.com/
Github: https://github.com/Timmoth/grandchesstree
Docs: https://timmoth.github.io/grandchesstree/
jefk
Posts: 886
Joined: Sun Jul 25, 2010 10:07 pm
Location: the Netherlands
Full name: Jef Kaan

Re: The Grand Chess Tree

Post by jefk »

Sapling wrote:
jefk do you have a link to where you post ideas / research? I'd be very interested to hear!
Some of the latest stuff was posted here:
viewtopic.php?t=84307

Before that there had been other discussions, also on talkchess, hardly worthwhile to mention.
Ajedrecista
Posts: 2056
Joined: Wed Jul 13, 2011 9:04 pm
Location: Madrid, Spain.

Re: The Grand Chess Tree.

Post by Ajedrecista »

Hello Tim:
Sapling wrote: Wed Feb 05, 2025 8:04 pm[...]


WRT the current speed, I've just added another layer of client-side caching, which has increased the rate on three of my machines to 30bnps. I haven't tested it yet on depth 13, and in the ~20 days it's going to take to process depth 12 I'll hopefully have time to implement a few other ideas to improve caching. Even if I throw all my compute at the problem, it'll still take a long time to get to depth 13 on my own. I'm hoping I'll be able to crowd-source additional compute from other interested parties, so depth 13 with the CPU implementation is largely down to others' involvement. I will also be attempting a GPU implementation, which should speed things up dramatically.

[...]

@Ajedrecista thanks for those resources, yeah, perft(13) onwards would certainly be a huge feat.
I started perft 12 this morning (I'll update the UI to reflect it after I put my daughter to bed - and also start work on the improved time estimates!). But from my initial calculations it should take around 20 days or so.

For perft 13, certainly some large optimisations will be required, at least a rewrite in Rust or C++ (currently using dotnet), but hopefully a CUDA implementation should make it doable even with my hardware alone.

I'm not sure why you wouldn't be able to access my GitHub or grandchesstree.com; neither has had any changes to permissions or hosting. Could you try again and let me know if it's still an issue?

WebApp: https://grandchesstree.com/
Github: https://github.com/Timmoth/grandchesstree
Docs: https://timmoth.github.io/grandchesstree/
I am able to connect to your sites again. I do not know what happened, because I could browse other websites.

Your guess of 20 days for Perft(12) full statistics looks right if you were able to increase the former speed by 50%, so you save 1/3 of the former time. Level 11 has been known for some years, but I am not aware of level 12 results, so we are closer than ever to getting new data! I see that you have already completed 1687 tasks out of 101240 (circa 1.67% or 1/60 of the total, so you probably started around 8 hours ago = 20/60 of a day). :-)

Regards from Spain.

Ajedrecista.