TCEC S29 division P started
Moderator: Ras
-
Damir
- Posts: 2933
- Joined: Mon Feb 11, 2008 3:53 pm
- Location: Denmark
- Full name: Damir Desevac
Re: TCEC S29 division P started
It has nothing to do with hardware, but with architecture. Lc0 is simply designed in a way that cannot match Stockfish's performance.
-
Norm Pollock
- Posts: 1087
- Joined: Thu Mar 09, 2006 4:15 pm
- Location: Long Island, NY, USA
Re: TCEC S29 division P started
S29-divisionP was a lively and interesting tournament.
Some observations:
The 2 promoted engines from S29-League1, Reckless and Torch, finished #2 and #5.
The 2 promoted engines from S28-League1, PlentyChess and Integral, finished #4 and #6.
Perennial super-engines Stockfish and LcZero finished #1 and #3.
I don't understand how PlentyChess had 2 engine crashes early on, and then was fine.
The number of decisive games (wins) increased, indicating more bias in the openings. All in all, there were 91 wins in 224 games, but really 89 in 222 (40.1%) since there were 2 crashes by PlentyChess. And importantly, White had all the wins, Black had zero wins.
The finals should be great!
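The decisive-game arithmetic in the observations above can be verified in a few lines; a minimal check (Python, figures taken straight from the post):

```python
# Decisive-game rate from the figures quoted above: 91 wins in 224 games,
# minus the 2 PlentyChess crash games, which are discounted from the tally.
total_games = 224
total_wins = 91
crash_games = 2  # two crashes by PlentyChess, excluded

wins = total_wins - crash_games    # 89
games = total_games - crash_games  # 222
print(f"{wins}/{games} = {wins / games:.1%}")  # → 89/222 = 40.1%
```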
-
Dann Corbit
- Posts: 12869
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: TCEC S29 division P started
If you are discussing millions of operations per Elo generated, then you are definitely correct.
On the other hand, I know some people who have very high-end GPUs, and LC0 on those platforms is amazing, even tactically. My poor old 2080 Super GPUs can't begin to compete with those, even given a huge time head start.
Once the memory-sharing architecture used on the commercial GPUs becomes available on consumer GPUs, we will see a new sort of power radiating from GPU-powered chess programs.
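The cost being discussed can be sketched with a back-of-envelope model; all numbers below are assumptions for illustration, not measurements, and the input/output sizes are rough Lc0-style guesses:

```python
# Illustrative model of why PCIe round trips hurt small NN-eval batches
# on a discrete GPU -- the overhead a shared CPU/GPU memory space avoids.
PCIE_LATENCY_US = 10.0         # assumed fixed cost per host<->device transfer
PCIE_BW_BYTES_US = 25_000.0    # ~25 GB/s effective = 25000 bytes/us (assumption)
INPUT_BYTES = 112 * 8 * 8 * 4  # Lc0-style input planes, float32 (assumption)
OUTPUT_BYTES = (1858 + 1) * 4  # policy + value head, float32 (assumption)

def transfer_us(batch):
    """Upload inputs + download outputs: two PCIe transactions per batch."""
    up = PCIE_LATENCY_US + batch * INPUT_BYTES / PCIE_BW_BYTES_US
    down = PCIE_LATENCY_US + batch * OUTPUT_BYTES / PCIE_BW_BYTES_US
    return up + down

for batch in (1, 32, 1024):
    per_pos = transfer_us(batch) / batch
    print(f"batch {batch:5d}: {per_pos:6.2f} us of PCIe time per position")
```

Under these assumed numbers the fixed per-transfer latency dominates at batch size 1 and amortizes away at large batches, which is one reason both big batching and unified memory help.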
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
-
lucario6607
- Posts: 53
- Joined: Sun May 19, 2024 5:44 am
- Full name: Kolby Mcgowan
Re: TCEC S29 division P started
Dann Corbit wrote: ↑Fri Mar 27, 2026 2:01 am
If you are discussing millions of operations per Elo generated, then you are definitely correct.
On the other hand I know some people who have very high end GPUs and LC0 on those platforms is amazing, even tactically. My poor old 2080 Super GPUs can't begin to compete with those, even given a huge time headstart.
Once the memory sharing architecture used on the commercial GPUs becomes available on the consumer GPUs, we will see a new sort of power radiating from GPU powered chess programs.
Memory sharing architecture?
-
Dann Corbit
- Posts: 12869
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: TCEC S29 division P started
The AMD architecture allows transparent access to the video RAM and system RAM to both the CPUs and GPUs. They have already implemented it for their AI workhorse type GPUs. I think Nvidia is doing something similar. Maybe Srdja can comment.
The big problem with commodity GPUs is that they spend all their time copying work to and from video RAM
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
-
smatovic
- Posts: 3648
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: TCEC S29 division P started
Dann Corbit wrote: ↑Fri Mar 27, 2026 8:04 am
The AMD architecture allows transparent access to the video RAM and system RAM to both the CPUs and GPUs. They have already implemented it for their AI workhorse type GPUs. I think Nvidia is doing something similar. Maybe Srdja can comment.
The big problem with commodity GPUs is that they spend all their time copying work to and from video RAM
Hehe, I am meanwhile out of the GPGPU loop. I do not read the white papers anymore. There is AMD Infinity Fabric, there is Nvidia NVLink, there is CXL over PCIe. If you want to share the memory, you have to connect the processing units to it somehow. Apple offers impressive memory bandwidth for CPUs, and it has a true unified memory architecture across CPU/GPU/NPU; no wonder, if we consider that the M-series is basically a SoC.
--
Srdja
-
smatovic
- Posts: 3648
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: TCEC S29 division P started
smatovic wrote: ↑Fri Mar 27, 2026 8:54 am
Dann Corbit wrote: ↑Fri Mar 27, 2026 8:04 am
The AMD architecture allows transparent access to the video RAM and system RAM to both the CPUs and GPUs. They have already implemented it for their AI workhorse type GPUs. I think Nvidia is doing something similar. Maybe Srdja can comment.
The big problem with commodity GPUs is that they spend all their time copying work to and from video RAM
Hehe, I am meanwhile out of the GPGPU loop. I do not read the white papers anymore. There is AMD Infinity Fabric, there is Nvidia NVLink, there is CXL over PCIe. If you want to share the memory, you have to connect the processing units to it somehow. Apple offers impressive memory bandwidth for CPUs, and it has a true unified memory architecture across CPU/GPU/NPU; no wonder, if we consider that the M-series is basically a SoC.
--
Srdja
Haha, I am already outdated, you were right: AMD and others are working on UALink and Ultra Ethernet, the former for scale-up, the latter for scale-out, for HPC/AI.
https://en.wikipedia.org/wiki/UALink
https://ualinkconsortium.org/
https://ultraethernet.org/
--
Srdja
-
lucario6607
- Posts: 53
- Joined: Sun May 19, 2024 5:44 am
- Full name: Kolby Mcgowan
Re: TCEC S29 division P started
Dann Corbit wrote: ↑Fri Mar 27, 2026 8:04 am
The AMD architecture allows transparent access to the video RAM and system RAM to both the CPUs and GPUs. They have already implemented it for their AI workhorse type GPUs. I think Nvidia is doing something similar. Maybe Srdja can comment.
The big problem with commodity GPUs is that they spend all their time copying work to and from video RAM
Leela isn't memory-bandwidth bottlenecked. It does matter, but not as much as you think it does. Leela sends work to each GPU, so NVLink really doesn't do much.
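The per-GPU dispatch pattern described here can be sketched with plain threads and queues. This is a toy illustration, not Lc0's actual code; all names and numbers are invented:

```python
# Toy sketch of per-GPU batch dispatch: each worker stands in for one NN
# backend bound to one GPU, evaluating its own independent batches, so no
# inter-GPU traffic (and hence no NVLink) is needed on the hot path.
from queue import Queue
from threading import Thread

NUM_GPUS = 2

def gpu_worker(gpu_id, jobs, results):
    # Pull self-contained batches until the sentinel arrives.
    while True:
        batch = jobs.get()
        if batch is None:
            break
        results.put((gpu_id, [f"eval({pos})" for pos in batch]))

jobs, results = Queue(), Queue()
workers = [Thread(target=gpu_worker, args=(i, jobs, results)) for i in range(NUM_GPUS)]
for w in workers:
    w.start()

# Round-robin independent batches across GPUs; no worker ever needs to
# read another GPU's memory.
batches = [["pos_a", "pos_b"], ["pos_c"], ["pos_d", "pos_e"]]
for b in batches:
    jobs.put(b)
for _ in workers:
    jobs.put(None)  # one sentinel per worker
for w in workers:
    w.join()

evaluated = sum(len(r[1]) for r in (results.get() for _ in batches))
print(evaluated)  # → 5 positions evaluated, with zero cross-GPU communication
```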