The future of computer chess

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

mclane
Posts: 18937
Joined: Thu Mar 09, 2006 6:40 pm
Location: US of Europe, germany
Full name: Thorsten Czub

Re: The future of computer chess

Post by mclane »

I said this many times before:

The TI99/4a has a 16-bit CPU but a very weak chess engine, "Video Chess".
It came out too early to have a better chess engine.

But there is hope.
Someone is trying to make a stronger chess engine for the TI.
His engine is written in UCSD Pascal.

https://github.com/wmaalouli/Phoenix-Chess

Maybe some others would like to help make the engine a little stronger within those limited resources.

The idea is to beat Video Chess and maybe also some other 8-bit engines that run on 6502/Z80 CPUs.
Phoenix Chess

A chess engine for the TI 99/4A computer written in UCSD Pascal

I have long been disappointed by the performance of Video Chess on the TI 99/4A, which was the only professionally released chess program on the TI. There was a user port of the original Sargon program, but it was actually inferior to Video Chess overall. Initially, the idea of writing my own engine was very daunting, but as my computing skills improved over the decades, it became more approachable.

In March of 2015, I launched the project with UCSD Pascal as the main programming environment. At that time my proficiency in UCSD Pascal was fairly limited and my research into chess programming was bare bones, so after nearly a couple of years my results were dismal and I never got to a truly functional version. So I gave up, but the germ of that project never truly left me and kept bugging me until October 2024, when I decided to give it another shot. By that time, my UCSD Pascal proficiency was far more advanced, including the use of assembly language modules, and I had delved deeply into the theory of chess engines. The end result is the current version of Phoenix Chess, which at long last consistently beats Video Chess at the latter's highest level.

This is still a work in progress. The primary limitation remains available memory, which does not allow the program to go beyond a ply depth of 5. The secondary limitation is speed of execution, which is excruciatingly slow, making the program virtually unplayable unless massively accelerated via emulation. This is unfortunately related to the fact that UCSD Pascal compiles to intermediate p-code instead of assembly language, making it quite slow compared to the latter. So why did I pick UCSD Pascal in the first place? Because of its extremely powerful unit system, which allows for the creation of massive programs that can run within a very limited memory space, as well as the flexibility of its data structures, which is essential for a chess program. Phoenix Chess currently hovers around 4,300 lines of code, and yet it runs within the 32K RAM of the TI alongside the OS!

A final distinction of Phoenix Chess: it's the only chess engine for legacy personal computers from the late '70s that uses 64-bit bitboards to represent the game, thanks to the TI's 16-bit TMS 9900 CPU, which allows for the direct manipulation of 16-bit words.
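To make the bitboard point concrete: a 64-bit board set fits in four 16-bit words, and setting or testing a square only ever touches one of them. Here is a minimal sketch in portable standard Pascal (Free Pascal syntax; the type and routine names are invented for illustration, this is not Phoenix's actual code):

Code: Select all

program BitboardDemo;

type
  { a 64-bit bitboard as four 16-bit words; square 0 = a1 .. 63 = h8 }
  TBitboard = array[0..3] of Word;

procedure SetSquare(var bb: TBitboard; sq: Integer);
begin
  bb[sq div 16] := bb[sq div 16] or (Word(1) shl (sq mod 16));
end;

function TestSquare(const bb: TBitboard; sq: Integer): Boolean;
begin
  TestSquare := (bb[sq div 16] and (Word(1) shl (sq mod 16))) <> 0;
end;

var
  occupied: TBitboard;
  i: Integer;
begin
  for i := 0 to 3 do
    occupied[i] := 0;
  SetSquare(occupied, 28);                      { e4 }
  writeln('e4 occupied: ', TestSquare(occupied, 28));
  writeln('d4 occupied: ', TestSquare(occupied, 27));
end.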
You can run all of this, or develop for it, in the TI emulators: on Windows the emulator is called Classic99,
and on Linux you can use the emul99 emulator, which can be overclocked considerably depending on your system.


Vorticon, as the programmer is known, has been posting about his project since
2016 in the following forum thread:

https://forums.atariage.com/topic/24842 ... nix-chess/
What seems like a fairy tale today may be reality tomorrow.
Here we have a fairy tale of the day after tomorrow....
mclane
Posts: 18937
Joined: Thu Mar 09, 2006 6:40 pm
Location: US of Europe, germany
Full name: Thorsten Czub

Re: The future of computer chess

Post by mclane »

If anyone is interested in the sources (a 39 KB zip file), please let me know.
What seems like a fairy tale today may be reality tomorrow.
Here we have a fairy tale of the day after tomorrow....
mclane
Posts: 18937
Joined: Thu Mar 09, 2006 6:40 pm
Location: US of Europe, germany
Full name: Thorsten Czub

Re: The future of computer chess

Post by mclane »

Looking into the code, it seems one could implement many things to enhance the search, e.g. iterative deepening and alpha-beta (see the sketch below).
Would the program profit from that?
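For anyone who wants to experiment, this is roughly what iterative deepening wrapped around alpha-beta looks like. The sketch below runs on a synthetic game tree so it is self-contained; Evaluate, the branching factor, and so on are invented stand-ins, not Phoenix's actual routines, and the real payoff comes from reusing each iteration's best line for move ordering, which is omitted here:

Code: Select all

program IDAlphaBeta;

{ Iterative deepening + alpha-beta on a synthetic tree (3 moves per
  node, deterministic leaf scores).  Names are hypothetical; this is
  an illustration, not Phoenix's code. }

const
  Branch   = 3;
  MaxDepth = 6;
  Infinity = 32000;

var
  NodesVisited: LongInt;

{ deterministic stand-in for a real evaluation function }
function Evaluate(state: LongInt): Integer;
begin
  Evaluate := (state * 37) mod 201 - 100;   { score in -100..100 }
end;

{ negamax form of alpha-beta search }
function AlphaBeta(state: LongInt; depth, alpha, beta: Integer): Integer;
var
  m, score: Integer;
begin
  Inc(NodesVisited);
  if depth = 0 then
  begin
    AlphaBeta := Evaluate(state);
    Exit;
  end;
  for m := 1 to Branch do
  begin
    { child state derived deterministically from parent and move }
    score := -AlphaBeta(state * Branch + m, depth - 1, -beta, -alpha);
    if score > alpha then
      alpha := score;
    if alpha >= beta then
      Break;                                { cutoff: prune the rest }
  end;
  AlphaBeta := alpha;
end;

var
  d: Integer;
begin
  for d := 1 to MaxDepth do                 { iterative deepening }
  begin
    NodesVisited := 0;
    writeln('depth ', d, ': score ',
            AlphaBeta(1, d, -Infinity, Infinity),
            ', nodes ', NodesVisited);
  end;
end.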
What seems like a fairy tale today may be reality tomorrow.
Here we have a fairy tale of the day after tomorrow....
mclane
Posts: 18937
Joined: Thu Mar 09, 2006 6:40 pm
Location: US of Europe, germany
Full name: Thorsten Czub

Re: The future of computer chess

Post by mclane »

I want to come back to one of my favourite targets: the TI99/4a.

The 16-bit machine that came too early.

Only one commercial chess engine: Video Chess, from 1979. And it was WEAK.
That is (for a 16-bit CPU) a shame.

But this is going to change.
A homebrew amateur engine is in the making.
What seems like a fairy tale today may be reality tomorrow.
Here we have a fairy tale of the day after tomorrow....
lkaufman
Posts: 6260
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA
Full name: Larry Kaufman

Re: The future of computer chess

Post by lkaufman »

towforce wrote: Fri Apr 19, 2024 8:00 pm
mclane wrote: Fri Apr 19, 2024 7:35 pm
Roughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
But a linear increase in elo is an exponential increase in results; every 400 elo is defined as a 10 to 1 ratio of score! So it's more accurate to say that increase in hardware and software translates directly to a corresponding score ratio, though perhaps it's not exactly that a doubling of speed gives a 2 to 1 score ratio, could be some power involved, but I bet it is close to a direct proportion.
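The definition being referenced is the logistic Elo formula: for a rating difference D, the expected score is E = 1 / (1 + 10^(-D/400)), so D = 400 gives exactly 10:1 odds. A tiny sketch just to make the numbers concrete (standard Pascal, written for this post):

Code: Select all

program EloOdds;

{ expected score for the stronger side at rating difference d,
  per the standard Elo definition: 1 / (1 + 10^(-d/400)) }
function ExpectedScore(d: Real): Real;
begin
  ExpectedScore := 1.0 / (1.0 + Exp(-(d / 400.0) * Ln(10.0)));
end;

begin
  writeln('+100 Elo -> expected score ', ExpectedScore(100):0:3);  { ~0.640 }
  writeln('+400 Elo -> expected score ', ExpectedScore(400):0:3);  { ~0.909, i.e. 10:1 odds }
end.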
Komodo rules!
towforce
Posts: 12598
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: The future of computer chess

Post by towforce »

towforce wrote: Fri Apr 19, 2024 8:00 pm
mclane wrote: Fri Apr 19, 2024 7:35 pm
Roughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
But a linear increase in elo is an exponential increase in results; every 400 elo is defined as a 10 to 1 ratio of score! So it's more accurate to say that increase in hardware and software translates directly to a corresponding score ratio, though perhaps it's not exactly that a doubling of speed gives a 2 to 1 score ratio, could be some power involved, but I bet it is close to a direct proportion.

For clarity: Mr Kaufman wrote the above response to mclane, not me.
Human chess is partly about tactics and strategy, but mostly about memory
smatovic
Posts: 3374
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: The future of computer chess

Post by smatovic »

towforce wrote: Tue Nov 04, 2025 10:32 am
towforce wrote: Fri Apr 19, 2024 8:00 pm
mclane wrote: Fri Apr 19, 2024 7:35 pm
Roughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
But a linear increase in elo is an exponential increase in results; every 400 elo is defined as a 10 to 1 ratio of score! So it's more accurate to say that increase in hardware and software translates directly to a corresponding score ratio, though perhaps it's not exactly that a doubling of speed gives a 2 to 1 score ratio, could be some power involved, but I bet it is close to a direct proportion.

For clarity: Mr Kaufman wrote the above response to mclane, not me.
Nope, he messed up the quotes, but it was a response to towforce:
towforce wrote: Fri Apr 19, 2024 8:00 pm [...]
Roughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
Take a look at the thread by clicking the small up arrow in the quote header.

--
Srdja
towforce
Posts: 12598
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK
Full name: Graham Laight

Re: The future of computer chess

Post by towforce »

lkaufman wrote: Tue Nov 04, 2025 3:48 am
towforce wrote: Fri Apr 19, 2024 8:00 pmRoughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
But a linear increase in elo is an exponential increase in results; every 400 elo is defined as a 10 to 1 ratio of score! So it's more accurate to say that increase in hardware and software translates directly to a corresponding score ratio, though perhaps it's not exactly that a doubling of speed gives a 2 to 1 score ratio, could be some power involved, but I bet it is close to a direct proportion.

OK - time for a bit of fun with exponentials! :)

Bill has $10,000 in a fund that pays 1% interest per year, and Brian has $5,000 in a fund that pays 3% interest per year. How many years before Brian's fund reaches the same value as Bill's?

1. Bill's fund is twice the value of Brian's
2. The difference in interest rate is 2%

So the answer is log(2)/log(1.02) ≈ 35 years. Quick proof:

10000*1.01^35 = $14,166
5000*1.03^35 = $14,069

Strictly speaking, the growth-factor ratio is 1.03/1.01 ≈ 1.0198 rather than 1.02, so the exact crossover is log(2)/log(1.03/1.01) ≈ 35.3 years; use that if you want the two numbers above to match exactly.
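A quick check of the crossover in code (standard Pascal; Exp and Ln stand in for a power function, since standard Pascal has no power operator):

Code: Select all

program Crossover;

{ solve 5000 * 1.03^n = 10000 * 1.01^n for n }
var
  n: Real;
begin
  n := Ln(2.0) / Ln(1.03 / 1.01);
  writeln('crossover after ', n:0:2, ' years');        { ~35.35 }
  writeln('Bill : $', 10000.0 * Exp(n * Ln(1.01)):0:2);
  writeln('Brian: $', 5000.0 * Exp(n * Ln(1.03)):0:2);
end.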

The point of that was to show that the difference between two exponential curves is itself exponential. But thinking it through, that's not relevant: an exponential curve always becomes a straight line on a chart whose Y axis is logarithmic, and the argument is that an Elo number is a logarithmic representation of chess engine capability.

However - we now know that the Elo chart has an upper limit of approximately 4500 Elo. At around this level, a player becomes unbeatable. So if we drew a chart of engine Elo against time, I'd actually expect to see an "S" shaped curve, not an exponential one.
Human chess is partly about tactics and strategy, but mostly about memory
jkominek
Posts: 74
Joined: Tue Sep 04, 2018 5:33 am
Full name: John Kominek

Re: The future of computer chess

Post by jkominek »

towforce wrote: Tue Nov 04, 2025 8:00 pm
lkaufman wrote: Tue Nov 04, 2025 3:48 am
towforce wrote: Fri Apr 19, 2024 8:00 pmRoughly speaking, a linear increase in elo has required an exponential increase in hardware and software resources.
But a linear increase in elo is an exponential increase in results; every 400 elo is defined as a 10 to 1 ratio of score! So it's more accurate to say that increase in hardware and software translates directly to a corresponding score ratio, though perhaps it's not exactly that a doubling of speed gives a 2 to 1 score ratio, could be some power involved, but I bet it is close to a direct proportion.
However - we now know that the Elo chart has an upper limit of approximately 4500 Elo. At around this level, a player becomes unbeatable. So if we drew a chart of engine Elo against time, I'd actually expect to see an "S" shaped curve, not an exponential one.
In the early issues of the ICGA Journal authors often discussed the "technology curve" of chess programs, graphically plotted as rating vs. search depth in plies. Back when the experiments were extended to 8, or - gasp! - 9 ply, the relationship being measured was inside the near-linear domain of the logistic function. So, they thought the relationship was linear, computed the best-fit slope, and argued/speculated about diminishing returns. You are right that, big picture, the technology curve is S-shaped -- what I'd describe as a distorted logistic curve.
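One way to picture it: a logistic curve of the form rating(d) = Rmin + (Rmax - Rmin) / (1 + exp(-k (d - d0))), near-linear around d0 and flattening toward both asymptotes. A sketch with invented parameters (standard Pascal; nothing here is fitted to the data below):

Code: Select all

program TechCurve;

{ Illustrative logistic "technology curve": rating vs. search depth.
  All parameters are invented for illustration, not fitted to data. }
function Rating(depth: Real): Real;
const
  RMin = 1200.0;   { floor at negligible search }
  RMax = 4000.0;   { ceiling imposed by draw saturation }
  k    = 0.35;     { steepness }
  d0   = 10.0;     { depth of maximum gain per ply }
begin
  Rating := RMin + (RMax - RMin) / (1.0 + Exp(-k * (depth - d0)));
end;

var
  d: Integer;
begin
  d := 2;
  while d <= 30 do
  begin
    writeln('depth ', d:2, ': ~', Rating(d):0:0, ' Elo');
    Inc(d, 4);
  end;
end.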

Here are some results from node doubling experiments I've conducted using the Chess 324 subset of Fischer 960, as proposed by Larry Kaufman as an attractive medium-small test suite. Plotted are five versions of Stockfish. For familiar numbers the y-axis has been calibrated to the CCRL 40/20 list (currently renamed 40/15).

[Plot: CCRL-calibrated Elo rating vs. nodes per move in node-doubling matches, for five Stockfish versions]

I realize that running games with a fixed node budget per move discards the benefits of time management, and that work per node has changed by a factor of around 2x over the version history. But it makes for a solid baseline measurement that can be replicated when running single-threaded, and results run on different machines can be combined without the hassle and inexactitude of speed calibration.

The maximum slope has increased over the version history, revealing major improvements in software. For Stockfish 1.31 it is a maximum Elo gain of 100 per node doubling; for Stockfish 16 it is 320 (at very low node counts).

Code: Select all

            Engine  MaxGain  Percent  ScoreRatio
  ----------------  -------  -------  ----------
      Stockfish 16    319.2    86.26       6.281
      Stockfish 11    229.0    78.89       3.737
      Stockfish 05    149.7    70.30       2.367
    Stockfish 2.11    119.3    66.52       1.987
    Stockfish 1.31     99.8    63.98       1.776
Larry is correct about the (by construction) relationship between Elo and score ratios, i.e. 400 Elo <--> 10:1 odds. Thus adding Elo differences is multiplicative in performance ratios. But it is not linear throughout, and there is no single scaling factor between Elo differences and equivalent time odds. And as you point out there is an upper limit to measured performance. You suggest 4500. I wouldn't go so far as to say that "we" know that. Much depends on the testing methodology. With a UHO book 4500 may be a reasonable expectation. With Chess 324 I see a limit very close to 4000 (wrt my anchor point). The CCRL 40/15 limit is lower because they use balanced books.

Why is there an asymptote at all? It's because chess is a relatively short finite game. When the average error rate of a pair of engines drops to around 1 mistake per 1000 moves, games in such a pairing become replete with draws. As a thought experiment, imagine games a million moves long. Also imagine that a TB32 oracle studies the output and assigns a loss to the first engine that makes a theoretical mistake. With million-move games there is room for additional rating classes to stack on top of the Elo scale.

One thing I particularly like about my performance curve plot is that it provides a quick visual impression of the impact of hardware speed versus software improvements. We can use as a rule of thumb that computers double in speed every two years. So the jump from 2^16 nodes to 2^26 nodes covers a march of around 20 years. Likewise 2^26 to 2^36. If I could extend the graph that far out, those two bands would cover roughly 1985 to 2025.

Notwithstanding the argument that there would be little to no software improvement without hardware improvements, when separated out, the improvement from software is greater than that from hardware (speed) alone. Much greater! Compare, for example, the purple trajectory of SF1 to the vertical line at 2^16 nodes that passes from SF1 to SF16.