It's Fabien.

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Steve Maughan
Posts: 1297
Joined: Wed Mar 08, 2006 8:28 pm
Location: Florida, USA

Re: It's Fabien.

Post by Steve Maughan »

Xann wrote: Sat Aug 02, 2025 6:15 am
Steve Maughan wrote: Fri Aug 01, 2025 11:39 pm Have you looked at the zig programming language? I think it could be perfect for chess programming.
Steve, I have not; I want unlimited abstractions.

My understanding is that while Rust is a candidate replacement for C++, Zig is the same for C; that's why I haven't looked. But I feel that you are right, and for chess, Zig could be perfect! One thing I like about it is the separation between 'release safe' and 'release fast' modes; Rust only has the former, with bounds (and other) checks.

If you think Zig deserves it, I will have a closer look and report back.
Fabien — I think it’s worth a closer look. It has everything a chess programmer needs out of the box, e.g. vectorization, popcount, find-first-set, and even pext. It also cross-compiles to everything. And of course there is the emphasis on execution speed.

— Steve
http://www.chessprogramming.net - Juggernaut & Maverick Chess Engine
Dann Corbit
Posts: 12792
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: It's Fabien.

Post by Dann Corbit »

Of course we are very glad to hear from you.
I will be interested to discover how strong a Rust engine can become.
Will it use NN evaluation or hand-tuned eval?
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Xann
Posts: 132
Joined: Sat Jan 22, 2011 7:14 pm
Location: Lille, France
Full name: Fabien Letouzey

Re: It's Fabien.

Post by Xann »

Dann Corbit wrote: Sat Aug 02, 2025 9:26 pm Of course we are very glad to hear from you.
Dann, you have a unique and IMO deep perspective on engine development. There are two posts of yours that come to mind.

The first one was about 'tweaking'; you probably remember that one, it's a bit old. You explained that the only difference between the top engines and normal ones is perseverance, perhaps implying that there was no secret knowledge. Most programmers just 'stop tweaking' after a while; it's not fun.

I read the second one more recently, but unfortunately I don't remember it well. It was a bit similar; you stated that there were two stages in engine development. Something like a growing phase where features are added, and a second phase that is more about tuning and possibly takes many years.

Since I don't remember clearly, could you please reformulate your vision or find a reference? Both of your posts were so clear to me, and made me understand better.
I will be interested to discover how strong a Rust engine can become.
Rust is obviously not going to help with engine strength.

Furthermore, I saw that quite a few engines are already written in Rust; some of them NNUE. So, for sure I am not going to contribute to your curiosity; sorry about that.

It's more about 'how to organise software', make it flexible, minimise the mistakes, and that's a big interest of mine. That's why I write engines from scratch (in various games) so many times. There is always a slightly different way to express things.

While a lot of Rust is just painful for chess engines (e.g. no global variables), I think overall it made me see some things more clearly. Like search data; I don't keep it around after a move anymore. In C++, I never asked myself if I actually needed it; it felt convenient to leave it there.
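To make that concrete, here is roughly the shape it takes now; a minimal sketch with made-up names, not the actual engine code:

Code:

// Search state owned by the search itself and created fresh for every move
// (placeholder fields, not the real design).
struct SearchData {
    nodes: u64,
    killers: Vec<[Option<u16>; 2]>, // indexed by ply
    history: [[i32; 64]; 64],       // from-square x to-square
}

impl SearchData {
    fn new(max_ply: usize) -> Self {
        SearchData {
            nodes: 0,
            killers: vec![[None; 2]; max_ply],
            history: [[0; 64]; 64],
        }
    }
}

fn search_root(sd: &mut SearchData) {
    sd.nodes += 1; // stand-in for the real iterative-deepening search
}

fn think(max_ply: usize) -> u64 {
    let mut sd = SearchData::new(max_ply);
    search_root(&mut sd);
    sd.nodes // the data is dropped here; nothing survives into the next move
}

The borrow checker makes 'who owns this between moves' an explicit decision instead of a leftover global.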
Will it use NN evaluation or hand-tuned eval?
It's not easy for me to give a clear answer to that apparently basic question.

Conceptually, for evaluation, I separate 'features' and 'scoring'. A feature would be a property of the position, such as a white knight on e5, while scoring would be transforming that feature (or a combination) into cp units. Now I can answer better.

The design for this engine follows one I wrote two years ago. Features are fully HCE, old-school like the previous Senpai. However, the scoring will be more fluid: tables of weights computed by machine learning. I call this 'table-based evaluation', and that's what I use in other games like Othello and draughts. 'Look-up evaluation' would also be a good name. It replaces the numerous multiplications of an NN with array lookups. An eval like Pesto is already in that category.
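As a sketch of the idea, with invented tables and features (not the real design):

Code:

// Table-based ('look-up') evaluation: features index into weight tables
// fitted offline; no multiplications at evaluation time.
struct Weights {
    psqt: [[i16; 64]; 12],  // piece-square tables, one per piece and colour
    passed_pawn: [i16; 8],  // bonus by rank
}

struct Features {
    pieces: Vec<(usize, usize)>, // (piece index 0..12, square 0..64)
    passed_pawns: Vec<usize>,    // side-relative rank of each passed pawn
}

fn score(w: &Weights, f: &Features) -> i32 {
    let mut s = 0i32;
    for &(p, sq) in &f.pieces {
        s += w.psqt[p][sq] as i32; // pure lookup, like Pesto
    }
    for &rank in &f.passed_pawns {
        s += w.passed_pawn[rank] as i32;
    }
    s
}

Training only changes the numbers in the tables; the feature code stays fixed.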

No NN in my original plan (or the engine from two years ago). That could give this one a unique style, somewhere between HCE and full-NN. From the few experiments I ran on the other engine, that gives Senpai more king-safety awareness. That isn't saying much, however; my engines have always been endgame players, not king attackers.

If the results are too disappointing, I might add a small NN as an afterthought, for an Elo boost. Then it would really be a mix between HCE and full NN, with a correspondingly mixed performance. Regardless of what I end up doing, it won't be competitive with optimised NNUE; that much is obvious.

Summary: HCE features, table-based scoring (rare), no NN or a small one later.

Fabien.
Dann Corbit
Posts: 12792
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: It's Fabien.

Post by Dann Corbit »

Xann wrote: Sun Aug 03, 2025 2:15 am
Dann Corbit wrote: Sat Aug 02, 2025 9:26 pm Of course we are very glad to hear from you.
Dann, you have a unique and IMO deep perspective on engine development. There are two posts of yours that come to mind.

The first one was about 'tweaking'; you probably remember that one, it's a bit old. You explained that the only difference between the top engines and normal ones is perseverance, perhaps implying that there was no secret knowledge. Most programmers just 'stop tweaking' after a while; it's not fun.

I read the second one more recently, but unfortunately I don't remember it well. It was a bit similar; you stated that there were two stages in engine development. Something like a growing phase where features are added, and a second phase that is more about tuning and possibly takes many years.

Since I don't remember clearly, could you please reformulate your vision or find a reference? Both of your posts were so clear to me, and made me understand better.
I think your process of chess engine development is ideal. What I mean by that is that it is far more important to make it right than to make it fast, especially at first. Once your engine is nearly bug-free, you can make rapid progress. If the engine is buggy, a change that should cause a big improvement will do surprisingly bad things. Perhaps there is a color inversion somewhere, or a hashing bug. Once the engine is solid, then there is a foundation to stand on. Your use of asserts in Fruit took my breath away. The best I have ever seen. And one could easily read from the asserts what you intended the code to do. Better, really, than well-commented code.
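For readers who have not seen that style, it looks something like this; a Rust sketch with invented names, not Fruit's actual code:

Code:

// Asserts that read as statements of intent; checked in debug builds, free in release.
// A toy mailbox board (0 = empty square) just to carry the example.
fn make_move(board: &mut [u8; 64], from: usize, to: usize) {
    debug_assert!(from < 64 && to < 64, "squares must be on the board");
    debug_assert!(from != to, "null moves are handled elsewhere");
    debug_assert!(board[from] != 0, "there must be a piece on the from-square");

    board[to] = board[from];
    board[from] = 0;
}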
I am glad of the NN revolution. Too many people jealously guarded their evaluation, which was nothing more than a collection of terms from chess books and chess papers properly quantified. Hans Berliner's work was a very early example of this and contains things that were missed for decades. Evaluation, of course, is quite important. A good evaluation helps good move ordering. Good move ordering keeps the search efficient. But now that NN is doing the heavy lifting for evaluation, chess programmers can invest in the place that all of the huge innovations come from: the search. The first miracle was Alpha-Beta, then null move pruning, etc. These are the sorts of things that cause jaw-dropping leaps in the quality of the engine output.
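Null move pruning is a good example of how little code those search ideas need; a rough sketch with a toy position type (and ignoring the zugzwang guards that real engines add):

Code:

// Sketch of null move pruning inside a plain negamax search.
struct Pos;
impl Pos {
    fn in_check(&self) -> bool { false }
    fn make_null(&mut self) {}
    fn unmake_null(&mut self) {}
    fn eval(&self) -> i32 { 0 }
}

const R: i32 = 3; // null move depth reduction

fn search(pos: &mut Pos, alpha: i32, beta: i32, depth: i32) -> i32 {
    if depth <= 0 {
        return pos.eval(); // a real engine would drop into quiescence here
    }

    // Give the opponent a free move; if a reduced search still fails high,
    // the position is good enough to cut off without searching our moves.
    if depth > R && !pos.in_check() {
        pos.make_null();
        let score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        pos.unmake_null();
        if score >= beta {
            return beta;
        }
    }

    // ... the normal move loop, raising alpha, goes here ...
    alpha.max(pos.eval())
}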
I think that the problem with all chess programming is that the work required for excellence is surprising. And the work required for elegance is even more surprising. Any programmer can write an engine that plays chess. But it seems that there are diminishing returns to make it play well. And the public team engines like Stockfish and LC0 are a mixed blessing. They have enormous resources both for programming and test. How can a tiny team deal with an opponent such as that? On the other hand, we can look at what they have done and learn a great deal very rapidly. And we can get, for near nothing, a chess engine that can analyze our games for us as well as Magnus Carlsen. The one big negative of the NN engines is that they are black boxes. It would be nice if the engine could explain, in plain language, what it liked about a move. Maybe an edge pawn, which used to be weak in the opening is now powerful because we are in the endgame and there is another distant pawn which will make both difficult to chase. Maybe there is a material imbalance that the engine knows (like the famous bishop pair rule) but we have no idea is important for a given position. Maybe the reason it loves a position was mostly defined by the search {as Christophe Theron explained, "Search is also knowledge"}. The reason that it urks me is that I really want to know WHY a position is better, and not just examine the number. NN evals do not really tell us that.
I will be interested to discover how strong a Rust engine can become.
Rust is obviously not going to help with engine strength.

Furthermore, I saw that quite a few engines are already written in Rust; some of them NNUE. So, for sure I am not going to contribute to your curiosity; sorry about that.

It's more about 'how to organise software', make it flexible, minimise the mistakes, and that's a big interest of mine. That's why I write engines from scratch (in various games) so many times. There is always a slightly different way to express things.

While a lot of Rust is just painful for chess engines (e.g. no global variables), I think overall it made me see some things more clearly. Like search data; I don't keep it around after a move anymore. In C++, I never asked myself if I actually needed it; it felt convenient to leave it there.
I think it may be the ideal language. If it forces a disciplined, engineering approach, that may result in a better engine. If you can move the branching factor from 1.3 to 1.2, it would devastate engines that could perft a Rust engine into the ground. Once the eval is excellent, the power comes from the search, and it is the search that is king (IMO).
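As a rough illustration of what a lower branching factor buys: a depth-d search costs on the order of b^d nodes, so at depth 30 that is about 1.3^30 ≈ 2,600 versus 1.2^30 ≈ 240 units of work, roughly eleven times less. Put the other way, for the same node budget the 1.2 engine reaches about 44% more depth (ln 1.3 / ln 1.2 ≈ 1.44).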
Will it use NN evaluation or hand-tuned eval?
It's not easy for me to give a clear answer to that apparently basic question.

Conceptually, for evaluation, I separate 'features' and 'scoring'. A feature would be a property of the position, such as a white knight on e5, while scoring would be transforming that feature (or a combination) into cp units. Now I can answer better.

The design for this engine follows one I wrote two years ago. Features are fully HCE, old-school like the previous Senpai. However, the scoring will be more fluid: tables of weights computed by machine learning. I call this 'table-based evaluation', and that's what I use in other games like Othello and draughts. 'Look-up evaluation' would also be a good name. It replaces the numerous multiplications of an NN with array lookups. An eval like Pesto is already in that category.

No NN in my original plan (or the engine from two years ago). That could give this one a unique style, somewhere between HCE and full-NN. From the few experiments I ran on the other engine, that gives Senpai more king-safety awareness. That isn't saying much, however; my engines have always been endgame players, not king attackers.

If the results are too disappointing, I might add a small NN as an afterthought, for an Elo boost. Then it would really be a mix between HCE and full NN, with a correspondingly mixed performance. Regardless of what I end up doing, it won't be competitive with optimised NNUE; that much is obvious.

Summary: HCE features, table-based scoring (rare), no NN or a small one later.
Fabien.
I actually like this approach. Such an engine can explain to me why it made a given move by dividing the evaluation into components so that I can see what was most important and what was of lesser importance. If we play chess just to get the highest number on some rating list, then I think it is pointless. I would much rather learn something than win a game any day of the week.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Steve Maughan
Posts: 1297
Joined: Wed Mar 08, 2006 8:28 pm
Location: Florida, USA

Re: It's Fabien.

Post by Steve Maughan »

Xann wrote: Sun Aug 03, 2025 2:15 am …my engines have always been endgame players, not king attackers.
Why do you think this is? Could the extra endgame strength come from storing the upper and lower bounds in the transposition table (which most other engines don’t do)?

— Steve
http://www.chessprogramming.net - Juggernaut & Maverick Chess Engine
Xann
Posts: 132
Joined: Sat Jan 22, 2011 7:14 pm
Location: Lille, France
Full name: Fabien Letouzey

Re: It's Fabien.

Post by Xann »

Steve Maughan wrote: Sun Aug 03, 2025 8:05 pm
Xann wrote: Sun Aug 03, 2025 2:15 am …my engines have always been endgame players, not king attackers.
Why do you think this is? Could the extra endgame strength come from storing the upper and lower bounds in the transposition table (which most other engines don’t do)?

— Steve
Here's my rationale for keeping two bounds in TT. When you start an engine, you have space. Much later, there might come a time when you play with alpha-beta windows in search; something like singular extensions. I think the test will be more trustworthy if you have two bounds at that time. Later, you can test if they contribute to anything, and perhaps they don't, but you didn't run the test too early. Heuristics are related.
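In code the difference is small; roughly this shape (simplified fields and names, not the actual engine):

Code:

// TT entry keeping both bounds (a real entry also stores a move, age, key, ...).
#[derive(Clone, Copy)]
struct Entry {
    depth: i32,
    lower: i32, // best proven lower bound, from fail-high results
    upper: i32, // best proven upper bound, from fail-low results
}

// With both bounds present, either side of the window can give a cutoff,
// and window tricks such as singular-extension tests get more reliable data.
fn probe(e: &Entry, depth: i32, alpha: i32, beta: i32) -> Option<i32> {
    if e.depth < depth {
        return None; // not deep enough to trust
    }
    if e.lower >= beta {
        return Some(e.lower);
    }
    if e.upper <= alpha {
        return Some(e.upper);
    }
    if e.lower == e.upper {
        return Some(e.lower); // exact score
    }
    None
}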

King safety I just hate, so I guess I always sabotaged it by not caring. It's nearly the same in all three engines. In the table-based evaluation, it will be different and include tropism.
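By tropism I just mean distance-based pressure on the enemy king; a generic sketch with placeholder weights, nothing more:

Code:

// Reward pieces for being close to the enemy king (Chebyshev distance).
fn chebyshev(a: usize, b: usize) -> i32 {
    let (af, ar) = ((a % 8) as i32, (a / 8) as i32);
    let (bf, br) = ((b % 8) as i32, (b / 8) as i32);
    (af - bf).abs().max((ar - br).abs())
}

fn tropism(attacker_squares: &[usize], enemy_king: usize) -> i32 {
    attacker_squares
        .iter()
        .map(|&sq| 7 - chebyshev(sq, enemy_king)) // closer pieces score more
        .sum()
}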

Now about the endgame: in Fruit it's probably the tapered eval (game-phase interpolation). That allows anticipation, like moving the king earlier. And king distance in passed-pawn evaluation. For Senpai, it could be search. Since I used testing from the beginning, every little decision contributed to something.
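For reference, the interpolation itself is only this (a sketch; the phase value would come from the remaining material):

Code:

// Tapered eval: blend middlegame and endgame scores by game phase.
const PHASE_MAX: i32 = 24; // e.g. full material = 24, bare kings = 0

fn taper(mg: i32, eg: i32, phase: i32) -> i32 {
    (mg * phase + eg * (PHASE_MAX - phase)) / PHASE_MAX
}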

Fabien.