Progress on Rustic

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

mvanthoor
Posts: 986
Joined: Wed Jul 03, 2019 2:42 pm
Location: Netherlands
Full name: Marcel Vanthoor

Re: Progress on Rustic

Post by mvanthoor » Tue Feb 02, 2021 11:08 pm

I've set a definitive 2m+1s gauntlet tournament running for Alpha 1, against 7 other engines, 72 games per engine, for a total of 504 games. That's a bit more than the 424 games the engine has played on CCRL, and the engines are different; but in a similar 1600-1725 range. At the end, I'll recalibrate this list against Rustic's CCRL 1696 rating.

I've plucked the old "PerftHashTable" code from a year ago from the version control system and included it in Rustic. Steps:

1. Rename some variables to turn this into a proper transposition table (right now it's named as if it's only useful for Perft).
2. Refactor where necessary
3. Create the infrastructure in the engine to manage the table
* Actually give the engine a private variable to hold the TT
* command-line option --hash (-h) because "t" is already in use for --threads (-t), to set the size
* adhere to a minimum size if nothing is given (probably 32 or 64 MB, or a percentage of the available memory)
* UCI options to set the size
4. Include it in Perft, and test it there (it basically only has to keep the Zobrist hash, depth, and node count)
5. Include it in the search and extend it where necessary.
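As a rough sketch of step 4, a perft-only transposition table along these lines could look like the following. All names are hypothetical, not Rustic's actual code, and a real table would pack its data and handle collisions more carefully:

```rust
// Hypothetical perft TT entry: full hash to verify a hit, the remaining
// depth the count was computed at, and the leaf node count itself.
#[derive(Clone, Copy, Default)]
struct PerftEntry {
    zobrist: u64,
    depth: u8,
    nodes: u64,
}

struct PerftTable {
    entries: Vec<PerftEntry>,
}

impl PerftTable {
    // Size the table from a megabyte count, as a --hash option would provide.
    fn new(megabytes: usize) -> Self {
        let count = (megabytes * 1024 * 1024) / std::mem::size_of::<PerftEntry>();
        Self { entries: vec![PerftEntry::default(); count] }
    }

    // Always-replace scheme: the new entry overwrites whatever was there.
    fn store(&mut self, zobrist: u64, depth: u8, nodes: u64) {
        let idx = (zobrist as usize) % self.entries.len();
        self.entries[idx] = PerftEntry { zobrist, depth, nodes };
    }

    // A hit only counts if both the full hash and the depth match.
    fn probe(&self, zobrist: u64, depth: u8) -> Option<u64> {
        let e = self.entries[(zobrist as usize) % self.entries.len()];
        if e.zobrist == zobrist && e.depth == depth {
            Some(e.nodes)
        } else {
            None
        }
    }
}
```

During perft, a probe hit means the whole subtree count can be reused instead of recursing.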

When it works, I'll be able to compile Rustic Alpha 2, and then run the same gauntlet, replacing Alpha 1. At the end of that, I can recalibrate the list by setting the winner to the Elo of that same engine in the other list, and if all is well, Alpha 2 should have a (much) higher rating. I'll also run a self-play test.

Next for Alpha 3 and up will be extended move ordering, and somewhere along the line, XBoard.

When it has all the essential features (for now: TT, XBoard, Killer Moves and History heuristic), the Alpha part will be dropped from the name. So it will be, for example, Rustic Alpha 5 => Rustic 6. And then just count upward, like Chrome, Firefox, Stash, Arasan...
Author of Rustic.
Releases | Code | Docs

Ronald
Posts: 127
Joined: Tue Jan 23, 2018 9:18 am
Location: Rotterdam
Full name: Ronald Friederich

Re: Progress on Rustic

Post by Ronald » Wed Feb 03, 2021 5:41 pm

mvanthoor wrote:
Sun Jan 31, 2021 3:35 pm
I'm gonna make me some PaSTa... I've heard it goes well with PeSTo :P

In other words: let's see how far I can get with only PST's, before I get the urge to finally teach the darn thing that triple and quadruple pawns are not good. Triple and Quadruple SEEMS to be a good thing in beer, but I wouldn't know, because I don't like beer.

(Except for the kind that is associated with women... cherry beer. So sue me 8-))
From experience I know that for instance a Triple Karmeliet combines perfectly with pasta with PeSTO! (it combines well with everything, even with nothing :D )
Cherry beer is too sweet for me, except for "Kasteel Rouge"...

I think it's a good choice to stick to PST for now. Optimizing your search is complex enough already, and most search parameters are still the same and optimal for the current version as well. The only changes I made are related to the move ordering of quiet moves. A better evaluation leads to a better ordering of quiet moves, so you can prune/reduce more with a better eval. This only seems to hold up to a certain degree, however.


Re: Progress on Rustic

Post by mvanthoor » Wed Feb 03, 2021 7:15 pm

Ronald wrote:
Wed Feb 03, 2021 5:41 pm
Cherry beer is too sweet for me, except for "Kasteel Rouge"...
That one is good. That one, and La Chouffe Cherry, are the most "beer-like" I'm willing to go.
I think it's a good choice to stick to PST for now. Optimizing your search is complex enough already, and most search parameters are still the same and optimal for the current version as well. The only changes I made are related to the move ordering of quiet moves. A better evaluation leads to a better ordering of quiet moves, so you can prune/reduce more with a better eval. This only seems to hold up to a certain degree, however.
I intend to do the following:

- implement the TT
- Implement heuristics (Killer moves, history heuristics)
- Add importance to promotions
- Null Move
... etc ...
- Look into Texel tuning for the existing evaluation

Until I can't find anything to do in the search anymore, or the optimizations become so small and time-consuming that adding evaluation terms gives a better return for the time spent.


Re: Progress on Rustic

Post by mvanthoor » Sat Feb 20, 2021 2:11 pm

Hi :)

I've not posted in this thread for some time, but I'm still working on Rustic.

Since the release of Alpha 1, I've been looking into hash tables; what goes into them, what comes out of them, and how they're going to be used (in chess, obviously; I already know what a hash table is). Most of the information pertains to the program's main hash table, and in most engines, it's interwoven with the rest of the code, with global variables and functions everywhere, for example. And everything runs in a single thread.

All of this is not going to work in Rustic, or even in Rust itself. A year ago, I wrote a (very simple) Perft-only hash table, so I started with that. I rescued it from the repository and built it into Rustic Alpha 1. It worked, but it was ugly (obviously; it's a year old, and the engine has changed a lot). So I set about refactoring the hash table:

- Put it into its own module
- Rename everything step by step to remove "Perft" from the hash table
- Add a command-line option "-h <size> / --hash <size>"
- Actually adapt the hash table to be able to use this
- Add the UCI-option "option name Hash default X min A max B" (In a generic way, so I can just add options to the engine without having to adapt the UCI interface, and the same options can also be announced by the XBoard interface)
- Add a trait (interface) for the hash table, so I can have different kinds of hash tables (main search, perft, pawn, maybe evaluation...)
- Turn the hash table in the engine into a dynamic object, so I can initialize different types of hash tables of different sizes, using the same code.

A few more things to do:
- Make the data in the hash table generic (perft and search data will be defined for now)
- Connect the hash table to the search and test how much extra strength this gives
- Add hash move sorting to the search and then test how much strength this gives on top of the hash table only
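The trait-plus-dynamic-object idea from the lists above can be sketched roughly like this. The names (IHashTable, SearchTable, make_table) are placeholders for illustration, not Rustic's actual code:

```rust
use std::sync::{Arc, Mutex};

// Hypothetical trait so perft, search, and pawn tables share one interface.
// The Send + Sync supertraits are what make the trait object shareable
// between the engine thread and the search thread.
trait IHashTable: Send + Sync {
    fn clear(&mut self);
    fn size_mb(&self) -> usize;
}

struct SearchTable {
    size_mb: usize,
    // entries would live here
}

impl IHashTable for SearchTable {
    fn clear(&mut self) { /* zero all entries */ }
    fn size_mb(&self) -> usize { self.size_mb }
}

// The engine can then hold Arc<Mutex<dyn IHashTable>>: a thread-safe,
// dynamically typed table initialized with a size from --hash or UCI.
fn make_table(size_mb: usize) -> Arc<Mutex<dyn IHashTable>> {
    Arc::new(Mutex::new(SearchTable { size_mb }))
}
```

Swapping in a perft or pawn table then only means constructing a different concrete type behind the same trait object.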

I just managed to make my hash table into a dynamic, thread-safe object in Rust. Finally. Now I just have to make the data it holds generic. I hate memory. And threads. And thread-safety. And generics, traits, templates, and interfaces. But not really... just need some more practice with this stuff in Rust. (It's the first time in the engine where I combine threads, generics, and traits all together.)

I _think_ I can say that, in languages such as C or C++, I know what I'm doing well enough to keep stuff thread- and memory-safe. Therefore, picking up Rust wasn't very hard for me; my normal coding practices already yielded thread- and memory-safe code, and the compiler very rarely complained.

However, when combining traits (interfaces / abstract classes), generics (templates) with threads and shared objects, the compiler has been punching me in the face for three weeks. ("This interface is not thread-safe because...", "Your object also needs Send+Sync", "You can't use X if you also use Y"....) Now I'm not quite sure anymore: I either knew less than I thought, or I just need more practice with Rust's way of dealing with these concepts.

I hope it's the latter. For some of these concepts, I've found 2-3 ways of writing the syntax, as if the Rust team changed either the syntax, or the syntactic sugar, multiple times and kept all of the versions to not break other programs. That is confusing to say the least.

Dynamic hash table object (with size parameter) is working correctly though. Almost there, and then I can start testing Alpha 2.

lithander
Posts: 161
Joined: Sun Dec 27, 2020 1:40 am
Location: Bremen, Germany
Full name: Thomas Jahn

Re: Progress on Rustic

Post by lithander » Sun Feb 21, 2021 9:57 am

mvanthoor wrote:
Sat Feb 20, 2021 2:11 pm
I've not posted in this thread for some time, but I'm still working on Rustic.
I really like the idea that you are maintaining a dedicated thread to chronicle the development progress of your engine. It makes an interesting read! (I didn't yet read all 17 pages, though)

I have been using C# almost exclusively for the past decade. It has seen the addition of a lot of features and enhancements since then, and while I'm not sure it is still as beginner-friendly as Java (which it started out emulating), it has eradicated almost all the shortcomings it used to have.
So I don't have any real reason to change to something else except curiosity, and in that regard Rust has always been an enticing blip on my radar.

But the way you are struggling to get something to work that would be "easy" in C/C++ doesn't sound very inviting. The whole point of going more low-level, for me, is the bare-metal, higher risk/reward feel: juggling pointers at the risk of shooting yourself in the foot, horribly. But fighting the borrow checker and the other issues you describe doesn't sound like much fun, really. :/

Would you say there's a real benefit to writing a chess engine in Rust over C/C++ or is it just the novelty?
Do you use the intrinsics to do something like popcount and are they as fast as you'd expect them to be?
Minimal Chess. My very first chess engine! Details on Youtube & Github


Re: Progress on Rustic

Post by mvanthoor » Sun Feb 21, 2021 1:50 pm

lithander wrote:
Sun Feb 21, 2021 9:57 am
I really like the idea that you are maintaining a dedicated thread to chronicle the development progress of your engine. It makes an interesting read! (I didn't yet read all 17 pages, though)
Thanks :) I can imagine you don't read everything. I could have written this in a blog, but that would just be one more thing to maintain, just to write some random thoughts.
I have been using C# almost exclusively for the past decade. It has seen the addition of a lot of features and enhancements since then, and while I'm not sure it is still as beginner-friendly as Java (which it started out emulating), it has eradicated almost all the shortcomings it used to have.
So I don't have any real reason to change to something else except curiosity, and in that regard Rust has always been an enticing blip on my radar.
If you're coming from C# or Java (or worse, Python, PHP, or other interpreted languages) where the language handles everything (which thus, costs speed), then Rust might be very difficult to get into.
But the way you are struggling to get something to work that would be "easy" in C/C++ doesn't sound very inviting. The whole point of going more low-level, for me, is the bare-metal, higher risk/reward feel: juggling pointers at the risk of shooting yourself in the foot, horribly. But fighting the borrow checker and the other issues you describe doesn't sound like much fun, really. :/
That is the entire problem: Rust SEEMS hard, because, in the beginning, the compiler will punch you in the face over and over again. I am _convinced_ that there are many C/C++ programs around that work because of luck, not because of good code. An example.

Code: Select all

fn combine_stuff(a: &String, b: &String) -> &String {
    let combo = format!("{}{}", a, b); // <== this produces a new String

    &combo // <== ERROR: returning a reference to a local that is about to be dropped
}
In C/C++, when you create the "combo" string and then return a reference to this string, this works... the caller will follow the reference, and the string will be there. The Rust compiler will complain, loudly. Why; what is the difference?

In C/C++, the creation of the string allocated memory. The pointer to that memory is in "combo". At the end, you return a reference to "combo" (thus, to the space in memory), and your function ends. All good. The caller of combine_stuff() now has a pointer to the memory space which was claimed for combo.

What did you just do?

Because combine_stuff() ended, the "combo" variable is gone. However, you passed out a reference to that memory space to the caller of combine_stuff(). Therefore, you effectively transferred responsibility for cleaning up the memory from combine_stuff() (which claimed it) to the caller. Many people forget about this, and when the caller of combine_stuff() goes out of scope, the reference it holds to the memory that was first claimed for "combo" is... poof... also gone. You just created a memory leak.

Rust cleans up after itself. If a variable goes out of scope, it disappears. If that variable "owns" memory, that memory will be freed. So as soon as combine_stuff() ends, in Rust, "combo"'s memory will be freed, and the variable will disappear... so you CAN'T return a reference to that memory, and thus you CAN'T create a memory leak. (There are ways of creating a variable in one place and then passing the reference around, but that involves a discussion of lifetimes: proving to the compiler that the memory you declared is going to be around at least long enough for the reference to stay valid. That is a somewhat advanced topic, though, which I didn't have to use just yet, because of how Rustic is structured.)
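For comparison, the version the borrow checker accepts simply returns the String by value, so ownership of the allocation moves to the caller (shown here with &str parameters, the more idiomatic form):

```rust
// Compiles: the function returns an owned String, so the caller takes
// over the allocation; no dangling reference or leak is possible.
fn combine_stuff(a: &str, b: &str) -> String {
    format!("{}{}", a, b)
}
```

The caller now owns the result, and Rust frees it automatically when it goes out of scope.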

Another, more dangerous example...

Code: Select all

fn main() {
    let mut vector1 = vec![1, 2, 3, 4, 5];
    let x = &vector1;
    vector1.push(6); // <== ERROR: cannot borrow `vector1` as mutable...
    println!("{:?}", x); // ...because it is still borrowed as immutable by `x`
}
If you do this in C/C++, it will probably work. In Rust, it doesn't. Why?

When you create vector1, it allocates space for 5 integers on the heap. Then, you create the variable "x", which holds a reference to vector1's buffer. That's fine so far... Now, when you push another integer onto vector1, the buffer will possibly (probably, actually) be moved in memory, because extra space is needed for the new integer. Where does your variable x now point to? Right... nothing.

In C/C++, it will possibly work, when the compiler allocates new memory, copies the contents of vector1, and then adds the new integer 6... and does NOT erase the memory where vector1 first pointed to. It's free, but it still holds the contents (for now!) it had before. If you use "x" way down the line, the memory might have been overwritten, and you get really strange results; and thus bugs and program crashes.

In Rust, the compiler knows that "x" will be invalidated if you change the memory vector1 points to. In a garbage-collected language, the language will make sure "x" changes (which is the reason why those languages are slower: they need to keep all this stuff correct behind the scenes). In Rust, "x" doesn't change. When you push an extra integer to the vector, Rust claims new memory, copies the vector, pushes the new element, and then invalidates all references to the old memory location, in this case "x".

This prevents you from:
1. Creating memory leaks.
2. Using garbage data way down the line.
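For completeness, the version of that example which does compile simply takes the borrow after the mutation, when no reallocation can invalidate it (wrapped in a function here just to show the result):

```rust
// Compiles: the shared borrow is created after the push, so no
// reference can outlive a reallocation of the vector's buffer.
fn push_then_borrow() -> Vec<i32> {
    let mut vector1 = vec![1, 2, 3, 4, 5];
    vector1.push(6); // mutate first...
    let x = &vector1; // ...then take the shared borrow: no conflict
    println!("{:?}", x);
    vector1
}
```

Reordering the borrow like this is usually all the borrow checker is asking for in such cases.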

The Rust borrow checker ENFORCES what you SHOULD be doing in C/C++ yourself. These are things most humans are very bad at. Mistakes are easily made. I personally am very good at it, but it also causes me to be a very slow programmer in C/C++, because in every function, I need to make sure *myself* that variables and memory are released, that I don't pass out pointers and references to memory that doesn't exist anymore... in Rust, I don't have to check this. I don't even have to release memory; the compiler releases it for me, and notifies me if I did something wrong and I'm still referencing it.

When writing functions, sequential code, loops, etc. you get the hang of this very quickly in Rust, if you have any knowledge of memory management in C/C++, and you stop running into the borrow checker all the time. In fact, the borrow checker becomes a friend, like:

*compile*
"HEY, YOU! %#&^&@$%@$#%" ==> {crap code here}
:o "Thanks man... saved my neck from an awesome bug *fix*"

The borrow checker got your back.

BUT... when studying new concepts that bring stuff like this to the next level (because of threads, generics, traits, and combinations of those), you'll start running into the borrow checker again, while it (harshly) teaches you the reasons why this crap doesn't work. (Or if it does SEEM to work, in a language like C or C++, it's either unsafe or unstable.)

Yes, it can be frustrating, just like having a good teacher that INSISTS you're doing stuff 100% correctly can be infuriating... but after you're past the point of studying that particular subject, it's easy; you just do it correctly all the time, and you KNOW FOR SURE there won't be any problems. Not with the teacher, but also not with your technique/program.

But you're right. It doesn't sound very inviting to a newcomer. I was fortunate that, in university, I had the aforementioned good but very strict teacher, who was basically a C/C++ borrow checker. Therefore, I have had no problems switching to Rust; everything between me and the programming language and its way of working is just dandy.

The only exceptions are cases like this, where I'm trying to find out WHAT THE HELL the syntax is that I need, and which traits (interfaces) I need to combine with what types to create what I want.

Which, in the case of traits, is more difficult than it needs to be, because the notation for types and traits is the same, and Rust doesn't use any prefixes in the standard library. As a trait is basically an interface, I have adopted the "Stuff implements IStuff" way of working from C#. It's not completely "Rust-like", but I don't care. I have written C++ types with "TObject" and "TStuff" for YEARS because I came from Borland Pascal/Delphi/C++, and it was the "Borland way" to start types with a T.

Borland Pascal: HashTable: THashTable
Borland C++: THashTable HashTable (<== I did this for a LONG time, even outside Borland products)
Modern C++: HashTable hashTable (I dislike this "dromedary case", i.e. camelCase)
Rust way: hash_table: HashTable (I highly prefer the snake_case for readability)

And to make a clear distinction between the HashTable type and the hash table's interface/trait, I have thus adopted "IHashTable" as a notation for that. I have even thought about reverting back to "THashTable" notation for the types, but I've decided against that. I'm fine with the snake_case: CamelCase notation.
Would you say there's a real benefit to writing a chess engine in Rust over C/C++ or is it just the novelty?
There are several benefits; not for chess software specifically, but for writing tiny but powerful high-performance programs. (Some of these benefits can be seen as drawbacks by others, obviously.)

- It doesn't have the ambiguity of C/C++ type sizes. There is no "int" or "long"; every type has an exact length and is signed or not: iX (signed integer of X = 8/16/32/64/128 bits), uX (unsigned integer), fX (float, 32 or 64 bits).
- There is only one compiler. If your platform has a compiler and the code compiles, it will work.
- Rust has editions. Programs written in "Rust 2015" will work and compile in any newer edition of the compiler. (The only newer edition is 2018 at the moment.) Therefore you can migrate from 2015 to 2018 by just changing that one line in cargo.toml, and then fixing the compiler errors. (Mostly syntactical.)
- cargo. It's the Rust "front-end" to the package manager and the rustc compiler. If you know anything such as npm, pip, nuget, apt, yum, pacman, or whatever package manager, then you'll understand cargo. You can interface with "rustc" and the package manager directly too, but "cargo" provides a lot of syntactic sugar by front-ending the compiler and package manager. (In the same way that "apt" and "aptitude" front-end dpkg in Debian.)
- crates.io. It's a curse and a blessing at the same time. Need a library such as crossbeam? Just include it in cargo.toml, and you can use it as if it's part of the standard library. That is the blessing. The curse is that the Rust team has decided that many things need to be in libraries on crates.io instead of in the standard library, so it's almost impossible to write a Rust program without using crates.io. That in itself isn't a real problem in today's connected internet world; but crates.io also contains a huge amount of *JUNK* right alongside world-class libraries such as crossbeam (multi-threading enhancements), rand (random number generators), serde (serialization, e.g. JSON handling), etc...
- The borrow checker. I just wrote half a book in this post why I like it... but I'm also sometimes frustrated by it.
- Because of the borrow-checker it's impossible to get memory leaks, race conditions, thread-unsafety, and so on... but note: it is STILL possible to just crash your program by doing x[10] into an array that only has 5 elements. The program will panic and abort, and NOT return random data. (i.e.: You can't read out of bounds without halting/crashing the program.)
- Using the same sort of "one writer, many readers" mechanism, the borrow checker not only maintains memory safety, but also thread safety, and prevents race conditions.
- If you write good, Rust-idiomatic code, a Rust program is just as fast as a C or C++ program. My friendly perft optimization competition with the author of Weiss, around this time a year ago, has proven that Rust and C have equal speed. Rustic and Weiss have near-identical move generators and make/unmake functions, and speed differences are due to compiler differences, settings, and how many incremental updates we do. (I just do all of them, even in perft, for testing purposes.) Weiss and Rustic are within +/- 3% of one another in running perft.
- Threading is built right into the language. No more PThreads, WinThreads, or Boost; you just use the standard library. If you want "special" stuff, you use (parts of) the crossbeam or Tokio libraries.
- Static, but still small executables. The executable of my engine is now 821 kB, and the only thing you need to run this engine is this executable. Everything is compiled into it. (It does link against glibc on Linux though, and MSVCRT on Windows. And against whatever the Mac and FreeBSD use.)
- If you REALLY need to get something done with memory directly, because you know you're right and the compiler should just shut up, you can always go unsafe { }. I did this in two places:
1. Not initializing the memory space of the move list with all 0 on making it, as Rust requires, because I KNOW that in the next statement, I'll be filling it with actual moves.
2. In the function for move ordering, I don't swap moves, I swap the actual pointers to the moves.
- Rust does not know NULL. It uses Option<T>. If a function could, but may not, return a u32, you make it an Option<u32>. Then you HAVE to return either Some(u32) or None; and the caller MUST handle both cases. No NULL return values. No NULL pointers/references. No dangling pointer stuff.
- Rust REQUIRES you to handle ALL cases in a match (= switch/case on steroids). If you match on a u32, you will have to handle ALL the integers. If you only need to handle 1 to 5, you do so, and then add a general case "_" where you handle the rest, or do nothing. The same goes for many other language constructs. If you tell Rust to do X, and X can be done in several ways or with several outcomes, you MUST handle all the ways and/or all the outcomes, OR specifically state that "nothing" needs to be done. This prevents a whole class of bugs on its own.
- Rust has a coding standard built in (rustfmt). It automatically formats your code, if you use an IDE that can run rustfmt in the background. (You can disable this or change the rules if you wanted to, but I wouldn't; basically the entire world uses rustfmt as it is.)
- clippy. Yes. It's named after clippy, from Office. But, as opposed to Grandpa Clippy, this one gives a lot of good information. It's an ultra-linter. If you do something that is not efficient (but passes the borrow-checker), Clippy will tell you why it's not efficient, and gives hints, a solution, or sometimes even a code snippet on how to fix this.
- Updates. Want to update Rust? "rustup update <enter>". Done. There's an update every 6 weeks.
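The Option and match points from the list above can be sketched like this (a hypothetical helper for illustration, not Rustic code):

```rust
// Hypothetical probe: a hash move may or may not be found.
fn best_move_from_tt(found: bool) -> Option<u32> {
    if found { Some(0x1A2B) } else { None }
}

fn describe(result: Option<u32>) -> &'static str {
    // The compiler rejects this match if either arm is missing:
    // every Option is either Some or None, and both MUST be handled.
    match result {
        Some(_) => "order the hash move first",
        None => "fall back to normal move ordering",
    }
}
```

There is no way to "forget" the miss case, which is exactly the class of bug NULL returns invite.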

The biggest problems in Rust are:
- The language is still heavily in development, so sometimes syntax may change.
- Sometimes the language doesn't have a well-known feature yet (but sometimes, it also has features of its own).
- Go is much more popular in the internet/webdev space, because it's "easier". Personally, I think it's junk... it looks and feels like "C for the internet, and a bit of javascript too"; with all the drawbacks of both. I'd personally never write Go voluntarily. Also, Google is "the boss"; Go == Google. The result is that Go libraries for web development are more extensive/mature than the ones in Rust (right now).
- GUI toolkit writers are partial to C++. There are many more GUI toolkits in C/C++ than there are in Rust; and the ones in Rust basically just interface to C/C++ ones.
- There are many more graphics libraries for C++ than there are for Rust.

In time, those problems will be solved, as Rust is used more widely, and the Rust foundation is off to a good start with huge companies joining it. Microsoft is experimenting with Rust, writing new pieces of software in it that may end up in Windows. There are talks of supporting Rust as a language for developing modules for the Linux kernel, next to C.
Do you use the intrinsics to do something like popcount and are they as fast as you'd expect them to be?
I don't need to "use" intrinsics. The current version of the language uses them for me, if I compile with a cpu-target that supports them.

Strangely enough though, it's hard to say which is fastest; sometimes the POPCNT compile is faster than BMI2, sometimes both are slower than the generic 64-bit compile. It depends on the version of the compiler used, and thus the version of the LLVM backend the compiler itself is using. It also depends on the position I'm analyzing, or running perft on, with or without the hash table...

On average though, I can say that the BMI2 version is about 4-5% faster than the generic 64-bit version; most of the time the POPCNT-version is in between. It does fluctuate, however, as I said, based on the version of the compiler, the position, and the hash table size. All of the versions are within 4-5% of one another.

It is not going to be the case that your program suddenly smashes through positions 50% faster if you compile with POPCNT or BMI2, and gains hundreds of Elo. The only real, consistent difference is between the 32-bit and 64-bit versions: the 64-bit version is about twice as fast. An engine based on 64-bit bitboards is probably not a good idea if you're going to run a 32-bit executable. (Which is the case for the Raspberry Pi for now, but for that computer, I don't care; even there, the engine runs at 500,000 nps, which is still strong enough to defeat me and many other players, when paired with a good search and evaluation. That version of the engine is for playing against, not for competing. Because of that engine, I'll also be adding a skill/level feature at some point.)
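To illustrate the point about not calling intrinsics directly: counting bits on a bitboard in Rust is just count_ones(), and the compiler decides whether that becomes a POPCNT instruction (e.g. when building with `-C target-cpu=native` or `-C target-feature=+popcnt`) or a software fallback:

```rust
// count_ones() compiles to a single POPCNT instruction when the
// target CPU supports it; no explicit intrinsic call is needed.
fn count_pieces(bitboard: u64) -> u32 {
    bitboard.count_ones()
}
```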

Ras
Posts: 1875
Joined: Tue Aug 30, 2016 6:19 pm
Full name: Rasmus Althoff

Re: Progress on Rustic

Post by Ras » Sun Feb 21, 2021 4:01 pm

mvanthoor wrote:
Sun Feb 21, 2021 1:50 pm
Now, when you push another integer onto vector1
Not a problem in C unless you write such a push function yourself.
- It doesn't have the ambiguity of C/C++ type sizes. There is no "int" or "long"; every type has an exact length and is signed or not: iX (signed integer of X = 8/16/32/64/128 bits), uX (unsigned integer), fX (float).
Solved since C99, and even better, because besides types like uint16_t, you also have e.g. uint_fast16_t. Also, threading as a language feature has been solved since C11.
- There is only one compiler.
No compiler competition is a bad thing. And there can't be more compilers because Rust doesn't have an ISO standard. Rust is whatever the one and only compiler happens to do.
- cargo. It's the Rust "front-end" to the package manager and the rustc compiler. If you know anything such as npm, pip, nuget, apt, yum, pacman, or whatever package manager, then you'll understand cargo.
And if you hate the NPM hell, you'll also hate Cargo - for the same reasons.
- crates.io.
That's the continuation of the Cargo mistake. See how another compiler would be nice where people don't come from a web dev background like originally at Mozilla? In web SW, nobody cares whether you can build a five year old browser. In industrial SW, updating SW after even ten years without ongoing development in between is common enough.
- Using the same sort of mechanism of "one writer, many readers", the borrow-checker not only maintains memory saftey, but also thread-saftey and prevents race conditions.
One writer, many readers isn't enough to prevent race conditions across threads. That also requires atomic writes / reads, and maybe memory barriers, in particular on platforms with weak memory ordering such as ARM. Or guarding that with mutexes instead which also already have memory barriers.
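In Rust terms, that means even a lone shared counter has to be an atomic with an explicit ordering before a cross-thread read is well-defined. A sketch (Relaxed suffices for a counter that publishes nothing else; publishing other data through it would need Acquire/Release or stronger):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// One writer thread bumps the counter; the main thread reads it after
// the join. The atomic type plus the ordering make this well-defined
// even on weakly ordered hardware such as ARM.
fn run() -> u64 {
    let nodes = Arc::new(AtomicU64::new(0));
    let writer = {
        let n = Arc::clone(&nodes);
        thread::spawn(move || {
            for _ in 0..1_000 {
                n.fetch_add(1, Ordering::Relaxed);
            }
        })
    };
    writer.join().unwrap();
    nodes.load(Ordering::Relaxed)
}
```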
- Static, but still small executables. The executable of my engine is now 821 kB
This is huge. My statically linked Windows executable is at 300 kB, but that already includes 112 kB for the opening book embedded in the executable, 24 kB for the KPK bitbase, and a lot of features that Rustic doesn't have yet.
- If you REALLY need to get something done with memory directly, because you know you're right and the compiler should just shut up, you can always go unsafe { }.
Which is how Rust code will pan out in practice. Right now, language enthusiasts enjoy fighting the compiler for weeks over simple matters, but that's no practical way in the industry. Even if you get more reliable code as final result, your competition will have grabbed the market by then already.
The biggest problems in Rust are:
No ISO standard, no compiler competition, Cargo, and crates.io.
Rasmus Althoff
https://www.ct800.net


Re: Progress on Rustic

Post by mvanthoor » Sun Feb 21, 2021 4:52 pm

Ras wrote:
Sun Feb 21, 2021 4:01 pm
Not a problem in C unless you write such a push function yourself.
So you're always 100% sure that a vector won't be relocated in memory when you push more elements onto it? That was not what I was taught.
Solved since C99, and then even better because besides types like uint16_t, you also have e.g uint_fast16_t. Also threading as language feature has been solved since C11.
That is just an addition of extra types. For some reason, many people are not using them. Don't ask me why. I still see many chess engines just using "int" where an unsigned one-byte type would have been enough.
No compiler competition is a bad thing. And there can't be more compilers because Rust doesn't have an ISO standard. Rust is whatever the one and only compiler happens to do.
Why is that a problem? There's also only one C# compiler as far as I know, and that doesn't seem to be a problem. The compiler is open source, so anyone can create their own if they wanted to. Actually... rustc uses LLVM as a backend, so the compiler itself doesn't even generate machine code. It compiles to an intermediate representation, which LLVM then takes over. There is work in progress on a version of rustc that can use GCC as a backend.

For me, that's enough. Most other open source compilers are just playing in the margins, or they're commercial and not used by open source programmers.
And if you hate the NPM hell, you'll also hate Cargo - for the same reasons.
If a package manager is a bad thing, then the entire Linux eco-system is a bad thing. It's not Cargo that's the problem, nor crates.io in itself. The problem is that the Rust foundation doesn't want to curate crates.io by splitting off libraries that have become so important that they can be considered de facto standards.
That's the continuation of the Cargo mistake. See how another compiler would be nice where people don't come from a web dev background like originally at Mozilla? In web SW, nobody cares whether you can build a five year old browser. In industrial SW, updating SW after even ten years without ongoing development in between is common enough.
crates.io keeps all old versions of the packages, and Rust code that is not built with a nightly compiler can always be compiled by newer compilers. That makes installing dependencies MUCH easier; and if need be, you can even have different versions of the same dependency compiled into your program, if two libraries each require a different version.

You don't ever have to track down part of the source code of a program because you're now missing THAT dependency for THIS library. You just compile the program, and all dependencies are satisfied automatically.
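The version handling described above can be sketched in a hypothetical Cargo.toml; the crate names and version numbers below are purely illustrative, not Rustic's actual dependencies:

```toml
[package]
name = "example-engine"
version = "0.1.0"
edition = "2018"

[dependencies]
# "1.2" is a semver requirement: any compatible 1.x release >= 1.2.
some-lib = "1.2"
# An exact pin, for when reproducibility matters more than updates.
other-lib = "=0.4.7"
```

Cargo resolves these requirements against the versions kept on crates.io, and Cargo.lock then records the exact versions used, so the same build can be reproduced later.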

The problem isn't the package manager cargo, or crates.io by itself. The problem is the Rust foundation not wanting to curate crates.io, or to set up a different (curated) repository with big, well-maintained libraries. They're convinced that "the community will decide what is best, and the rest will just die off."
One writer, many readers isn't enough to prevent race conditions across threads. That also requires atomic writes / reads, and maybe memory barriers, in particular on platforms with weak memory ordering such as ARM. Or guarding that with mutexes instead which also already have memory barriers.
Rust can do all of that. Actually, when sharing data between threads, the compiler requires most of those things.
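As a minimal sketch of that point: in safe Rust, a plain `u64` shared between threads simply won't compile; you are forced into an atomic type (or a Mutex) with an explicit memory ordering. The function below is illustrative, not code from Rustic.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Spawn `threads` workers that each perform `per_thread` increments on a
// shared counter, then return the final count.
fn parallel_count(threads: u64, per_thread: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // A plain `u64` here would be rejected by the compiler;
                // the atomic type makes the cross-thread mutation legal.
                c.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", parallel_count(4, 10_000));
}
```

After all joins, every increment is visible, so the result is exact rather than racy.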
This is huge. My statically linked Windows executable is at 300 kB, but that already includes 112 kB for the opening book embedded in the executable, 24 kB for the KPK bitbase, and a lot of features that Rustic doesn't have yet.
That is because every operating system (including Windows) ships some C and/or C++ runtime; no operating system ships a Rust runtime. 841 kB is still smaller than the 5 bazillion files .NET produces when creating an executable... or, when compiled statically, an executable of 20 MB or so. A bare-bones Electron application is about half a gigabyte, at least the last time I looked. Nobody seems to think any of these things are a problem.

I'll take an 840 kB executable so I don't have to program in C/C++ anymore.
Which is how Rust code will pan out in practice. Right now, language enthusiasts enjoy fighting the compiler for weeks over simple matters, but that's no practical way in the industry. Even if you get more reliable code as final result, your competition will have grabbed the market by then already.
That is not how it works. Language enthusiasts don't enjoy fighting the borrow checker, and nobody fights it for weeks over simple matters, except when:

- They write code that would have been error-prone and crash-happy if written that way in just about any other language, or
- They don't know the Rust ecosystem well enough yet: most of the time it's a matter of knowing which traits to use when and where. That was my issue when implementing the generic hash table.

Most of the time, I can write Rust code as fast as, or faster than, I could write C/C++ code in the past, and be MORE sure that it works correctly.

Now that I have created this generic, thread-safe, dynamic hash table, which combines traits, generics, and threads in one concept, I'm sure I don't need so many hours to do this again.

The only time you really NEED unsafe code is when you're going to do something that either IS unsafe because it's faster (such as me not initializing the move list array, because I know it'll be filled by the next statement), or that is inherently unsafe (writing parts of compilers or operating systems). If you need that, you can have it. If you don't need it, your code can be as fast as C/C++ code, but with safety assurances similar to those of garbage-collected code.
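The "don't initialize the move list" trick mentioned above can be sketched with `MaybeUninit`. This is an illustration under assumptions, not Rustic's actual code: the `Move` type and list layout here are made up.

```rust
use std::mem::MaybeUninit;

const MAX_MOVES: usize = 255;

// Hypothetical move representation; a real engine packs more data.
#[derive(Copy, Clone)]
struct Move(u32);

struct MoveList {
    // No initialization cost: the slots start out uninitialized.
    moves: [MaybeUninit<Move>; MAX_MOVES],
    count: usize,
}

impl MoveList {
    fn new() -> Self {
        MoveList {
            // Sound: an array of MaybeUninit requires no initialization.
            moves: unsafe { MaybeUninit::uninit().assume_init() },
            count: 0,
        }
    }

    fn push(&mut self, m: Move) {
        self.moves[self.count] = MaybeUninit::new(m);
        self.count += 1;
    }

    fn get(&self, i: usize) -> Move {
        debug_assert!(i < self.count);
        // Safe only because push() has written every slot below `count`.
        unsafe { self.moves[i].assume_init() }
    }
}
```

The unsafety is contained in two small spots, and the invariant that justifies it (only slots below `count` are ever read) is local to the type.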
The biggest problems in Rust are:
No ISO standard, no compiler competition, Cargo, and crates.io.
The ISO standard will come. A GCC-backed version of the compiler (or other compilers) will come. And at some point, the Rust foundation will probably set up either a curated crates.io or a separate, independent repository to get rid of the (old) junk. Rust is only 5 years out of the experimental stage.

The one thing I can say with regard to the user-friendliness of the language is that there are many C/C++ chess engines I gave up on compiling, because of missing dependencies, actual problems within the code because it was written for THAT compiler and no other, or non-existent build scripts. Of all the Rust programs I wanted to compile, the only thing I ever had to do was "cargo build --release" (on either Windows or Linux)... and there's my executable. The only programs this didn't work with were the ones that used nightly features of a certain compiler version that have since been dropped. (The chess engine Crabby, most notably.)

Rust has its problems. It's not perfect. But, IMHO, they are much smaller than the tangled web of problems that exists in C and C++; and some of them exist simply because Rust hasn't been around for very long yet.
Author of Rustic.
Releases | Code | Docs

User avatar
lithander
Posts: 161
Joined: Sun Dec 27, 2020 1:40 am
Location: Bremen, Germany
Full name: Thomas Jahn

Re: Progress on Rustic

Post by lithander » Sun Feb 21, 2021 8:43 pm

Interesting read on the pros and cons of Rust. Thanks Rasmus for providing the counter-arguments! I haven't followed the development of C++ in recent years. Stuff like 'let' still looks strange to my eyes, and all I could think when seeing your C++ examples was "that's why I prefer C#"; but of course, a garbage collector is a level of abstraction you maybe can't afford in competitive, performance-critical programming. I don't mean to argue against there being valid use-cases for low-level languages.
mvanthoor wrote:
Sun Feb 21, 2021 1:50 pm
- If you write good, Rust-idiomatic code, a Rust program is just as fast as a C or C++ program. My friendly perft optimization competition with the author of Weiss, around this time a year ago, has shown that Rust and C have equal speed. Rustic and Weiss have near-identical move generators and make/unmake functions, and the speed differences are due to compiler differences, settings, and how many incremental updates we do. (I just do all of them, even in perft, for testing purposes.) Weiss and Rustic are within +/- 3% of one another when running perft.
That's an interesting testcase to compare programming language performance. Did these near-identical generators get ports to any other languages, too? I would be interested in how C# does in that comparison, obviously! ;)
Minimal Chess. My very first chess engine! Details on Youtube & Github

User avatar
mvanthoor
Posts: 986
Joined: Wed Jul 03, 2019 2:42 pm
Location: Netherlands
Full name: Marcel Vanthoor
Contact:

Re: Progress on Rustic

Post by mvanthoor » Sun Feb 21, 2021 9:11 pm

lithander wrote:
Sun Feb 21, 2021 8:43 pm
That's an interesting testcase to compare programming language performance. Did these near-identical generators get ports to any other languages, too? I would be interested in how C# does in that comparison, obviously! ;)
Uh... no ports to other languages. I didn't port Weiss' move generator. But we did start from the same point, at least, kind of.

You may know the chess engine VICE, by Richard Allbert / BlueFever Software. It's the one with the ~85-video YouTube series. It inspired a large number of people to try their hand at chess programming. Some engines are directly derived from it (like Weiss by Terje), some were inspired by it (like Wukong by Maksim), and some engines have the VICE video series to thank for a massive refreshment and enlightenment in chess programming principles (like Rustic).

Weiss was derived directly from VICE. Terje refactored it, and then turned it into a magic bitboard engine, and then started improving it. AFAIK, in the current version of Weiss, there's hardly a line of code left from VICE.

I made a step-by-step plan, using BlueFever's videos as a guide, and started an implementation of my own engine from scratch, but "VICE-like" in setup. I had just finished the implementation of the king and knight (as a mailbox engine) when I started to regret the fact that it wasn't a bitboard engine. So I halted development, and started reading about and experimenting with bitboards. In the end, I had a move generator that was half mailbox and half bitboard based, and it somewhat resembled Frankenstein's monster: different pieces that don't really fit, stitched, glued and bolted together in some sort of way.

I turned the King and Knight move generator parts into bitboards, and set about on a massive refactoring, which got the engine to basically the structure it now has, which is much more Rust-like and much less C-like. (Even though some improvements can still be done.)

Then I implemented Perft, downloaded and compiled Weiss (and Minic), compared the engines, and came to the conclusion that I must have been doing something wrong, because Weiss was something like 5x faster. So I started profiling, refactoring, and rewriting, at some points getting pointers (pun not intended) from community members here, until Rustic was as fast as Weiss, and I was happy about it. I thought about selling the move generator for 5 million euros and then retiring.

That didn't work, actually.

Terje found out, of course, that I had used Weiss as target practice for my engine's speed improvements, and so he improved Weiss in return. ( :evil: ) We "slugged it out" (in friendly fashion) over the course of 6 weeks or so, profiling, rewriting, scraping off a few lines or allocations here and there, until there was nothing left to profile, scrape, or rewrite. At that point, the move generators and make/unmake functions were near-identical, one written completely in C, the other in Rust (so, completely different structure, but the same functionality), and they were also near-identical in speed, at +/- 40 million leaves/sec on my system.

At that point I added incremental updates of the PST values to the functions that move the pieces. Incremental updates are very good for chess play (because your evaluation doesn't need to calculate those values from scratch every time), but they're bad for perft, because the piece movements now do calculations that perft doesn't need. That doesn't matter though: I now use perft for debugging purposes; for checking the working of the hash table, checking that the incremental values are the same as the ones that would be calculated from scratch, that sort of thing.
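The incremental-update idea can be sketched in a few lines. The piece-square values and board layout below are invented for illustration; Rustic's real tables and types differ.

```rust
// Toy piece-square value: a small centralization bonus, illustration only.
fn pst(square: usize) -> i32 {
    let f = (square % 8) as i32;
    let r = (square / 8) as i32;
    6 - (2 * f - 7).abs() / 2 - (2 * r - 7).abs() / 2
}

struct Board {
    pst_score: i32,
}

impl Board {
    // On a quiet move only the moving piece's contribution changes, so
    // the running total is adjusted instead of recomputed from scratch.
    fn make_move(&mut self, from: usize, to: usize) {
        self.pst_score += pst(to) - pst(from);
    }

    // Full recomputation over all occupied squares; as described in the
    // post, this is only used to verify the incremental value in debugging.
    fn recompute(piece_squares: &[usize]) -> i32 {
        piece_squares.iter().map(|&sq| pst(sq)).sum()
    }
}
```

The debugging check is then simply: after any sequence of moves, `pst_score` must equal `recompute()` over the current piece placement; if it doesn't, make/unmake has a bug.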

Rustic is a chess engine, not a perft tool, so if a change improves the search/evaluation speed while it degrades perft speed, it was a good change.

There's nothing stopping you from implementing a magic bitboard move generator in C#... the code is all there, both in Weiss and Rustic (and many other engines, but I don't know those internally); Rustic even has a "wizardry" function that generates the magic numbers for you, which can then be used in the bitboard move generator. (I put that in because it's quite hard to find good, consistent information about how to generate that set of 128 numbers. It took me quite some time.)
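For readers unfamiliar with the technique: the core of a magic bitboard lookup is hashing the relevant occupancy bits into a table index. A minimal sketch of that formula (the constants below are placeholders, not real, verified magic numbers):

```rust
// Magic bitboard index calculation: mask out the squares relevant to the
// slider, multiply by the magic constant, and keep the top bits as index.
fn magic_index(occupancy: u64, mask: u64, magic: u64, shift: u32) -> usize {
    (((occupancy & mask).wrapping_mul(magic)) >> shift) as usize
}
```

A "wizardry"-style generator then searches for the magic constants by trial and error: it tries random candidates and keeps one only if no two occupancies that require different attack sets map to the same index. That collision-free mapping is what makes the single multiply-and-shift lookup work at runtime.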
Author of Rustic.
Releases | Code | Docs

Post Reply