Windows AVX2
And this is not the issue. The engine is running and playing.
Most GUIs don't care about PV lines and ponder moves; they just print them without verifying the moves. But cutechess-cli checks the moves and prints a warning.
So, it is an engine bug, but in most GUIs it will not be noticed.
I checked the SSE Windows binary, too. The same bug.
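The check cutechess-cli performs can be approximated. Below is a minimal Python sketch (stdlib only; all names are my own, not cutechess-cli's) that extracts the PV from a UCI `info` line and sanity-checks the move syntax. Note this only catches malformed tokens; verifying that each move is actually legal, as cutechess-cli does, additionally requires playing the line out with a move generator.

```python
import re

# UCI long algebraic move: from-square, to-square, optional promotion piece.
UCI_MOVE = re.compile(r"^[a-h][1-8][a-h][1-8][qrbn]?$")

def extract_pv(info_line):
    """Return the PV move tokens from a UCI 'info' line (empty if none)."""
    tokens = info_line.split()
    if "pv" not in tokens:
        return []
    return tokens[tokens.index("pv") + 1:]

def pv_syntax_ok(info_line):
    """Reject obviously malformed PV tokens. This is only a syntax check;
    full legality verification needs a move generator for each position."""
    pv = extract_pv(info_line)
    return bool(pv) and all(UCI_MOVE.match(m) for m in pv)
```

A GUI that merely prints the PV would never call anything like `pv_syntax_ok`, which is why such bugs go unnoticed outside cutechess-cli.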
New engine releases & news H1 2022
Moderator: Ras
-
- Posts: 2701
- Joined: Sat Sep 03, 2011 7:25 am
- Location: Berlin, Germany
- Full name: Stefan Pohl
-
- Posts: 973
- Joined: Sat May 13, 2006 1:08 am
Re: New engine releases & news H1 2022
Angle wrote: ↑Fri May 27, 2022 7:24 am
    After a long break, Ivan Maklyakov released a new version of his engine Uralochka 3.35a. The expected rating of the single-threaded version is 3200-3250 Elo (according to the CCRL scale)! You can download binaries for SSE, AVX2, and AVX512 (Windows & Linux) and read more information (in Russian) here: http://sdchess.ru/news.htm
Guenther wrote: ↑Fri May 27, 2022 10:40 am
    With or w/o source code?
Angle wrote: ↑Fri May 27, 2022 10:47 am
    I only know what is shared on the page of Sergey and Dmitry Kudryavtsev (sdchess.ru). Only binaries are posted there. I'll ask them if the author provides the source code.
Besides the source code, it would be fine to know more about the embedded NN(UE).
An "own" net, or based on another engine's net, and if so, on which?
Interesting engine, btw. My first "unofficial" tests (non-CEGT!!) at 2'+1" showed a performance of ~3240 on our scale, which would mean close to 3400 (!!) CCRL. But only 200 games...
And no problems here, nor at my colleague Jörg Burwitz's, with Shredder Classic, Arena, Banksia, and Cutechess (w/o "cli"). I also tried Banksia on Linux, no problems there either.

-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: New engine releases & news H1 2022
I've not looked at it very closely, but on initial inspection, mostly looking at the function names, arguments and where it spends time, it doesn't seem an obvious clone. In general, it seems rather unoptimized (spending more time on move generation than accumulator updates/forward propagation!). Regardless, some explanation from the author regarding where it originated from and how its network was trained would probably be appreciated.
Code:
10.71% Uralochka3.35a- Uralochka3.35a-avx2 [.] TranspositionTable::load
6.47% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::gen_quiet
6.45% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_piece_add
5.82% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_piece_remove
5.55% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::move_do
4.74% Uralochka3.35a- Uralochka3.35a-avx2 [.] Game::search
3.86% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::gen_kills
3.75% Uralochka3.35a- Uralochka3.35a-avx2 [.] Bitboards::poplsb
3.49% Uralochka3.35a- Uralochka3.35a-avx2 [.] TranspositionTable::prefetch
3.44% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_predict
3.40% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::get_next
3.35% Uralochka3.35a- Uralochka3.35a-avx2 [.] __memmove_avx_unaligned_erms
3.19% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_piece_add
3.01% Uralochka3.35a- Uralochka3.35a-avx2 [.] std::__introsort_loop<__gnu_cxx::__normal_iterator<Move*, std::vector<Move, std::allocator<Move> > >, long, __gnu_cxx::__ops::_Iter_comp_iter<Moves::g▒
2.91% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::attacks_all
2.79% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_piece_add
2.45% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::moves_pawn
2.40% Uralochka3.35a- Uralochka3.35a-avx2 [.] Magic::index
2.25% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::see
1.99% Uralochka3.35a- Uralochka3.35a-avx2 [.] TranspositionTable::save
1.88% Uralochka3.35a- Uralochka3.35a-avx2 [.] std::__insertion_sort<__gnu_cxx::__normal_iterator<Move*, std::vector<Move, std::allocator<Move> > >, __gnu_cxx::__ops::_Iter_comp_iter<Moves::get_nex▒
1.84% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::kills_pawn
1.30% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::check_draw
1.09% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::init_generator
1.05% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::move_undo
1.05% Uralochka3.35a- [kernel.kallsyms] [k] mutex_spin_on_owner
0.95% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::add
0.95% Uralochka3.35a- Uralochka3.35a-avx2 [.] Game::quiescence
0.87% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::SEE
0.72% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::moves_king
0.53% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::get_history
0.51% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::is_check
0.48% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::color
0.39% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::is_attacked
0.38% Uralochka3.35a- Uralochka3.35a-avx2 [.] Bitboards::bit_clear
0.37% Uralochka3.35a- Uralochka3.35a-avx2 [.] Moves::add
0.28% Uralochka3.35a- Uralochka3.35a-avx2 [.] Bitboards::bit_set
0.24% Uralochka3.35a- Uralochka3.35a-avx2 [.] TranspositionTable::TranspositionTable
0.23% Uralochka3.35a- Uralochka3.35a-avx2 [.] __memmove_avx_unaligned
0.23% Uralochka3.35a- Uralochka3.35a-avx2 [.] std::vector<TTCell, std::allocator<TTCell> >::_M_default_append
0.21% Uralochka3.35a- Uralochka3.35a-avx2 [.] Neural::accum_all_pieces
0.17% Uralochka3.35a- Uralochka3.35a-avx2 [.] Board::moves_init
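For context on why move generation outweighing accumulator updates is surprising: with an NNUE-style network, the first-layer output (the "accumulator") is updated incrementally on make/unmake by adding or subtracting one weight column per changed piece-square feature, rather than recomputing the whole layer. Here is a toy Python sketch of that idea; the function names mirror the symbols in the profile above, but the sizes, weights, and internals are my own assumptions, not Uralochka's code.

```python
# Toy NNUE-style accumulator update. Sizes and weights are made up;
# real nets use tens of thousands of features and much wider layers.
N_FEATURES, N_HIDDEN = 768, 16

# weights[f] is the first-layer weight column for input feature f
# (a piece-square feature). Deterministic dummy values for the sketch.
weights = [[0.001 * (f + h) for h in range(N_HIDDEN)]
           for f in range(N_FEATURES)]

def accum_init(bias):
    """Fresh accumulator: a copy of the first-layer bias vector."""
    return list(bias)

def accum_piece_add(acc, feature):
    """A piece appeared on a square: add that feature's column."""
    col = weights[feature]
    for i in range(N_HIDDEN):
        acc[i] += col[i]

def accum_piece_remove(acc, feature):
    """A piece left a square: subtract that feature's column."""
    col = weights[feature]
    for i in range(N_HIDDEN):
        acc[i] -= col[i]

# A quiet move touches only two features (from-square off, to-square on),
# so make/unmake costs O(N_HIDDEN) instead of a full forward pass.
```

Because each update is just a short vectorizable add/subtract, a tuned engine normally spends far less time here than in move generation, which is what makes the profile above look unoptimized.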
-
- Posts: 6888
- Joined: Wed Nov 18, 2009 7:16 pm
- Location: Gutweiler, Germany
- Full name: Frank Quisinsky
Re: New engine releases & news H1 2022
Hi there,
from 0 to 100 ... viewpoint awareness level!
Again an interesting story!
Would like to have more information about the person and development.
Best
Frank
-
- Posts: 60
- Joined: Sat Dec 11, 2021 5:03 am
- Full name: expositor
Re: New engine releases & news H1 2022
Expositor 2WN29
https://github.com/expo-dev/expositor/r ... /tag/2WN29
This version is roughly 100 Elo stronger than 2WQ23 (estimating from Cute Chess results).
-
- Posts: 4718
- Joined: Wed Oct 01, 2008 6:33 am
- Location: Regensburg, Germany
- Full name: Guenther Simon
Re: New engine releases & news H1 2022
Smallbrain (new) 1.0 - 1.2
author: Max Allendorf (DEU)
https://github.com/Disservin/Smallbrain
Expositor 2WN29
https://github.com/expo-dev/expositor
-
- Posts: 60
- Joined: Sat Dec 11, 2021 5:03 am
- Full name: expositor
Re: New engine releases & news H1 2022
Guenther found a link-time error in the Windows builds that causes stack overflows, so I've re-released 2WN29: https://github.com/expo-dev/expositor/r ... /tag/2WN29
So sorry about that, everyone!
-
- Posts: 297
- Joined: Sat Jun 30, 2018 10:58 pm
- Location: Ukraine
- Full name: Volodymyr Shcherbyna
Re: Igel 3.1.0

Igel 3.1.0 at https://github.com/vshcherbyna/igel/releases/tag/3.1.0 (official executable binaries for Windows and IGN net).
This release brings a decent improvement in strength.
What's new:
- Scale eval at 1280
- More aggressive pruning of quiets when not improving
- Use 150 cp when razoring
- Avoid tt cutoff for rule50
- Improve statistics in SMP mode
- New network trained using 16B of d5-d8 data from Igel HCE/ign-1-139b702b with d16 validation using ign-1-139b702b
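For readers unfamiliar with the terminology, here is a rough sketch of what a 150 cp razoring margin means in a generic alpha-beta searcher. The exact conditions Igel uses live in its source and will differ in detail; this is only an illustration of the heuristic.

```python
RAZOR_MARGIN = 150  # centipawns, matching the figure in the changelog

def should_razor(depth, static_eval, alpha, in_check):
    """Near the leaves, if the static evaluation plus a generous margin
    still cannot reach alpha, quiet play is unlikely to help, so the
    search drops into quiescence instead of a full-depth search.
    Depth limit and check exclusion are typical but assumed here."""
    return (not in_check) and depth <= 2 and static_eval + RAZOR_MARGIN <= alpha
```

When `should_razor` fires, the engine verifies with a quiescence search rather than spending a full node budget on a position that is almost certainly failing low.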
Regression run against Igel 3.0.5:
Long Time Control With Increment
Code:
ELO | 30.13 +- 2.64 (95%)
CONF | 60.0+0.60s Threads=1 Hash=64MB
GAMES | N: 20000 W: 3906 L: 2176 D: 13918
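The Elo figure follows directly from the match score under the standard logistic model; a quick check reproduces it from the W/L/D counts above:

```python
from math import log10

def elo_from_wld(wins, losses, draws):
    """Elo difference implied by a match score under the logistic model."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    return 400.0 * log10(score / (1.0 - score))

# For the run above: elo_from_wld(3906, 2176, 13918) is about +30.1,
# in line with the reported 30.13 +- 2.64.
```

The very high draw rate (about 70%) is what makes the error bar so tight at 20000 games.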