accepted!

Rebel wrote: ↑Thu May 06, 2021 12:15 pm
From the new GRL:
I think you owe Guenther a good bottle of wine :wink:

Code:
 #  PLAYER      :  RATING  ERROR  POINTS  PLAYED  (%)  LOS    W    D    L  DRAWS
16  Seer 2.0.1  :  3097.1   27.2   476.0     700   68   95  375  202  123    29%
21  Seer 2.0.0  :  3029.8   28.9   552.0     900   61   52  425  254  221    28%
Seer 2.0.0
Moderator: Ras
-
- Posts: 4718
- Joined: Wed Oct 01, 2008 6:33 am
- Location: Regensburg, Germany
- Full name: Guenther Simon
Re: Seer 2.0.0
-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: Seer 2.0.0
That's a fantastic result for Seer! I believe I owe Guenther (and chrisw) as well. Thanks for bringing the TM issues to my attention.

Rebel wrote: ↑Thu May 06, 2021 12:15 pm
From the new GRL:
I think you owe Guenther a good bottle of wine :wink:

Code:
 #  PLAYER      :  RATING  ERROR  POINTS  PLAYED  (%)  LOS    W    D    L  DRAWS
16  Seer 2.0.1  :  3097.1   27.2   476.0     700   68   95  375  202  123    29%
21  Seer 2.0.0  :  3029.8   28.9   552.0     900   61   52  425  254  221    28%
-
- Posts: 167
- Joined: Tue Mar 05, 2019 3:43 pm
- Full name: Archimedes
Re: Seer 2.0.0
As there is built-in support for SSE instructions, one can use the sse2neon header file in the meantime (or maybe permanently). It works for Koivisto, Sting and Winter, and now for Seer (which is a lot faster now).

connor_mcmonigle wrote: ↑Tue Apr 27, 2021 5:05 pm
Seer's speed on ARM will likely be pretty disappointing until I add a NEON path though.
Seer 2.2.0 for Android:
https://app.box.com/s/a3o0qfhe7prd9lulaaesai0tq55sqydo
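For context, sse2neon is a drop-in header that re-implements the SSE intrinsics on top of ARM NEON, so an engine's existing SSE code path can compile unchanged on ARM. A minimal sketch of the idea; the include guard and the summing function below are illustrative, not code from Seer:

Code:
#if defined(__ARM_NEON)
#include "sse2neon.h"   // maps each _mm_* intrinsic onto the equivalent NEON op
#else
#include <emmintrin.h>  // native SSE2 on x86
#endif

// Horizontal sum over an int32 array, four lanes at a time, written with
// plain SSE2 intrinsics. With sse2neon included, the same source builds
// and vectorizes on ARM. Assumes n is a multiple of 4.
int sum_i32(const int* data, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 4) {
        acc = _mm_add_epi32(acc, _mm_loadu_si128((const __m128i*)(data + i)));
    }
    int lanes[4];
    _mm_storeu_si128((__m128i*)lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}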
-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: Seer 2.0.0
Thanks, great work! Your compile relying on the sse2neon header seems to outperform, by a fair bit, the compiles I created previously on my phone (using Termux) that relied on the generic fallback and auto-vectorization. It would seem compilers still struggle pretty universally to auto-vectorize dot products efficiently.

Archimedes wrote: ↑Fri Aug 06, 2021 10:48 am
As there is built-in support for SSE instructions, one can use the sse2neon header file in the meantime (or maybe permanently). It works for Koivisto, Sting and Winter, and now for Seer (which is a lot faster now).

connor_mcmonigle wrote: ↑Tue Apr 27, 2021 5:05 pm
Seer's speed on ARM will likely be pretty disappointing until I add a NEON path though.
Seer 2.2.0 for Android:
https://app.box.com/s/a3o0qfhe7prd9lulaaesai0tq55sqydo
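To illustrate the auto-vectorization point: the hot loop of NNUE-style inference boils down to integer dot products like the one below, and the _mm_madd_epi16 step is exactly the pattern compilers often fail to find on their own. A hedged sketch, not Seer's actual inference code; on ARM the same source builds through sse2neon:

Code:
#include <emmintrin.h>  // swap for "sse2neon.h" on ARM
#include <cstdint>

// int16 dot product, 8 pairs per iteration. _mm_madd_epi16 multiplies
// adjacent int16 pairs and sums them into four int32 lanes. Assumes n
// is a multiple of 8.
int32_t dot_i16(const int16_t* a, const int16_t* b, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 8) {
        const __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
        const __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
        acc = _mm_add_epi32(acc, _mm_madd_epi16(va, vb));
    }
    int32_t lanes[4];
    _mm_storeu_si128((__m128i*)lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}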
-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: Seer 2.0.0
I've released a new version (v2.3.0) of Seer here: https://github.com/connormcmonigle/seer ... tag/v2.3.0
It should be significantly stronger than the previous release (v2.2.0), testing >= 100 Elo stronger in self play. As with all previous versions >= v2.0.0, Seer is trained on unique data generated solely from Seer's search+EGTB and does not rely on training code, inference code, etc. derived from other engines. Training code and data generation code can be found at https://github.com/connormcmonigle/seer ... e/selfplay.
I've attempted to simplify my naming conventions for binaries. As with the previous release, the SSE3+nopopcnt binary is marked as unofficial and I would prefer that testers refrain from using it for official rating lists/tournaments.
-
- Posts: 1142
- Joined: Thu Dec 28, 2017 4:06 pm
- Location: Argentina
Re: Seer 2.0.0
Awesome release, Connor! Do you have plans to implement ponder and/or FRC?

connor_mcmonigle wrote: ↑Fri Aug 13, 2021 3:27 am
I've released a new version (v2.3.0) of Seer here: https://github.com/connormcmonigle/seer ... tag/v2.3.0
It should be significantly stronger than the previous release (v2.2.0), testing >= 100 Elo stronger in self play. As with all previous versions >= v2.0.0, Seer is trained on unique data generated solely from Seer's search+EGTB and does not rely on training code, inference code, etc. derived from other engines. Training code and data generation code can be found at https://github.com/connormcmonigle/seer ... e/selfplay.
I've attempted to simplify my naming conventions for binaries. As with the previous release, the SSE3+nopopcnt binary is marked as unofficial and I would prefer that testers refrain from using it for official rating lists/tournaments.
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls
-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: Seer 2.0.0
Haha. I think you're the answer to the question of why so many engines have FRC support.

CMCanavessi wrote: ↑Fri Aug 13, 2021 4:00 am
Awesome release, Connor! Do you have plans to implement ponder and/or FRC?

connor_mcmonigle wrote: ↑Fri Aug 13, 2021 3:27 am
I've released a new version (v2.3.0) of Seer here: https://github.com/connormcmonigle/seer ... tag/v2.3.0
It should be significantly stronger than the previous release (v2.2.0), testing >= 100 Elo stronger in self play. As with all previous versions >= v2.0.0, Seer is trained on unique data generated solely from Seer's search+EGTB and does not rely on training code, inference code, etc. derived from other engines. Training code and data generation code can be found at https://github.com/connormcmonigle/seer ... e/selfplay.
I've attempted to simplify my naming conventions for binaries. As with the previous release, the SSE3+nopopcnt binary is marked as unofficial and I would prefer that testers refrain from using it for official rating lists/tournaments.
-
- Posts: 136
- Joined: Wed Aug 15, 2007 12:18 pm
- Location: Munich
Re: Seer 2.0.0
Congratulations! Incredible progress!

connor_mcmonigle wrote: ↑Fri Aug 13, 2021 3:27 am
I've released a new version (v2.3.0) of Seer here: https://github.com/connormcmonigle/seer ... tag/v2.3.0
It should be significantly stronger than the previous release (v2.2.0), testing >= 100 Elo stronger in self play. As with all previous versions >= v2.0.0, Seer is trained on unique data generated solely from Seer's search+EGTB and does not rely on training code, inference code, etc. derived from other engines. Training code and data generation code can be found at https://github.com/connormcmonigle/seer ... e/selfplay.
I've attempted to simplify my naming conventions for binaries. As with the previous release, the SSE3+nopopcnt binary is marked as unofficial and I would prefer that testers refrain from using it for official rating lists/tournaments.
-
- Posts: 368
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: Seer 2.0.0
Had a quick look at Seer 2.3.0, comparing its eval in analysis with the other engines I use (Komodo 8/12, Berserk 4.5.1, Slow 2.6, and I even tried an SF derivative for a minute). Although slower, its evaluation is excellent. I used some closed positions from the French and the Ruy Lopez -- usually there are no problems in the Ruy Lopez, but in the Petrosian variation of the French, engines tend to overestimate White's advantage and to wander a bit. Seer probably does too, but much less, I thought. In some difficult positions I had analysed extensively, it found the "best" move (if it is indeed the best) at very low depths. Its first net a few months ago was sometimes a bit disconcerting, but the previous version (2.2.0) already struck me as very mature, enough to use daily. It seems that Connor's approach was really effective.
I initially thought that the differences among engines would become much smaller with the advent of NNs, but it seems that is not yet the case.
Anyway, for analysing games and repertoire, especially at my level, there is absolutely no need for the very top of the shelf (Seer is already at the "top"), and I even doubt it makes an objective difference. I guess it is easier for a human to "understand" (if that is even possible) an engine's positional assessment when the engine is slower. One has to try the variations anyway, and that takes more time for the human than for the engine, so speed, beyond a certain point, becomes irrelevant. I should add a disclaimer: I personally prefer to use engines with original concepts -- not only as a way to support the authors (although money would be preferable...) who, so often, have only the pleasure of knowing that their creation is used -- but also because, in the short history of computer chess, being different has allowed big jumps in general progress.
I have to add that I did not yet have time to test it on tactical positions, but in combination with SlowChess and Berserk, I guess there will be no mistakes.
Finally, I would be very grateful if the author added some usability options, like multi-PV, if possible.
-
- Posts: 544
- Joined: Sun Sep 06, 2020 4:40 am
- Full name: Connor McMonigle
Re: Seer 2.0.0
Thanks!
Thanks for the kind words. I always find it interesting to see how Seer evaluates (or misevaluates!) positions, as Seer is, to the best of my knowledge, one of only a handful of engines with an evaluation function not trained directly or indirectly on data from any other engine. Most other engines, at some point in their respective histories, trained/tuned their evaluation functions on data originating from Stockfish or some other top engine (such as Zurichess data (Stockfish) or Ethereal data (which itself was initially tuned on Zurichess data)). With a classical evaluation function and many self-play iterations, that initial influence is probably mostly irrelevant anyway, but I still think it's cool. Others besides Seer with this property are SlowChess, Leela and some of Daniel Shawul's networks (though I think the official Scorpio CNN was trained on Lc0 data? I'm not sure).

matejst wrote: ↑Fri Aug 13, 2021 12:27 pm
Had a quick look at Seer 2.3.0, comparing its eval in analysis with the other engines I use (Komodo 8/12, Berserk 4.5.1, Slow 2.6, and I even tried an SF derivative for a minute). Although slower, its evaluation is excellent. I used some closed positions from the French and the Ruy Lopez -- usually there are no problems in the Ruy Lopez, but in the Petrosian variation of the French, engines tend to overestimate White's advantage and to wander a bit. Seer probably does too, but much less, I thought. In some difficult positions I had analysed extensively, it found the "best" move (if it is indeed the best) at very low depths. Its first net a few months ago was sometimes a bit disconcerting, but the previous version (2.2.0) already struck me as very mature, enough to use daily. It seems that Connor's approach was really effective.
I initially thought that the differences among engines would become much smaller with the advent of NNs, but it seems that is not yet the case.
Anyway, for analysing games and repertoire, especially at my level, there is absolutely no need for the very top of the shelf (Seer is already at the "top"), and I even doubt it makes an objective difference. I guess it is easier for a human to "understand" (if that is even possible) an engine's positional assessment when the engine is slower. One has to try the variations anyway, and that takes more time for the human than for the engine, so speed, beyond a certain point, becomes irrelevant. I should add a disclaimer: I personally prefer to use engines with original concepts -- not only as a way to support the authors (although money would be preferable...) who, so often, have only the pleasure of knowing that their creation is used -- but also because, in the short history of computer chess, being different has allowed big jumps in general progress.
I have to add that I did not yet have time to test it on tactical positions, but in combination with SlowChess and Berserk, I guess there will be no mistakes.
Finally, I would be very grateful if the author added some usability options, like multi-PV, if possible.
As for usability enhancements, I plan to add proper mate score reporting, Syzygy probing code, and maybe MultiPV support (I'm still a little undecided here, as it would require some extra complexity; I might try a different approach where one thread is used per line, since MultiPV is only used for analysis anyway).
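For readers wondering what "one thread per line" could look like in practice, here is a purely hypothetical sketch (the Move, Position and search types are stubs, and none of this is Seer's code): each candidate root move is searched in its own thread and reported as one scored PV, with a final sort producing the ranked lines.

Code:
#include <algorithm>
#include <thread>
#include <utility>
#include <vector>

struct Move { int from, to; };                                    // stub
struct Position { Position after(Move) const { return *this; } }; // stub

// Stub search: a real engine would run its normal single-PV search here
// and return (score, principal variation) for the given position.
std::pair<int, std::vector<Move>> search(const Position&, int /*depth*/) {
    return {0, {}};
}

struct Line { Move root_move; int score; std::vector<Move> pv; };

std::vector<Line> analyse_multipv(const Position& root,
                                  const std::vector<Move>& root_moves,
                                  int depth) {
    std::vector<Line> lines(root_moves.size());
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < root_moves.size(); ++i) {
        workers.emplace_back([&, i] {
            auto [score, pv] = search(root.after(root_moves[i]), depth - 1);
            pv.insert(pv.begin(), root_moves[i]);
            lines[i] = {root_moves[i], -score, std::move(pv)};  // negamax flip
        });
    }
    for (auto& w : workers) w.join();
    // Best line first: index 0 is multipv 1, index 1 is multipv 2, ...
    std::sort(lines.begin(), lines.end(),
              [](const Line& a, const Line& b) { return a.score > b.score; });
    return lines;
}

Compared with a conventional MultiPV loop at the root, this trades some search efficiency for simplicity: each thread can reuse the unmodified single-PV search, which is arguably an acceptable cost given that MultiPV is only used for analysis.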