That's an interesting concept.
smatovic wrote: ↑Fri Jan 22, 2021 7:23 am
I see, this seems to be about competition, tournaments and rating lists, what is allowed and what not. I am not in the 3000+ Elo ballpark and currently not active as a chess programmer, so I guess it is up to the top programmers, the organizers and the audience to define what kind of competition is desired and fair. But I wish to point out that by restricting competition in this way, you also restrict the development process: imagine someone comes up with a non-standard, non-SF search algorithm which offers new features but uses the SF-NNUE implementation and networks. Such an engine would offer something new and innovative in combination with non-original work, yet it would not be allowed to participate in tournaments and rating lists. Take Ceres as a current example, the alternative implementation of Lc0: it uses the Lc0 backends and networks but has an alternative implementation of search. According to your statements, such an engine would not be allowed to participate in tournaments and rating lists. Is this desired?
--
Srdja
In my humble opinion, permitting a large number of engines, all using an evaluation functionally identical to that found in Stockfish, to flood rating lists would lead to even greater stagnation in innovation. Were the copying of Stockfish's evaluation function to become widely accepted by tournaments and rating lists, any new engine developer would be foolish to use anything other than Stockfish's evaluation function for position evaluation in their engine. There is less motivation to explore your own solutions and novel ideas when you can simply copy the best in the world, and diversity is, to an extent, good for evolutionary progress.
This isn't necessarily counter to your point. If the goal of the uniqueness constraints that rating lists and tournaments impose on participants is understood to be maximizing progress in computer chess, then there obviously exists some optimum between the extremes of overly restrictive and overly permissive (only one participant may use alpha-beta search vs. every participant is a verbatim copy of Stockfish).
My estimate is that allowing the copying of Stockfish's evaluation function pushes us much too far toward the permissive end, if progress in computer chess is the end goal.