Testing New Versions all the time vs a new engine revolution

supersharp77
Posts: 891
Joined: Sat Jul 05, 2014 5:54 am
Location: Southwest USA

Re: Testing New Versions all the time vs a new engine revolu

Post by supersharp77 » Mon Jan 26, 2015 2:55 am

Yes, the original poster poses the most difficult of questions: he wants the most pleasing playing style and a strength increase as well, a virtually impossible proposition given the strength of today's 'top end' engines. To achieve what the original poster wishes, the search criteria must change, the engine's opening choices must be limited, and the width-vs-depth adjustments must somehow be balanced to give a pleasing style without sacrificing overall strength (early, middle, or late), which may be 100% impossible. But that is what the human market wants. A tough assignment indeed!! :)

TShackel
Posts: 313
Joined: Fri Apr 04, 2014 10:09 pm
Location: Neenah, WI, United States

Re: Testing New Versions all the time vs a new engine revolu

Post by TShackel » Mon Jan 26, 2015 2:59 am

cdani wrote:Tricking the search is very tempting, and a lot easier. You will see it again in the new version of Andscacs I will publish :-)

Anyway, I advocate for some evolution in eval. For example, a knight outpost is better if it is related to something more: maybe a king attack, maybe a weak pawn, maybe an open line... The same goes for the bishop pair, and for an open or semi-open file: better if related to something more.

Maybe it will not be that complicated to do what I propose, or other ideas. I will try when I reach what I consider some minimums in Andscacs. Or maybe someone will take a similar path.
Hi. Oh, I can definitely see that it could be easier to focus on search, and I can't say I blame you for thinking that. It's true that selectivity in search (not looking at all moves equally deeply, only the relevant ones) is important for making a good program, and it's natural to focus on it because it's more straightforward and doesn't require chess thinking. It's not easy to improve Elo through smart evaluation changes alone, as a strong player like Larry Kaufman has suggested. But I really think it would be good for computer chess to take that approach.
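As an aside, one common form of that selectivity is late move reductions (LMR): moves late in the ordered list are first searched a ply shallower, and only re-searched at full depth if they beat alpha. Here is a minimal sketch on a toy "game" where a position is just an integer and a move adds a value to it; all names, move values, and thresholds are invented for illustration, not taken from any real engine:

```python
MOVES = [5, -2, 7, 1, -4, 3]   # toy move list, assumed already ordered best-first

def evaluate(pos):
    return pos                  # toy static eval: the position value itself

def negamax(pos, depth, alpha, beta):
    if depth == 0:
        return evaluate(pos)
    best = -10**9
    for i, mv in enumerate(MOVES):
        child = pos + mv
        # Late move reduction: moves far down the list get one ply less.
        reduction = 1 if (i >= 3 and depth >= 2) else 0
        score = -negamax(child, depth - 1 - reduction, -beta, -alpha)
        if reduction and score > alpha:
            # A reduced move surprised us: re-search it at full depth.
            score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:       # beta cutoff
            break
    return best
```

The point of the re-search step is that the reduction only spends less time on moves that look uninteresting; any move that turns out to matter still gets the full-depth treatment.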

I know that Stockfish must have a fantastic evaluation function after seeing how it plays; many of its games are beautiful. But when I peek at the development builds, I see that most of the improvements are minor tweaks, and only rarely do evaluation improvements lead to Elo increases. I did see some on king safety, like Leto Atreides mentioned, but not many.

Hopefully you find a way to do this with Andscacs.

Tim.

asanjuan
Posts: 211
Joined: Thu Sep 01, 2011 3:38 pm
Location: Seville, Spain

Re: Testing New Versions all the time vs a new engine revolu

Post by asanjuan » Tue Jan 27, 2015 4:35 pm

Play against Rhetoric 1.4.1 and you'll have an engine with a human style, but that human style is based on errors. Rhetoric will lose a lot of otherwise entertaining games if the opponent plays perfectly.

Perfect play is less speculative and more objective, and so, boring.

SMIRF
Posts: 91
Joined: Wed Mar 26, 2014 3:29 pm
Location: Buettelborn/Hessen/Germany

Re: Testing New Versions all the time vs a new engine revolu

Post by SMIRF » Tue Jan 27, 2015 8:23 pm

If you want to see something really new, it has to be genuine. But that implies that it would not be as strong as all those common copycats.

As for me, I am very slowly working on a different new engine, incorporating some experience from my first one. Here I am interested in a new problem representation that is colorless in its piece encoding and thus absolutely symmetric for both players, especially in swapped positions.

The first goal is a reliable, fast move generator that produces legal moves only, with full information such as associated check threats.
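That first goal can be sketched in miniature: generate pseudo-legal moves, filter out those that leave one's own king attacked, and tag each survivor with whether it gives check. The sketch below uses kings and rooks only, plain Python rather than anything engine-grade, and every name in it is invented; it is not SMIRF's actual design:

```python
# Toy "legal moves only, with check information" generator.
KING_STEPS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
ROOK_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def attacks(board, color, target):
    """Does any piece of `color` attack square `target`?"""
    for sq, (c, piece) in board.items():
        if c != color:
            continue
        if piece == 'K':
            if max(abs(sq[0] - target[0]), abs(sq[1] - target[1])) == 1:
                return True
        else:  # rook: slide until blocked
            for dx, dy in ROOK_DIRS:
                x, y = sq[0] + dx, sq[1] + dy
                while 0 <= x < 8 and 0 <= y < 8:
                    if (x, y) == target:
                        return True
                    if (x, y) in board:
                        break
                    x, y = x + dx, y + dy
    return False

def pseudo_moves(board, color):
    """Pseudo-legal moves: geometry only, king safety not yet checked."""
    for sq, (c, piece) in list(board.items()):
        if c != color:
            continue
        for dx, dy in (KING_STEPS if piece == 'K' else ROOK_DIRS):
            x, y = sq[0] + dx, sq[1] + dy
            while 0 <= x < 8 and 0 <= y < 8:
                occ = board.get((x, y))
                if occ is None or occ[0] != color:
                    yield sq, (x, y)
                if occ is not None or piece == 'K':
                    break  # blocked, or king only steps once
                x, y = x + dx, y + dy

def legal_moves(board, color):
    """Legal moves as (from, to, gives_check) triples."""
    enemy = 'b' if color == 'w' else 'w'
    out = []
    for frm, to in pseudo_moves(board, color):
        nb = dict(board)
        nb[to] = nb.pop(frm)  # make the move on a copy
        own_king = next(sq for sq, p in nb.items() if p == (color, 'K'))
        if attacks(nb, enemy, own_king):
            continue          # illegal: own king left in check (e.g. a pinned piece moved)
        enemy_king = next(sq for sq, p in nb.items() if p == (enemy, 'K'))
        out.append((frm, to, attacks(nb, color, enemy_king)))
    return out
```

A real generator would of course avoid the copy-and-probe legality test for speed, but the make/test/annotate structure is the same idea.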

All of this is done without any chance of creating a new top engine.

carldaman
Posts: 1952
Joined: Sat Jun 02, 2012 12:13 am

Re: Testing New Versions all the time vs a new engine revolu

Post by carldaman » Tue Jan 27, 2015 10:45 pm

supersharp77 wrote:Yes, the original poster poses the most difficult of questions: he wants the most pleasing playing style and a strength increase as well, a virtually impossible proposition given the strength of today's 'top end' engines. To achieve what the original poster wishes, the search criteria must change, the engine's opening choices must be limited, and the width-vs-depth adjustments must somehow be balanced to give a pleasing style without sacrificing overall strength (early, middle, or late), which may be 100% impossible. But that is what the human market wants. A tough assignment indeed!! :)
I'll settle any day for an engine that can play like a human, regardless of strength (well, not too weak, hopefully). It is indeed rare to be able to strengthen an engine while also enhancing its style.

I think Stockfish actually plays a darn good game for a top engine. Then, if you want to look lower, but still above 2700 (pretty strong, anyway ;) ), Rhetoric 1.4.1 is stylistically an amazing engine for its level, and its Material setting can be further tweaked.

Regards,
CL

asanjuan
Posts: 211
Joined: Thu Sep 01, 2011 3:38 pm
Location: Seville, Spain

Re: Testing New Versions all the time vs a new engine revolu

Post by asanjuan » Wed Jan 28, 2015 12:35 pm

cdani wrote:
Leto wrote:I'm not; I'm enjoying every moment. I don't see these Elo improvements as dumb; search improvements are very important. Finding the correct move a minute sooner can be the difference between winning and drawing, and sometimes losing.

Search improvements are of course not the only thing they've done; Stockfish, for example, recently improved its king safety, and for Stockfish 7 they are planning to improve its Syzygy implementation, which should increase its endgame accuracy.
Sure. But this is like extending an existing technology, not creating a new one. Tim is advocating for the latter.
Trying to improve an engine through evaluation alone is hard work.
Note that every move produced by an engine is based on the probability of winning the game, as implemented via the evaluation function. Evaluation can only measure the static elements of a given position, and it is (and always will be) imperfect.

We have different techniques for tuning our evaluation functions, but there will always be holes and missing knowledge, and so the horizon effect is always present.
This is why working on search can lead to faster Elo improvements: the hidden and missing factors are covered by the search reaching one ply more.

So the question is: what knowledge should we implement?

The state of the art in top-engine evaluation today covers the general aspects of the game, and with a lot of testing the developers have reached a good balance between risk and reward, leading to super-GM play.

When we add a new piece of knowledge to the evaluation function, we must make sure the extra computing time pays off. So again, what knowledge should we implement?

The only way is trial and error: testing and more testing... unless we can build a technology that discovers new patterns and new knowledge automatically from existing games, so that it could act as a grandmaster for us, saying things like "hey, you must improve the R vs PPP endgame".

We are still struggling with that.
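A limited form of the "learn from games" half of this does exist: Texel-style tuning fits evaluation weights so that a logistic function of the eval predicts game results. It discovers better weights rather than new patterns, but it is automatic. A toy sketch, with invented features (say, material and mobility differences) and a tiny invented data set, plain gradient descent standing in for any serious optimizer:

```python
import math

# Each entry: (feature vector, game result for White: 1, 0.5, or 0).
DATA = [
    ([3, 1], 1.0),
    ([1, 0], 0.5),
    ([-2, -1], 0.0),
    ([0, 2], 1.0),
    ([0, -2], 0.0),
]

K = 1.0  # scaling constant mapping eval to expected score

def predict(weights, feats):
    ev = sum(w * f for w, f in zip(weights, feats))
    return 1.0 / (1.0 + math.exp(-K * ev))   # logistic: eval -> expected score

def mse(weights):
    return sum((r - predict(weights, f)) ** 2 for f, r in DATA) / len(DATA)

def tune(weights, lr=0.1, steps=200):
    """Gradient descent on the squared prediction error."""
    for _ in range(steps):
        grads = [0.0] * len(weights)
        for feats, result in DATA:
            p = predict(weights, feats)
            err = (p - result) * p * (1 - p) * K   # chain rule through the logistic
            for i, f in enumerate(feats):
                grads[i] += 2 * err * f / len(DATA)
        weights = [w - lr * g for w, g in zip(weights, grads)]
    return weights
```

Real tuning runs over millions of positions extracted from games; the principle, though, is exactly this: let the results of existing games pull the weights toward whatever the games say matters.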

Regards.
