
Re: Poor man's neurones

Posted: Thu Feb 28, 2019 8:06 pm
by tomitank
PK wrote: Thu Feb 28, 2019 6:46 pm I have found a working algorithm. I'm not claiming that it is any better than a real neural network, and it is inefficient from a traditional point of view, as the engine maintains twice as many piece/square tables. However, it does not cause too big a slowdown within a traditional evaluation function, and it is not a black box (one can tell what it is doing). I am going to write more about it in the next couple of months; here is an overview:

1. Two competing parameters describing the same aspect of the evaluated chess position are needed. They will be called hypothesis1 and hypothesis2. It helps if these scores are large; therefore, in the first experiment two different sets of piece/square table scores have been used.

2. Each of the hypotheses has an initial weight associated with it: percentage1 and percentage2. The weights need not be equal or sum to 100.

3. We calculate the difference between the two hypotheses, henceforth called delta:

delta = std::abs(hypothesis1 - hypothesis2);

4. The value of delta is used to calculate the shift factor. The first formula that worked was:

shift = std::min(std::sqrt((double)delta), 20.0);

It is possible that a sigmoid or logarithmic function will work better. All that is needed is a reasonable way of using delta to arrive at a value not greater than either of the percentages.

5. We apply the shift as follows:

if (hypothesis1 > hypothesis2)
    rebalanced = hypothesis1 * (percentage1 - shift)
               + hypothesis2 * (percentage2 + shift);
if (hypothesis1 < hypothesis2)
    rebalanced = hypothesis1 * (percentage1 + shift)
               + hypothesis2 * (percentage2 - shift);

Note that in the example code the hypothesis that proposes the higher score is trusted less, and its weight is diminished accordingly. This approach has a certain philosophical appeal (the evaluation function is rewarded for not trusting itself), but it should not be treated as an inherent property of the algorithm. In fact, my later tests with two sets of mobility values ended up with the engine giving precedence to the higher-scoring set of values.
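The five steps above can be put together as a single C++ function. This is a minimal reconstruction, not PK's actual code: the function name, the integer centipawn scale, and the final division by 100 to undo the percentage weighting are all assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>

// Sketch of the rebalancing algorithm from steps 1-5.
// Assumptions: scores are centipawn-style integers, and the weighted sum
// is scaled back down by 100 at the end (the post does not spell this out).
int Rebalance(int hypothesis1, int hypothesis2,
              int percentage1, int percentage2) {
    int delta = std::abs(hypothesis1 - hypothesis2);          // step 3
    double shift = std::min(std::sqrt((double)delta), 20.0);  // step 4

    double rebalanced;
    if (hypothesis1 > hypothesis2)        // higher score is trusted less
        rebalanced = hypothesis1 * (percentage1 - shift)
                   + hypothesis2 * (percentage2 + shift);
    else if (hypothesis1 < hypothesis2)
        rebalanced = hypothesis1 * (percentage1 + shift)
                   + hypothesis2 * (percentage2 - shift);
    else                                  // equal hypotheses: nothing to shift
        rebalanced = hypothesis1 * (percentage1 + percentage2);

    return (int)(rebalanced / 100.0);     // back to the centipawn scale
}
```

With the 75/25 weights used in the experiment below, Rebalance(100, 0, 75, 25) gives delta = 100 and shift = 10, so ten percentage points move away from the higher-scoring hypothesis.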

The first experiment used scores obtained from two distinctly different sets of piece-square tables: one tuned with the Texel tuning method, the other hand-made. Preferring tuning to human intuition, I gave the Texel-tuned set a 75% weight, whereas the handmade set got 25%. I used these two sets of values in my private chess engine, rated probably around 2900 Elo, getting a small positive score for the version with rebalancing despite the slowdown caused by calculating two competing piece/square table scores.

Next, I ran a match between two versions of my program using the rebalancing algorithm: one with the formula shown above, the other increasing the shift factor to:

shift = std::min(std::sqrt((5.0 * delta) / 4.0), 25.0);

This modification won a 500-game match with a 54% score, which is rather high for changing a single formula.

The third experiment was to play 1000-game matches against my open-source engine Rodent III 0.275. The scores were 38.1% for the version without rebalancing and 39.8% for the version with it.
I don't fully understand, but I think you make it unnecessarily complicated.
The Texel method is not the best. Batch gradient descent (logistic regression) is much better, and you need an L2 penalty.

The algorithm is not difficult. The learning database (the examples) is more important.

Do not forget: there's no perfect learning.
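For what tomitank seems to have in mind, here is a minimal sketch of one batch gradient-descent step on a logistic (cross-entropy) loss with an L2 penalty. Everything here is illustrative rather than anyone's actual tuner: the feature representation, the sigmoid scaling constant, and the learning-rate and lambda defaults are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy batch gradient descent with L2 regularization for eval tuning.
// eval = dot(weights, features); the sigmoid maps the score to an
// expected game result in [0, 1], as in Texel-style tuning.
struct Position {
    std::vector<double> features;  // e.g. material counts, PST terms
    double result;                 // 1.0 win, 0.5 draw, 0.0 loss
};

static double Sigmoid(double score, double k = 1.0 / 400.0) {
    return 1.0 / (1.0 + std::exp(-k * score));
}

// One full-batch step on the cross-entropy loss plus an L2 penalty.
void GradientStep(std::vector<double>& w, const std::vector<Position>& data,
                  double lr = 1.0, double lambda = 0.001) {
    std::vector<double> grad(w.size(), 0.0);
    for (const Position& p : data) {
        double score = 0.0;
        for (std::size_t i = 0; i < w.size(); ++i)
            score += w[i] * p.features[i];
        double err = Sigmoid(score) - p.result;   // prediction error
        for (std::size_t i = 0; i < w.size(); ++i)
            grad[i] += err * p.features[i];
    }
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] -= lr * (grad[i] / data.size() + lambda * w[i]);  // L2 shrinkage
}
```

The lambda * w[i] term is what tomitank's L2 penalty adds: it pulls every weight towards zero on each step, which discourages overfitting to the learning database.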

Re: Poor man's neurones

Posted: Thu Feb 28, 2019 9:26 pm
by Michael Sherwin
I understand the appeal of NN in chess. It is theoretically powerful, theoretically human-brain-like, and proven to be very competitive with the top engines, which themselves do not take advantage of reinforcement learning. However, I do not believe NN is optimal, because of its speed and its need for additional, expensive hardware. And the fact that NN is beyond what most hobby engine authors are capable of.

So I have been pushing a truly poor man's NN equivalent. It has no NN, but the idea is similar. I don't think anyone understands its potential. I want to try it myself, but my brain is not sharp enough anymore. However, I began with this idea back in the 80386 and CM5000 days, whenever that was. That is when I first attempted to teach myself programming. I wrote a simple material-only searcher that did not have castling, e.p. or pawn promotion. It worked, but of course it did not play well. I was so happy that I got that far. But then I wanted it to play more intelligently, and at that time I did not know of any open source engines and had no idea how to write an evaluation function. I got alpha-beta out of an AI book, and it was not chess specific. And I was impatient for results. So I tried the first thing I thought of. I counted all the AB cutoffs for each side in the tree above each root move and created a simple ratio of all computer cutoffs divided by all opponent cutoffs. Then I selected the root move with the best material score + its ratio.
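The cutoff-counting idea can be reconstructed roughly as follows, on an abstract game tree rather than a real chess engine. The node structure, the way cutoffs are attributed to the two sides, and the exact blending of material score with the ratio are all guesses at the described scheme, not Michael's lost code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy game tree: a node is a leaf with a material score, or an internal
// node with children. During alpha-beta we count beta cutoffs separately
// for each side; a root move is then ranked by
//     material score + cutoffs-for-us / cutoffs-for-opponent.
struct Node {
    int score;                   // material score (used at leaves)
    std::vector<Node> children;  // empty => leaf
};

struct CutoffCount { long us = 0, them = 0; };

// Plain fail-hard negamax alpha-beta; `ourSide` marks whose node this is,
// so each beta cutoff is credited to the side that achieved it.
int AlphaBeta(const Node& n, int alpha, int beta, bool ourSide,
              CutoffCount& cnt) {
    if (n.children.empty()) return n.score;
    for (const Node& child : n.children) {
        int v = -AlphaBeta(child, -beta, -alpha, !ourSide, cnt);
        if (v >= beta) {                    // beta cutoff
            (ourSide ? cnt.us : cnt.them)++;
            return beta;
        }
        alpha = std::max(alpha, v);
    }
    return alpha;
}

// Pick the root move with the best material score plus cutoff ratio.
int BestRootMove(const Node& root) {
    int best = -1;
    double bestVal = -1e9;
    for (std::size_t m = 0; m < root.children.size(); ++m) {
        CutoffCount cnt;
        int material = -AlphaBeta(root.children[m], -30000, 30000,
                                  /*ourSide=*/false, cnt);
        double ratio = (double)cnt.us / (double)(cnt.them + 1);  // avoid /0
        double val = material + ratio;
        if (val > bestVal) { bestVal = val; best = (int)m; }
    }
    return best;
}
```

On a material-only searcher the extra bookkeeping is just two counters per root move, which is why this runs at essentially the same node rate.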

I was amazed at its play. It opened e2e4 instead of a2a3. It loved to create pawn chains. It developed its pieces to active squares. It loved to threaten mate. And while its play was very logical at times it was often surreal. It even seemed to understand king safety to a degree. And it was not hung up on rule of thumb ideas like centralization unless it was good. Then I tested it against CM5000 in positions where castling was disabled. It never finished a complete game because it did not understand e.p. or promotion but it did get several winning positions.

I also added checkmates into the count at some point. The biggest problem was that it would always exchange when the material score was equal to that of a non-exchange. So I tried modifying the ratio for exchanges. I remember some progress, but that is when my life went topsy-turvy and I had to stop working on it. I lost the source code in a hard drive failure. When I resumed programming chess in about 2004 I was determined to write a more traditional engine, and RomiChess was the result in 2005. In 2007 my mom came down with Alzheimer's; she was widowed at the end of 2007 and could not stay at home by herself, and since I was an only child it fell upon me to take care of her. All chess development stopped. By 2011 my health was going down fast from the stress and long hours of taking care of my mom. I have not recovered from that, and besides, I'm 61 now, and programming was never easy for me in the first place. So I am just not mentally or physically up to the task of duplicating my previous experiment.

What is the idea behind the idea? Why does it work? How could such a thin idea ever compete with CM5000? There is a lot of information in that simple ratio. Mobility. Probability of checkmate. Probability of losing material. Maximising oneself while minimising the opponent. Control of space. Avoiding hanging pieces, not because they ultimately lose material but because material is lost more often during the search. Weak square complexes, especially around the king, because they lead to worse material and more checkmates. And probably more. And yet it is just the beginning of a statistics-driven search (SDS). The bottom line, though, is that the ratio is akin to the probability score of an NN. And it runs virtually at the node rate of a material-only searcher.

And yet it seems like I can interest no one in trying this very simple idea that would take some authors a week or less to test.

Re: Poor man's neurones

Posted: Fri Mar 01, 2019 9:27 am
by mclane
Thanks for your very emotional text.
Yes, I also think there is a way to do chess differently than this Stockfish source code.
You may remember that Chris Whittington tried a different way, by teaching the engine about king attacks and mate nets and trying to pull the opponent into TAL areas of the search tree where the normal programs of those days would not be capable of having an overview.
IMO the next step would have been to create a plan on the basis of this knowledge,
and let the engine try to shuffle pieces to follow this plan.
IMO this would be intelligent, more intelligent than computing millions of nodes and searching 40 plies deep.

Re: Poor man's neurones

Posted: Fri Mar 01, 2019 1:07 pm
by mclane
Another idea was Gandalf 2.1, at that time a DOS program with a GUI.
It had a 1-ply search + extensions.
Early dedicated chess computers were B-strategy.

Re: Poor man's neurones

Posted: Fri Mar 01, 2019 4:06 pm
by Michael Sherwin
mclane wrote: Fri Mar 01, 2019 9:27 am Thanks for your very emotional text.
Yes, I also think there is a way to do chess differently than this Stockfish source code.
You may remember that Chris Whittington tried a different way, by teaching the engine about king attacks and mate nets and trying to pull the opponent into TAL areas of the search tree where the normal programs of those days would not be capable of having an overview.
IMO the next step would have been to create a plan on the basis of this knowledge,
and let the engine try to shuffle pieces to follow this plan.
IMO this would be intelligent, more intelligent than computing millions of nodes and searching 40 plies deep.
I get your point, and thanks for the kind response. What I described was only the very beginning of an idea. I just described what can be done at the root moves. However, every non-leaf node is the root of the tree above it. Therefore the full search space can be intelligently searched based on statistical data. That is the bigger picture when it comes to an SDS search. In "SF" code the two bishops are given a bonus. They are given that bonus whether it is deserved or not. In an SDS search, whether the two bishops are actually a benefit will show up in the statistics. In other words, the two bishops are evaluated correctly without ever explicitly considering them. It is a way of getting at the truth while avoiding the use of stereotypical human thinking, which at best is a best guess with tons of exceptions. And after a few years of development I imagine that SDS will also be able to generate "plans", but that is just an assumption, as I have not given it much thought.

Re: Poor man's neurones

Posted: Sat Mar 02, 2019 12:27 am
by PK
What I do is precisely about planning. My private engine has two competing scores for piece/square tables (one symmetric, one slanted towards the kingside), and two mobility scores (one linear, one non-linear). The algorithm I described calculates the difference between the two scores and tries to decide which of them is right. Somehow, in the case of piece/square tables it gravitates towards the set of parameters that gives the worse evaluation (distrusting the better one), and in the case of mobility it gives a boost to the more optimistic evaluation. The preference obviously depends on the set of numbers my engine happens to use. Yet the act of choosing which set of parameters is to be trusted - this is about as close to planning as one can get (except for neural networks).

Re: Poor man's neurones

Posted: Sun Mar 03, 2019 5:59 pm
by mclane
Any step forward to planning is a step into a higher level of chess.