Re: Poor man's neurones
Posted: Thu Feb 28, 2019 8:06 pm
I don't fully understand, but I think you make it unnecessarily complicated.

PK wrote: ↑Thu Feb 28, 2019 6:46 pm
I have found a working algorithm. I'm not claiming that it is any better than a real neural network, and it is inefficient from a traditional point of view, as the engine maintains twice as many piece/square tables. However, it does not cause too big a slowdown within a traditional evaluation function, and it is not a black box (one can tell what it is doing). I am going to write more about it in the next couple of months; here comes the overview:
1. Two competing parameters describing the same aspect of the evaluated chess position are needed. They will be called hypothesis1 and hypothesis2. It helps if these scores are large, so in the first experiment two different sets of piece/square table scores were used.
2. Each hypothesis has an initial weight associated with it: percentage1 and percentage2. The weights need not be equal or sum to 100.
3. We calculate the difference between the two hypotheses, henceforth called delta:
delta = std::abs(hypothesis1 - hypothesis2);
4. The value of delta is used to calculate the shift factor. The first formula that worked was:
shift = std::min(std::sqrt(static_cast<double>(delta)), 20.0);
(Both arguments of std::min must have the same type, hence the 20.0 rather than 20.) It is possible that a sigmoid or logarithmic function will work better. All that is needed is a reasonable way of using delta to arrive at a value not greater than either of the percentages.
5. We apply the shift as follows:
if (hypothesis1 > hypothesis2)
    rebalanced = hypothesis1 * (percentage1 - shift)
               + hypothesis2 * (percentage2 + shift);
else if (hypothesis1 < hypothesis2)
    rebalanced = hypothesis1 * (percentage1 + shift)
               + hypothesis2 * (percentage2 - shift);
Note that in the example code the hypothesis that proposes the higher score is trusted less, and its score is diminished accordingly. This approach has a certain philosophical appeal (the evaluation function would be rewarded for not trusting itself), but it should not be treated as an inherent property of the algorithm. In fact, my later tests with two sets of mobility values ended up with the engine giving precedence to the higher-scoring set of values.
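The steps above can be collected into one small function. This is only a sketch of what the post describes: the function and parameter names mirror the post, but the final division by 100 (to bring the weighted sum back to the scale of a single evaluation term) is my assumption, since the post leaves the normalization implicit.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>

// Sketch of the rebalancing step described in the post.
// hypothesis1/hypothesis2 are two competing scores for the same
// positional feature; percentage1/percentage2 are their weights.
int Rebalance(int hypothesis1, int hypothesis2,
              double percentage1, double percentage2) {
    int delta = std::abs(hypothesis1 - hypothesis2);
    // Capped square-root shift, the first formula that worked.
    double shift = std::min(std::sqrt(static_cast<double>(delta)), 20.0);

    double rebalanced;
    if (hypothesis1 > hypothesis2)        // trust the higher score less
        rebalanced = hypothesis1 * (percentage1 - shift)
                   + hypothesis2 * (percentage2 + shift);
    else if (hypothesis1 < hypothesis2)
        rebalanced = hypothesis1 * (percentage1 + shift)
                   + hypothesis2 * (percentage2 - shift);
    else                                  // equal hypotheses: nothing to shift
        rebalanced = hypothesis1 * (percentage1 + percentage2);

    // Assumed normalization: weights are percentages, so divide by 100.
    return static_cast<int>(rebalanced / 100.0);
}
```

Swapping `std::sqrt` for a capped logarithm or a sigmoid is exactly the kind of variation the post speculates about; only the capped square root was actually tested.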
The first experiment used scores obtained from two distinctly different sets of piece/square tables: one tuned with the Texel tuning method, the other hand-made. Preferring tuning to human intuition, I gave the Texel-tuned set a 75% weight, whereas the hand-made set got 25%. I used these two sets of values in my private chess engine, rated probably around 2900 Elo, and got a small positive score for the version with rebalancing despite the slowdown caused by calculating two competing piece/square table scores.
Next, I ran a match between two versions of my program using the rebalancing algorithm: one using the formula shown above, the other increasing the shift factor to:
shift = std::min(std::sqrt((5.0 * delta) / 4.0), 25.0);
This modification won a 500-game match with a 54% score, which is a rather high gain for changing a single formula.
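To put that 54% in perspective, a match score can be converted to an approximate Elo difference with the standard logistic model; this is a well-known formula, not something from the post:

```cpp
#include <cmath>

// Approximate Elo difference implied by a match score s in (0, 1),
// using the standard logistic model: Elo = -400 * log10(1/s - 1).
double EloFromScore(double s) {
    return -400.0 * std::log10(1.0 / s - 1.0);
}
```

A 54% score corresponds to roughly +28 Elo, which is indeed a large gain for a one-formula change (though with only 500 games the error bars are wide).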
The third experiment was to play 1000-game matches against my open-source engine Rodent III 0.275. The scores were 38.1% for the version without rebalancing and 39.8% for the version with it.
The Texel method is not the best. Batch gradient descent (logistic regression) is much better, and you need an L2 penalty.
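For reference, here is a toy sketch of what batch gradient descent for logistic regression with an L2 penalty looks like when tuning evaluation weights against game results. Everything here (struct names, hyperparameters, the single-feature test data) is invented for illustration and is not from the post:

```cpp
#include <cmath>
#include <vector>

// One training position: feature values (e.g. per-term evaluation
// components) and the game result from the side to move's view.
struct Sample {
    std::vector<double> features;
    double result;  // 1 = win, 0.5 = draw, 0 = loss
};

// Batch gradient descent on the logistic (cross-entropy) loss with an
// L2 penalty of strength lambda. Returns the tuned weight vector.
std::vector<double> Tune(const std::vector<Sample>& data, int n_features,
                         double lr, double lambda, int epochs) {
    std::vector<double> w(n_features, 0.0);
    for (int e = 0; e < epochs; ++e) {
        std::vector<double> grad(n_features, 0.0);
        for (const Sample& s : data) {
            double z = 0.0;
            for (int i = 0; i < n_features; ++i)
                z += w[i] * s.features[i];
            double p = 1.0 / (1.0 + std::exp(-z));  // sigmoid prediction
            for (int i = 0; i < n_features; ++i)
                grad[i] += (p - s.result) * s.features[i];
        }
        for (int i = 0; i < n_features; ++i) {
            // Average the batch gradient and add the L2 term.
            grad[i] = grad[i] / data.size() + lambda * w[i];
            w[i] -= lr * grad[i];
        }
    }
    return w;
}
```

The L2 term (`lambda * w[i]`) shrinks weights toward zero, which is what keeps a tuned evaluation from over-fitting the training positions.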
The algorithm is not difficult; the learning database (the examples) is more important.
Do not forget: there is no perfect learning.