I would like to try this in the next version of satana, using the random move generator to collect statistics... but I'm afraid that dott. Hyatt surely already tried it in the '60s, before I was born.

stegemma wrote: Has anybody tried the idea of multiplying parameters instead of adding them?

Multiplication is the same as adding the logarithms, so I think it only changes the scale of representation and not the relative ordering of positions, unless you want to mix the operators. In a way we are already comparing probabilities, except that we silently map them onto a piece-value scale using the (inverse of the) logistic function. In the range where games get won or lost (say, -2p to +2p), this mapping is almost linear. Outside that range it really doesn't matter what you do.
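Here is a minimal sketch of that mapping (my own illustration, not code from any engine in this thread); the logistic form and the scale constant K are assumptions, chosen only to show how nearly linear the curve is between -2 and +2 pawns:

[code]
// Map a pawn-scale evaluation to an estimated win probability with a
// logistic curve, and print a few points. K is an illustrative guess.
#include <cmath>
#include <cstdio>

double win_probability(double eval_pawns, double K = 1.5) {
    return 1.0 / (1.0 + std::exp(-eval_pawns / K));
}

int main() {
    for (double e = -4.0; e <= 4.0; e += 1.0)
        std::printf("eval %+4.1f pawns -> P(win) ~ %.3f\n", e, win_probability(e));
    return 0;
}
[/code]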
hgm wrote: Well, multiplying numbers is equivalent to adding their logarithms. So it isn't really different; you just have to apply some logarithmic (i.e. saturating) correction to the evaluation. Not that it matters very much, though, as any monotonic mapping of the scores will eventually give the same search. It only matters if you compare scores that have a different mapping, or apply a fixed additive delayed-loss bonus to them. This actually makes some silly behavior (like sacrificing a Rook on ply 1 to avoid the loss of a Queen on ply 20) go away.

Ah, OK, that's right, I hadn't thought of it from this point of view. Maybe it could be done if you start from the probability of a win, computed at runtime for each parameter in the actual position. It is an experiment that I want to do, but only hell knows where it leads... and hell is the house of satana!
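A tiny sketch of the equivalence hgm describes (the per-feature factors below are made-up numbers, purely for demonstration): multiplying the factors or summing their logarithms gives the same result.

[code]
// Multiplicative evaluation versus the equivalent additive evaluation
// in log space. The factors are invented example values.
#include <cmath>
#include <cstdio>

int main() {
    double factors[] = {1.10, 0.95, 1.03};   // e.g. passed pawn, doubled pawn, ...

    double product = 1.0, log_sum = 0.0;
    for (double f : factors) {
        product *= f;            // multiply the factors directly
        log_sum += std::log(f);  // or add their logarithms
    }

    std::printf("product      = %.6f\n", product);
    std::printf("exp(log_sum) = %.6f\n", std::exp(log_sum));  // same number
    return 0;
}
[/code]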
hgm wrote: Well, multiplying numbers is equivalent to adding their logarithms.

Old Yamaha FM chips used this trick: they added logarithms and finally transformed the signal back using an exp-table lookup.
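Roughly the same idea in software (only an illustration; the fixed-point format and resolution are my assumptions, not the actual Yamaha hardware layout):

[code]
// "Multiply" two amplitudes using only addition, by working with
// fixed-point base-2 logarithms and converting back at the end.
#include <cmath>
#include <cstdio>

// fixed-point log2 with 8 fractional bits (format chosen for illustration)
int to_log(double x)   { return (int)std::lround(std::log2(x) * 256.0); }
// in hardware this back-conversion would be an exp table lookup
double from_log(int l) { return std::exp2(l / 256.0); }

int main() {
    double a = 0.75, b = 0.40;
    int sum = to_log(a) + to_log(b);   // multiplication done with an adder
    std::printf("via log-add : %.4f\n", from_log(sum));
    std::printf("direct a*b  : %.4f\n", a * b);
    return 0;
}
[/code]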
stegemma wrote: Ah, OK, that's right, I hadn't thought of it from this point of view. Maybe it could be done if you start from the probability of a win, computed at runtime for each parameter in the actual position. [...]

I am doing exactly that with my engine (Giraffe, based on machine learning).

Probabilities can only be multiplied if they are independent, though, and most won't be.

For example, from an equal position, queen odds would be worth almost infinity (you are almost surely going to win regardless of what the current eval is). But from a KPPPPvK position, it would be worth about 1 (it doesn't change the probability of winning, since that probability was close to 1 to begin with).

stegemma wrote: Has anybody tried the idea of multiplying parameters instead of adding them? The idea is to use the parameters as a probability of winning when some feature occurs. For example, if I find that having a passed pawn leads to victory 110% of the time, the value of the position can be multiplied by 1.10. If I have a doubled pawn (and maybe some other event occurs), I can multiply the positional value by 0.95, and so on. This might give a stronger interrelation between the parameters than just adding them. I would like to try this in the next version of satana, using the random move generator to collect statistics... but I'm afraid that dott. Hyatt surely already tried it in the '60s, before I was born.

"+" and "*" are equivalent as representations, but "+" is much easier for the processor to calculate.
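A rough sketch of what collecting such statistics could look like (purely illustrative and not satana code; the Record struct and the sample data are invented): estimate the factor for a feature as P(win | feature present) divided by the overall P(win).

[code]
// Estimate a multiplicative feature factor from a batch of game records.
#include <cstdio>
#include <vector>

struct Record {
    bool passed_pawn;  // was the feature present?
    bool won;          // game result for the side with the feature
};

int main() {
    // Tiny invented sample; in practice these would come from many
    // random-mover (or real) games.
    std::vector<Record> games = {
        {true, true}, {true, true}, {true, false},
        {false, true}, {false, false}, {false, false},
    };

    int wins = 0, feature_games = 0, feature_wins = 0;
    for (const Record& r : games) {
        wins += r.won;
        if (r.passed_pawn) { ++feature_games; feature_wins += r.won; }
    }

    double p_win      = double(wins) / games.size();
    double p_win_feat = double(feature_wins) / feature_games;
    std::printf("factor for passed pawn ~ %.2f\n", p_win_feat / p_win);
    return 0;
}
[/code]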
matthewlai wrote: I am doing exactly that with my engine (Giraffe, based on machine learning). Probabilities can only be multiplied if they are independent, though, and most won't be. For example, from an equal position, queen odds would be worth almost infinity (you are almost surely going to win regardless of what the current eval is). But from a KPPPPvK position, it would be worth about 1 (it doesn't change the probability of winning, since that probability was close to 1 to begin with).

Thanks, I had missed your interesting post; today I read it and played a little match. Your approach is very interesting. I have tried neural networks in the past, with no valid results, only something similar to random play.
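To put some numbers on the independence point quoted above, here is a small sketch (again with an assumed logistic model and scale constant, not Giraffe code): the same nominal queen gain multiplies the win probability by very different amounts depending on how decided the position already is.

[code]
// The "value" of queen odds, seen as a factor on the win probability,
// depends heavily on the starting position. Model and K are assumptions.
#include <cmath>
#include <cstdio>

double win_probability(double eval_pawns, double K = 1.5) {
    return 1.0 / (1.0 + std::exp(-eval_pawns / K));
}

int main() {
    const double queen = 9.0;                  // nominal queen value in pawns
    double base_evals[] = {-6.0, 0.0, 6.0};    // losing, equal, already winning
    for (double e : base_evals) {
        double before = win_probability(e);
        double after  = win_probability(e + queen);
        std::printf("base eval %+5.1f: P(win) %.3f -> %.3f (factor %.2f)\n",
                    e, before, after, after / before);
    }
    return 0;
}
[/code]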
stegemma wrote: Your approach is very interesting. I have tried neural networks in the past, with no valid results, only something similar to random play. [...]

Yeah, it's really all about feature representation. A lot of the time the intuitive representation is far from the best. People often choose the most intuitive representation without much thought, and then wonder why neural nets don't work well (and it's really not surprising that they don't).
I think that a single neural network is not enough for chess. Maybe many neural networks should be used, one for each aspect of the game. Of course your work is still impressive, and I hope that it will be only a starting point for you.