Desperado wrote: Hi Bob,
what do you think?
1:
Will iterating/re-searching strengthen or weaken the _highMobility_ effect?
I suspect "no effect." If you re-search, you search a different tree with different evals anyway, even if the moves are the same.
2:
Will pruning techniques strengthen or weaken the _highMobility_ effect?
They have to weaken it. The trivial case is a search with a branching factor of 1, so that at every ply you only look at one move. There is no way for that to emulate mobility.
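To make the branching-factor argument concrete, here is a minimal sketch (my own illustration, not Crafty code): the score a max node backs up from n uniformly random children has expectation n/(n+1), so it grows with mobility, while with a branching factor of 1 it is just a single draw with mean 0.5 and carries no mobility signal at all.

```python
import random

def backed_up_score(branching, trials=20000, seed=42):
    """Average score a max node backs up when each of its
    `branching` children receives an independent Uniform(0,1) eval."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.random() for _ in range(branching))
    return total / trials

# E[max of n uniforms] = n/(n+1): the backed-up score rises with the
# number of moves, which is exactly the mobility signal Beal described.
for n in (1, 2, 5, 20):
    print(n, backed_up_score(n))
```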
3:
Will the choice of the random function influence the _highMobility_ effect?
I think all that is needed is decent randomness with a uniform distribution.
4: qSearch issues
a:
the role of qSearch is a very important difference (imho), because mobility will not only be added (sampled) at a fixed depth, but collected over the plies (the longer the line, the higher the collected mobility?)
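One way to picture what gets "collected" (a hypothetical two-ply sketch, not code from this thread): with a random eval, a root move that cramps the opponent to 2 replies backs up the minimum of 2 random numbers, while a move that allows 10 replies backs up the minimum of 10, so the cramping move is preferred in most trials.

```python
import random

def beal_preference(trials=10000, seed=1):
    """Fraction of trials in which the root (max player) prefers
    move A (opponent left 2 replies) over move B (opponent left 10),
    when every leaf gets an independent Uniform(0,1) eval."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(trials):
        value_a = min(rng.random() for _ in range(2))   # opponent cramped
        value_b = min(rng.random() for _ in range(10))  # opponent mobile
        if value_a > value_b:
            a_wins += 1
    return a_wins / trials

print(beal_preference())  # about 5/6: restricting the opponent wins out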
We are not really "collecting" mobility. We are just backing up the probability that one branch has higher mobility for us and less for the opponent, because of how the random numbers are sampled and backed up as if they were scores. You don't want narrow, deep branches; that would break the basic concept that more branches at a node means a greater probability of getting a good random number.
b:
Strongly different search depths are reached with the different random-evaluation types because of the stand-pat condition.
That may lead to a significant change in strength?
(see some posts before if you like)
Good question. But I'm not an expert on this topic (I doubt there is an expert, in fact, since it is not particularly useful for high-strength applications). I tried two ranges, 0 to 99 and -99 to 99, and did not find any statistical difference in strength.
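One reason the two ranges could behave the same at fixed depth (a sketch assuming a plain two-sided minimax, not Crafty's actual search): the move choice depends only on the rank order of the random scores, so any strictly increasing remap of the leaf values, e.g. 0..99 onto -99..99, picks the same root move. Stand-pat interactions in the qsearch could still differ, which is what the empirical test addresses.

```python
import random

rng = random.Random(7)

def random_tree(depth, branching):
    if depth == 0:
        return rng.uniform(0, 99)             # random eval in [0, 99]
    return [random_tree(depth - 1, branching) for _ in range(branching)]

def minimax(node, maximizing):
    if not isinstance(node, list):
        return node
    vals = [minimax(child, not maximizing) for child in node]
    return max(vals) if maximizing else min(vals)

def best_move(tree):
    """Root is the max player; pick the child with the best backed-up score."""
    scores = [minimax(child, False) for child in tree]
    return scores.index(max(scores))

def remap(node):
    """Strictly increasing remap of every leaf from [0, 99] into [-99, 99]."""
    if isinstance(node, list):
        return [remap(child) for child in node]
    return 2 * node - 99

tree = random_tree(3, 4)
# Rank order of the scores is all that matters: the chosen move is identical.
assert best_move(tree) == best_move(remap(tree))
print(best_move(tree))
```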
c:
there is no constant ratio between tactical moves and quiet moves (or is there?), so mobility/tacticalMobility is something different?
I'm not quite sure what "tactical mobility" means.
5:
All these questions make me think we are talking about a _craftyEffect_ and _not_ the _BealEffect_, because the (pre)conditions are _completely_ different.
Different how? Crafty's search and Beal's search are quite similar when you use skill=1 in Crafty, since all the selective stuff goes away. The tree becomes fixed-depth, with no extensions or reductions, and no pruning. I am not quite sure what you mean by "completely different."
- no standard minimax (alphaBeta _can_ sample in another way?)
Simple idea: if you order trees worst-first, alpha/beta and minimax search the _same_ tree exactly. Alpha/beta depends on good ordering to beat minimax. With a random evaluation, there is no way to produce "good ordering" so I do not see this as a significant difference.
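The ordering point can be illustrated with a toy tree (my own sketch, not Crafty code): alpha/beta returns exactly the minimax value no matter how siblings are ordered; ordering only changes how many leaves it has to visit, and worst-first ordering drives it toward the full minimax tree.

```python
import random

rng = random.Random(3)

def make_tree(depth, branching):
    """Random game tree: leaves are Uniform(0,1) 'evals'."""
    if depth == 0:
        return rng.random()
    return [make_tree(depth - 1, branching) for _ in range(branching)]

def minimax(node, maximizing):
    if not isinstance(node, list):
        return node
    vals = [minimax(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def order(node, maximizing, best_first):
    """Sort siblings by exact minimax value (illustration only: a real
    engine never knows these values in advance)."""
    if not isinstance(node, list):
        return node
    kids = [order(c, not maximizing, best_first) for c in node]
    kids.sort(key=lambda c: minimax(c, not maximizing),
              reverse=(maximizing == best_first))
    return kids

def alphabeta(node, alpha, beta, maximizing, counter):
    if not isinstance(node, list):
        counter[0] += 1                       # leaves actually visited
        return node
    if maximizing:
        v = float('-inf')
        for c in node:
            v = max(v, alphabeta(c, alpha, beta, False, counter))
            alpha = max(alpha, v)
            if alpha >= beta:
                break                         # cutoff
        return v
    v = float('inf')
    for c in node:
        v = min(v, alphabeta(c, alpha, beta, True, counter))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v

tree = make_tree(4, 3)                        # full tree has 3**4 = 81 leaves
leaf_counts = {}
for label, best in (("best-first", True), ("worst-first", False)):
    ordered = order(tree, True, best)
    counter = [0]
    value = alphabeta(ordered, float('-inf'), float('inf'), True, counter)
    assert value == minimax(tree, True)       # ordering never changes the value
    leaf_counts[label] = counter[0]

print(leaf_counts)  # best-first visits far fewer leaves than worst-first
```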
- no measurement of mobility, but of tacticalMobility
- collecting "mobility" over the plies (no fixed-depth comparison of the mobility (qsearch), which is simply a different way to sample the mobility quantity)
The term "tactical mobility" doesn't mean anything to me. But in Crafty there _is_ a fixed-depth search, since skill=1 turns all pruning, reductions and extensions completely off...
- iteration, re-searching, extension, pruning issues
Do you agree?
Regards
PS: although I am asking Bob directly, of course everyone's input is very welcome. Thanks.
Clearly with a random search the concept of a "re-search" is not quite the same, but the effect can be. A fail-high could mean you found a path that leads to greater mobility. A re-search won't produce the same values, but you should see a similar result, since there really is more mobility. Iterative search does have an effect, since you will always search the move with the greatest mobility first, because of the previous iteration. Whether it pays off or not is unknown. Since pruning (except for alpha/beta) is disabled at skill=1, that isn't an issue for the case I have been looking at.