Rémi Coulom wrote:
Using a wide range and letting CLOP focus on the right interval by itself is the most efficient. As I wrote, this is one advantage of CLOP over Stockfish's tuning method: the user of the algorithm does not have to guess good values for such parameters of the optimization algorithm. CLOP will figure out good values by itself.

zamar wrote:
I think that answering the following question would be of great practical importance: how many iterations does CLOP need to outperform the SPSA method with a good starting value? (You've already shown that this happens in the limit.)

This depends on how good the starting value is. If the starting value is already optimal, there won't be any improvement over it.
In general, it is very difficult to guess the answer to this kind of question, especially since I have little experience with your algorithm. The best way would be to run an experiment.
Testing on artificial problems is most convenient, if you can do it. You can then compare to the plots in my paper. Otherwise, it would be interesting to test on a chess program.
This is how I would do it: for each algorithm, repeat a learning experiment 10-100 times, each time with the same total number of games but with a different random seed. Then, for each algorithm (and each replication of the experiment), plot the estimated optimal parameter as a function of time.
This would give a good comparison of how they behave.
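As a sketch only, the protocol could look like the following. Everything here is a toy stand-in, not from the post: the quadratic "win rate" peaked at `OPTIMUM`, the simplified SPSA-style update, and the gain constants `a` and `c` are all invented so the script runs on its own; a real experiment would replace `spsa_like_run` with actual CLOP and SPSA runs against an engine. Only the replicate-with-different-seeds-and-record-trajectories structure is the point.

```python
import random

OPTIMUM = 0.3  # true optimum of the toy problem (an assumption, not from the post)

def noisy_game(x, rng):
    """One simulated game: win probability peaks at OPTIMUM."""
    p = 0.5 - (x - OPTIMUM) ** 2
    return 1 if rng.random() < p else 0

def spsa_like_run(total_games, seed, start=0.25):
    """Highly simplified SPSA-style tuning run on the toy problem.

    Records the estimated optimal parameter after every pair of games,
    so the whole trajectory can be plotted against time.
    """
    rng = random.Random(seed)
    x = start
    a, c = 0.01, 0.05  # gain constants, chosen for the toy only
    history = []
    for k in range(1, total_games // 2 + 1):
        delta = rng.choice([-1.0, 1.0])
        g_plus = noisy_game(x + c * delta, rng)
        g_minus = noisy_game(x - c * delta, rng)
        grad = (g_plus - g_minus) / (2.0 * c * delta)  # SPSA gradient estimate
        x += (a / k) * grad                            # ascent step, decaying gain
        history.append(x)
    return history

def summarize(histories):
    """Mean estimate across replications at each point in time."""
    n = len(histories[0])
    return [sum(h[i] for h in histories) / len(histories) for i in range(n)]

if __name__ == "__main__":
    # 20 replications of the same game budget, differing only in the seed.
    reps = [spsa_like_run(2000, seed) for seed in range(20)]
    mean = summarize(reps)
    print(f"mean estimate after {2 * len(mean)} games: {mean[-1]:.3f}")
```

Plotting the individual per-replication trajectories (e.g. with matplotlib) rather than printing the mean is what makes the comparison informative: the spread between seeds shows how reliably each algorithm converges for a given number of games.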
But I have no time to do it.