It is often useful to move to log scale when searching for parameters over a wide range (1e-9, 1e9). In my case I am estimating parameters of an ODE system over the range (1e-9, 1e9) using derivative-free optimization algorithms from NLopt. Moving to log scale (-9, 9) gives good results for the global methods, but slows down the computations for the simplex methods (SBPLX, NELDERMEAD). Are there any recommendations for changing the algorithm coefficients (alpha, gamma, etc.) when using log scale?
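For context, the log-scale transformation I mean is just wrapping the objective so the optimizer works in y = log10(x) coordinates; a minimal sketch (the wrapper name and toy objective are made up for illustration):

```python
import math

def make_log_scale_objective(f):
    """Wrap an objective f(x) so the optimizer searches y = log10(x).

    The optimizer works with y in (-9, 9); each evaluation maps back
    with x = 10**y before calling the original objective.
    """
    def wrapped(y):
        x = [10.0 ** yi for yi in y]
        return f(x)
    return wrapped

# Toy objective whose minimum sits at x = 1e-3, i.e. y = -3 on log scale.
def objective(x):
    return (math.log10(x[0]) + 3.0) ** 2

g = make_log_scale_objective(objective)
value_at_optimum = g([-3.0])  # essentially 0 at the log-scale optimum
```

Any NLopt algorithm would then be given `wrapped` with bounds (-9, 9) instead of the original objective with bounds (1e-9, 1e9).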
The simplex method family adapts slowly to changes in scale. You can change the expansion, contraction, and shrink coefficients, e.g. \gamma=\kappa, \rho=\sigma=1/\kappa, where \kappa=2 is usually the default; you would need to increase it, but this may slow down convergence as the algorithm approaches the optimum.
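To make the roles of these coefficients concrete, here is a minimal textbook Nelder-Mead in Python with the reflection (alpha), expansion (gamma), contraction (rho), and shrink (sigma) coefficients exposed. This is an illustrative sketch, not NLopt's implementation; increasing gamma and decreasing rho/sigma makes the simplex resize more aggressively:

```python
def nelder_mead(f, x0, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5,
                step=0.5, max_iter=1000, tol=1e-12):
    """Textbook Nelder-Mead with configurable coefficients:
    alpha = reflection, gamma = expansion, rho = contraction, sigma = shrink."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                      # initial simplex around x0
        p = list(x0); p[i] += step; simplex.append(p)
    fvals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        # Reflect the worst point through the centroid.
        xr = [centroid[j] + alpha * (centroid[j] - worst[j]) for j in range(n)]
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:
            # Expand further along the reflection direction.
            xe = [centroid[j] + gamma * (xr[j] - centroid[j]) for j in range(n)]
            fe = f(xe)
            if fe < fr:
                simplex[-1], fvals[-1] = xe, fe
            else:
                simplex[-1], fvals[-1] = xr, fr
        else:
            # Contract toward the centroid.
            xc = [centroid[j] + rho * (worst[j] - centroid[j]) for j in range(n)]
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:
                # Shrink the whole simplex toward the best vertex.
                best = simplex[0]
                for i in range(1, n + 1):
                    simplex[i] = [best[j] + sigma * (simplex[i][j] - best[j])
                                  for j in range(n)]
                    fvals[i] = f(simplex[i])
    return simplex[min(range(n + 1), key=lambda i: fvals[i])]

# Example: minimize (x-2)^2 + (y+1)^2, optimum at (2, -1).
xmin = nelder_mead(lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

With such a standalone implementation you can experiment with \kappa before deciding whether patching NLopt is worth it.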
Figuring out how to use BFGS or similar, perhaps with automatic differentiation, may be a better option.
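If hand-coding derivatives for the ODE objective is the obstacle, forward-mode automatic differentiation via dual numbers is one lightweight way to get exact gradients for a gradient-based method. A minimal self-contained sketch (not tied to NLopt's API; a real setup would more likely use an AD library):

```python
class Dual:
    """Forward-mode AD number: val + eps*der, with eps**2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return Dual(o) - self
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def grad(f, x):
    """Gradient of scalar f at x via one forward pass per coordinate."""
    g = []
    for i in range(len(x)):
        args = [Dual(x[j], 1.0 if j == i else 0.0) for j in range(len(x))]
        g.append(f(args).der)
    return g

# f(x, y) = x*x*y + y; df/dx = 2*x*y, df/dy = x*x + 1.
def f(v):
    x, y = v
    return x * x * y + y

gradient = grad(f, [3.0, 2.0])  # [12.0, 10.0]
```

The resulting gradient function could then be fed to a quasi-Newton routine such as NLopt's LD_LBFGS.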
Is there a way to change those coefficients in NLopt?
No, they are currently hardcoded in the C library.
In general, I would recommend other algorithms for most problems. Even if it is inconvenient to compute derivatives, I would still tend to try something like BOBYQA first.
Steven, thanks for your response.
Yes, I am aware that NELDERMEAD is not that reliable in some cases. However, for my problems (ODE parameter estimation with inequality constraints), LN_AUGLAG + LN_NELDERMEAD finds a good solution 10 times faster than BOBYQA/COBYLA and 5 times faster than SBPLX.
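For readers unfamiliar with that combination: AUGLAG folds the inequality constraints into a penalized objective, minimizes it with the chosen inner algorithm, then updates the multipliers and repeats. A toy 1-D Python sketch of the idea (not NLopt's implementation; a simple ternary search stands in for the inner Nelder-Mead):

```python
def ternary_min(h, lo, hi, iters=200):
    """Minimize a unimodal 1-D function on [lo, hi] by ternary search
    (stand-in for the derivative-free inner solver)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if h(m1) < h(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def auglag(f, g, lo, hi, mu=1.0, lam=0.0, outer=20):
    """Augmented Lagrangian for: minimize f(x) subject to g(x) <= 0.
    Each outer iteration minimizes the penalized objective with the
    inner solver, then updates the multiplier and penalty weight."""
    x = (lo + hi) / 2
    for _ in range(outer):
        def L(z):
            return f(z) + max(0.0, lam + mu * g(z)) ** 2 / (2 * mu)
        x = ternary_min(L, lo, hi)
        lam = max(0.0, lam + mu * g(x))  # multiplier update
        mu *= 2.0                        # tighten the penalty
    return x

# Toy problem: minimize x^2 subject to 1 - x <= 0 (i.e. x >= 1); optimum x = 1.
x_star = auglag(lambda x: x * x, lambda x: 1.0 - x, 0.0, 5.0)
```

In NLopt the same pattern would use LN_AUGLAG as the outer algorithm with LN_NELDERMEAD set as its local optimizer.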