Optimization routines for BigFloat/double-double/higher-precision objective and gradient functions?

Are there Julia optimizers that operate on BigFloat/double-double/higher-precision objective and gradient functions? I am specifically interested in L-BFGS functionality for higher than double precision.

Optim.jl. Just use BigFloats for the input:

julia> using Optim

julia> rosenbrock(x) =  (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
rosenbrock (generic function with 1 method)

julia> result = optimize(rosenbrock, zeros(BigFloat,2), BFGS())
Results of Optimization Algorithm
 * Algorithm: BFGS
 * Starting Point: [0.000000000000000000000000000000000000000000000000000000000000000000000000000000,0.000000000000000000000000000000000000000000000000000000000000000000000000000000, ...]
 * Minimizer: [9.999999999373609692714773387925189802216770854901569767520733029701888805537488e-01,9.999999998686212383277986098067549626040084142404058234279353112630934671007151e-01, ...]
 * Minimum: 7.645502e-21
 * Iterations: 16
 * Convergence: true
   * |x - x'| < 1.0e-32: false
     |x - x'| = 3.48e-07
   * |f(x) - f(x')| / |f(x)| < 1.0e-32: false
     |f(x) - f(x')| / |f(x)| = 9.03e+06
   * |g(x)| < 1.0e-08: true
     |g(x)| = 2.32e-09
   * Stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 53
 * Gradient Calls: 53

Then just change the tolerances as needed.
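Untested sketch of how that might look with tighter tolerances and a hand-written gradient (the rosenbrock_grad! name and the specific tolerance values are just illustrative; this assumes a recent Optim where the in-place gradient signature is g!(G, x) and where Options handles BigFloat tolerances, see the edit below):

using Optim

rosenbrock(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

# Hand-written gradient; no type annotations, so it also works elementwise with BigFloat
function rosenbrock_grad!(G, x)
    G[1] = -2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2)
    G[2] = 200 * (x[2] - x[1]^2)
    return G
end

setprecision(BigFloat, 512)   # optional: bump BigFloat precision from the default 256 bits

opts = Optim.Options(g_tol = big"1e-40", x_tol = big"1e-40", f_tol = big"1e-40")

optimize(rosenbrock, rosenbrock_grad!, zeros(BigFloat, 2), BFGS(), opts)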


L-BFGS should work as well, LBFGS(); if not, it’s a bug.

Edit:

Here it is with L-BFGS. Notice that there appear to be some annoyances involved: all the tolerances have to be set, because they are parametrically bound to be the same type and we do not promote in the Options constructor. Sad. I’ll fix it. Also, the time_limit option is parametrically bound to the tolerance type, which is not intended either.

julia> optimize(rosenbrock, zeros(BigFloat,2), LBFGS(), Optim.Options(g_tol=big(1e-22), x_tol=big(1e-22), f_tol=big(1e-22), time_limit=big(12.0)))
Results of Optimization Algorithm
 * Algorithm: L-BFGS
 * Starting Point: [0.000000000000000000000000000000000000000000000000000000000000000000000000000000,0.000000000000000000000000000000000000000000000000000000000000000000000000000000, ...]
 * Minimizer: [1.000000000000000000000000000000000010666098084249484950300979377269148279250591,1.000000000000000000000000000000000021348772994712052750399835560457610972025756, ...]
 * Minimum: 1.137931e-70
 * Iterations: 26
 * Convergence: true
   * |x - x'| ≤ 1.0e-22: true 
     |x - x'| = 1.02e-24 
   * |f(x) - f(x')| ≤ 1.0e-22 |f(x)|: false
     |f(x) - f(x')| = 1.93e+24 |f(x)|
   * |g(x)| ≤ 1.0e-22: true 
     |g(x)| = 1.47e-35 
   * Stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 73
 * Gradient Calls: 73

Edit 2:
https://github.com/JuliaNLSolvers/Optim.jl/pull/509


Actually, I found out this worked a while back because Chris mentioned that he wasn’t sure if Optim supported it, so I just tried it out. As many people have found before me, this is one of the great strengths of Julia: you just avoid typing stuff too tightly, and then you get things like (performant!) BigFloat support for free without it ever having been a development goal. Good stuff.
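For example, a double-double type such as Double64 from DoubleFloats.jl should in principle go through the same generic code path, though I haven’t checked it; that package and type are just one candidate, not something Optim specifically targets:

using Optim, DoubleFloats

rosenbrock(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

# Same call as above, only the element type of the starting point changes
optimize(rosenbrock, zeros(Double64, 2), BFGS())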
