# Arbitrary Precision Optimization

Hi,

I am trying to minimize a function over BigFloat numbers using the BlackBoxOptim.jl package, but I cannot get it to work.
Is there a way to do that? As an example, I tried to do it with the Rosenbrock function.

```julia
using BlackBoxOptim

function rosenbrock2d(x)
    return (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
end

res = bboptimize(rosenbrock2d; SearchRange = [(-BigFloat(5//3), BigFloat(5//3)), (-BigFloat(5//3), BigFloat(5//3))])
```

This works, but the result is a Float64.
If I do

```julia
function rosenbrock2d(x)
    return BigFloat((1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2)
end

res = bboptimize(rosenbrock2d; SearchRange = [(-BigFloat(5//3), BigFloat(5//3)), (-BigFloat(5//3), BigFloat(5//3))])
```

instead, I get the following error

```
ArgumentError: The supplied fitness function does NOT return the expected fitness type Float64 when called with a potential solution (when called with [0.6116625671601585, -1.6325667883503008] it returned 402.83444589341110031455173157155513763427734375 of type BigFloat so we cannot optimize it!

Stacktrace:
 [1] setup_problem(func::Function, parameters::ParamsDictChain)
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:40
 [2] bbsetup(functionOrProblem::Function, parameters::Dict{Symbol, Any}; kwargs::Base.Pairs{Symbol, Vector{Tuple{BigFloat, BigFloat}}, Tuple{Symbol}, NamedTuple{(:SearchRange,), Tuple{Vector{Tuple{BigFloat, BigFloat}}}}})
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:111
 [3] bboptimize(functionOrProblem::Function, parameters::Dict{Symbol, Any}; kwargs::Base.Pairs{Symbol, Vector{Tuple{BigFloat, BigFloat}}, Tuple{Symbol}, NamedTuple{(:SearchRange,), Tuple{Vector{Tuple{BigFloat, BigFloat}}}}})
   @ BlackBoxOptim ~/.julia/packages/BlackBoxOptim/I3lfp/src/bboptimize.jl:92
 [4] top-level scope
   @ In:1
 [5] eval
   @ ./boot.jl:368 [inlined]
 [6] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
```

How can I solve this? I would also be very glad if you could point me to other global optimization techniques that allow me to use BigFloats.

Thanks a lot!

---

I don’t think that package supports arbitrary precision.

To me, this seems like a contradiction in terms. Black-box global optimization is extremely slow to begin with, and if you want to converge to more than the roughly 15 significant digits that Float64 already provides, it will take impractically long.

If you merely need `BigFloat` precision for intermediate calculations in your objective (e.g. because you have a numerically unstable algorithm), then just convert the inputs from `Float64` to `BigFloat` on each call to your objective function, use as much precision as you need to get an accurate result, and then round the objective result back to `Float64` upon return.
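
A minimal sketch of that pattern, reusing the Rosenbrock objective from this thread (the wrapper name and the precision setting are my own choices, not part of the BlackBoxOptim API):

```julia
using BlackBoxOptim

setprecision(BigFloat, 256)  # pick however many bits the intermediate math needs

function rosenbrock2d_hiprec(x)
    xb = BigFloat.(x)  # promote the Float64 inputs to BigFloat
    f = (1 - xb[1])^2 + 100 * (xb[2] - xb[1]^2)^2  # numerically sensitive work goes here
    return Float64(f)  # round back so the optimizer sees the Float64 it expects
end

res = bboptimize(rosenbrock2d_hiprec; SearchRange = [(-5/3, 5/3), (-5/3, 5/3)])
```

Note that the SearchRange is back to plain Float64 tuples; the extra precision lives only inside the objective.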

(Local optimization to arbitrary precision is more practical to consider, but that uses totally different algorithms and you may have better luck finding Julia local-optimization code that supports generic arithmetic types.)
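
For instance (my suggestion, not something the post above guarantees): Optim.jl's pure-Julia algorithms are generic over the element type of the starting point, so a local Nelder-Mead run can be carried out entirely in BigFloat, along these lines:

```julia
using Optim

rosen(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

x0 = BigFloat[-1, 1]                      # a BigFloat start keeps all iterates in BigFloat
res = optimize(rosen, x0, NelderMead())
Optim.minimizer(res)                      # should be a Vector{BigFloat}
```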
