I know that JuMP does not support BigFloat; however, Hypatia does. Is it possible to define a problem in JuMP, in Float64, but cause Hypatia to carry out the optimization in BigFloat? I am working on a problem where the problem definition is adequately represented in Float64, but Hypatia sometimes has difficulty converging. I would like to increase the number of digits used by Hypatia to obtain a solution.
I don’t think it’s there: Generic numeric type in JuMP · Issue #2025 · jump-dev/JuMP.jl · GitHub
If the problem data is accurately represented in Float64, then you don’t need Generic numeric type in JuMP · Issue #2025 · jump-dev/JuMP.jl · GitHub. Hypatia could expose an option to compute in a different precision than the input data. I’d suggest requesting this feature.
One option, if you want to use JuMP to build the model, is to extract the Hypatia model data before optimizing, convert c, A, b, G, h and the cones to the numeric type you want, and then use Hypatia’s native interface to optimize that model. You can get the Hypatia MOI optimizer (the object in src/mathoptinterface.jl) using the functions described in Models · JuMP, and then get the model field. You can make a new Hypatia Model object with the converted types and solve it as in Solving · Hypatia. The catch is that you won’t be able to use JuMP to query the solutions, of course.
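To illustrate the second half of that workaround, here is a minimal sketch of solving a tiny LP in BigFloat through Hypatia’s native interface, along the lines of the Solving · Hypatia docs. The toy problem data is made up, and the exact constructor and accessor names are assumed from the docs and may differ between Hypatia versions:

```julia
using Hypatia

T = BigFloat
setprecision(256)  # number of bits carried by BigFloat arithmetic

# Hypatia's conic form: min c'x  s.t.  b - A*x = 0,  h - G*x in K.
# Toy example: min x1 + 2*x2  s.t.  x1 + x2 = 2,  x >= 0.
c = T[1, 2]
A = T[1 1]
b = T[2]
G = T[-1 0; 0 -1]     # h - G*x = x, so the cone constraint enforces x >= 0
h = zeros(T, 2)
cones = Hypatia.Cones.Cone{T}[Hypatia.Cones.Nonnegative{T}(2)]

model = Hypatia.Models.Model{T}(c, A, b, G, h, cones)
solver = Hypatia.Solvers.Solver{T}(verbose = true)
Hypatia.Solvers.load(solver, model)
Hypatia.Solvers.solve(solver)

Hypatia.Solvers.get_status(solver)      # e.g. Solvers.Optimal
Hypatia.Solvers.get_primal_obj(solver)  # objective value as a BigFloat
Hypatia.Solvers.get_x(solver)           # primal solution as a Vector{BigFloat}
```

The setprecision call is the knob to turn if you want even more digits; the JuMP model would only supply the Float64 data that you convert into c, A, b, G and h.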
I would suggest looking at https://github.com/JuliaMath/DoubleFloats.jl for some performance tips on selecting a higher precision type.
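For example, Double64 from that package can stand in for BigFloat as the numeric type in the sketches in this thread (a hypothetical swap, assuming Hypatia’s generic code accepts the type):

```julia
using DoubleFloats

# Double64 gives roughly twice the precision of Float64 and is typically
# much faster than BigFloat; it behaves like any other AbstractFloat.
T = Double64
T(1) / T(3)
```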
Talking to MOI/JuMP devs now - it seems to me to be a JuMP feature that needs to be supported rather than a Hypatia feature - Generic numeric type in JuMP · Issue #2025 · jump-dev/JuMP.jl · GitHub - because Hypatia already allows any MOI model in any real floating point type (see our MOI tests at https://github.com/chriscoey/Hypatia.jl/blob/e3672d2f1ad1fc1418a4a81e5fe12df26e4ff1e2/test/runmoitests.jl#L35).
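For reference, here is a rough sketch of driving Hypatia through MOI directly with BigFloat data, which sidesteps JuMP’s Float64 restriction. This is not from the thread; the wrapper pattern is assumed from current MOI and Hypatia APIs, so double-check it against your installed versions:

```julia
import MathOptInterface as MOI
import Hypatia

T = BigFloat

# Cache + bridge layer, all parameterized by T, around Hypatia's generic optimizer.
cache = MOI.Utilities.CachingOptimizer(
    MOI.Utilities.UniversalFallback(MOI.Utilities.Model{T}()),
    Hypatia.Optimizer{T}(),
)
opt = MOI.Bridges.full_bridge_optimizer(cache, T)

# min x  s.t.  x - 1 in Nonnegatives(1), i.e. x >= 1, with all coefficients as BigFloat.
x = MOI.add_variable(opt)
MOI.set(opt, MOI.ObjectiveSense(), MOI.MIN_SENSE)
f_obj = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(T(1), x)], T(0))
MOI.set(opt, MOI.ObjectiveFunction{typeof(f_obj)}(), f_obj)
f_con = MOI.VectorAffineFunction(
    [MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(T(1), x))], T[-1],
)
MOI.add_constraint(opt, f_con, MOI.Nonnegatives(1))

MOI.optimize!(opt)
MOI.get(opt, MOI.TerminationStatus())
MOI.get(opt, MOI.VariablePrimal(), x)  # should be 1 to BigFloat precision
```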
Thank you for all the answers, especially from Chris. I will try this workaround.