Can I declare variables as Float32 or Float16 in JuMP?

Hi all,

I’m working on an optimization problem using JuMP with IPOPT, and I noticed that by default, variables are declared as Float64. I would like to explore using Float32 and Float16 for my variables instead. Additionally, I am interested in using the StochasticRounding package, specifically to cast variables as stochastic_round(Float16, x).

Is there a recommended way to configure JuMP and IPOPT to use these lower-precision types? Also, how can I integrate stochastic rounding into this process?

Thanks in advance for any suggestions!


Regarding JuMP.jl, the GenericModel API seems like what you want: it lets you build a model whose value type is something other than Float64.
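
A minimal sketch of building a model in Float32 (model construction only; solving it still needs a solver that accepts that precision):

```julia
using JuMP

# Model() is shorthand for GenericModel{Float64}; other value types can be
# requested explicitly. GenericModel{Float16} works the same way for construction.
model = GenericModel{Float32}()
@variable(model, x >= 0)
@variable(model, 0 <= y <= 1)
@objective(model, Min, 2x + y)
@constraint(model, x + y >= Float32(0.5))

# The stored data uses the requested precision:
objective_function(model)  # an affine expression with Float32 coefficients
```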

Another option is to use MathOptInterface.jl.
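
At that level the coefficient type is also a parameter. A minimal sketch using MOI's in-memory model (no solver attached):

```julia
import MathOptInterface as MOI

# An in-memory MOI model whose constants and coefficients are stored as Float32.
model = MOI.Utilities.Model{Float32}()
x = MOI.add_variable(model)
MOI.add_constraint(model, x, MOI.GreaterThan(Float32(0)))

# Objective: minimize 2x, built with Float32 data.
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(Float32(2), x)], Float32(0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
```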

But I’m not sure which arithmetic types the Ipopt solver supports. You might have better luck with a solver implemented in Julia?
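
For instance, Clarabel.jl (a conic/QP solver written in Julia) is parameterized on the float type. A sketch, assuming Clarabel accepts Float32 (the JuMP docs show the same pattern with BigFloat):

```julia
using JuMP
import Clarabel

# Match the solver's precision to the model's value type.
model = GenericModel{Float32}(Clarabel.Optimizer{Float32})
@variable(model, x >= 1)
@objective(model, Min, 2x)
optimize!(model)

value(x)  # returned as a Float32
```

Note that Clarabel handles conic/quadratic problems, not general nonlinear ones like Ipopt, so this only helps if your problem fits that class.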

Hi @Aassis_b, welcome to the forum.

No, Ipopt supports only Float64.