Use low precision float type in JuMP

I’m working on a large linear SDP with JuMP and COSMO and am trying to find out whether low floating-point precision contributes to a convergence problem. To test this, I want to change the float type of a smaller problem that solves fine with Float64 and see whether the convergence problem recurs. Here is what I did:

model = Model(COSMO.Optimizer{Float16})
# And pass to the model Float16 version of constraints and objective function

However, an error occurs:

ERROR: MathOptInterface.UnsupportedAttribute{MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarAffineFunction{Float64}}}: Attribute MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarAffineFunction{Float64}}() is not supported by the model.

So I checked the type of the objective function, and it turns out the sum of two VariableRefs has type GenericAffExpr{Float64,VariableRef} by default, which explains the error.
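For reference, here is a minimal snippet that reproduces the default coefficient type (the exact printed type may vary slightly across JuMP versions):

```julia
using JuMP

m = Model()
@variable(m, x)
@variable(m, y)

# The coefficients of an affine expression default to Float64,
# regardless of which optimizer is attached to the model.
typeof(x + y)  # GenericAffExpr{Float64, VariableRef}
```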

So, is there any way to work around this? Can I build an objective function of type GenericAffExpr{Float16,VariableRef}?


JuMP currently only supports Float64 (see Generic numeric type in JuMP · Issue #2025 · jump-dev/JuMP.jl · GitHub). JuMP can be seen as a front end to MathOptInterface (MOI), which does support generic number types. So depending on the problem, you could try using MOI directly, or another front end such as Convex.jl. But it might not be easy, or work as well.
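As a rough sketch of the MOI-level route (assuming a recent MOI where variables can be constrained directly, and using Float32 since COSMO does not support Float16; names may differ across versions):

```julia
import MathOptInterface as MOI
import COSMO

# Build a Float32 model at the MOI level: minimize x subject to x >= 1.
model = MOI.Utilities.Model{Float32}()
x = MOI.add_variable(model)
MOI.add_constraint(model, x, MOI.GreaterThan(Float32(1)))

# The objective is a ScalarAffineFunction{Float32}, so the
# UnsupportedAttribute error from the Float64 default does not arise.
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(Float32(1), x)], Float32(0))
MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)

# Copy into a Float32 COSMO optimizer and solve.
optimizer = COSMO.Optimizer{Float32}()
MOI.copy_to(optimizer, model)
MOI.optimize!(optimizer)
```

The trade-off is that you lose JuMP's macro syntax and have to spell out functions and sets by hand, but the number type is generic throughout.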

2 Likes

Note that according to Arbitrary Precision · COSMO.jl, COSMO supports Float64 and Float32 for SDPs, but not Float16.

IMO, solvers like COSMO should offer an option to solve in a precision different from that of the input, but until that’s supported, you’ll need a workaround using the MOI-level API.
