NLP: Attribute ScalarNonlinearFunction is not supported by the model

Hi, I am working on an optimization problem, described in the PDF document in the GitHub repository. The nonlinearity is introduced mainly by the equation for G in (16).

I implemented the model with JuMP, following the PDF description closely. The code and example data are all provided in the GitHub repository. Since there is a nonlinear term in the objective (10), I chose the SCIP solver and also the Alpine solver. Both reported the error below:

ERROR: MathOptInterface.UnsupportedAttribute{MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarNonlinearFunction}}: 
Attribute MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarNonlinearFunction}() is not supported by the model.

After some Googling, one suggestion was to replace the nonlinear term in the objective with an auxiliary variable (see f_obj_aux in the code nlp.jl) and move that term into the constraints. However, I still got a similar error:

ERROR: MathOptInterface.UnsupportedConstraint{MathOptInterface.ScalarNonlinearFunction, MathOptInterface.LessThan{Float64}}: `MathOptInterface.ScalarNonlinearFunction`-in-`MathOptInterface.LessThan{Float64}` constraint is not supported by the model.
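
In essence, the attempted reformulation looks like this sketch (the nonlinear term below is a hypothetical stand-in, not the actual expression from nlp.jl):

using JuMP
model = Model()
@variable(model, x[1:2] >= 0)
@variable(model, f_obj_aux)
# Hypothetical nonlinear term standing in for the actual objective term;
# this is what the solver then rejects as an unsupported
# ScalarNonlinearFunction constraint.
@constraint(model, f_obj_aux >= x[1] * exp(x[2]))
@objective(model, Min, f_obj_aux)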
  • Is it due to the solver? Perhaps there is some solver capable of handling it.
  • Should I reformulate the problem to make it better posed? But how? The problem can probably be turned into a recognized form, but I don’t know how; I am no expert in optimization.
    Any suggestion is appreciated.

You could try Ipopt.Optimizer; it should handle scalar nonlinear functions.
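
That is, just construct the model with a different optimizer:

using JuMP, Ipopt
model = JuMP.Model(Ipopt.Optimizer)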

Thanks, but I want the global optimum. :rofl:

It seems that SCIP cannot handle either nonlinear objectives or nonlinear constraints. Maybe try replacing it with a different solver (some suggestions given here)?

More tests: none of SCIP, Alpine, or AmplNLWriter (with the Couenne and SHOT backends) worked. All fail with the same error: they do not support objective functions involving MathOptInterface.ScalarNonlinearFunction.

Only Ipopt ran without such errors. Are the above global NLP solvers really this limited?

Global optimization is not easy, so yes, the choice is limited. You could maybe try KNITRO+Alpine if you can get a license for it.
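
A sketch of that pairing (assumes KNITRO.jl with a valid license; the KNITRO option name is from its docs, and the Alpine options mirror those used later in this thread):

using JuMP, Alpine, KNITRO, HiGHS
knitro = optimizer_with_attributes(KNITRO.Optimizer, "outlev" => 0)  # silence KNITRO output
highs = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => false)
model = JuMP.Model(optimizer_with_attributes(
    Alpine.Optimizer,
    "nlp_solver" => knitro,  # local NLP solver
    "mip_solver" => highs,   # MIP solver for the piecewise relaxations
))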

A much smaller example that shows the same error: The solver does not support an objective function of type MathOptInterface.ScalarNonlinearFunction.

using JuMP, SCIP, Ipopt, EAGO
using Alpine, HiGHS
using AmplNLWriter, Couenne_jll, SHOT_jll

function solve_model()
    model = JuMP.Model(SCIP.Optimizer)
    # model = JuMP.Model(() -> AmplNLWriter.Optimizer(Couenne_jll.amplexe))  # same error
    @variable(model, -2 <= x[1:4] <= 2)
    @constraint(model, x[1] * x[2] + x[3] <= 3)
    @constraint(model, -3 <= x[2] - 3*x[4] <= 2)
    @objective(model, Min, (x[1] + x[2]*x[4]) / (x[1]^2 + x[2]*x[3] - 2*x[4] + 100))
    optimize!(model)
    @show termination_status(model)
end


solve_model()


The underlying cause is the change announced in “ANN: JuMP v1.15 is released”.

Some solvers have not been updated yet, so for SCIP and Alpine you’ll need to use @NLobjective instead of @objective.
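
For the MWE above, that means writing the objective with the legacy macro:

@NLobjective(model, Min, (x[1] + x[2] * x[4]) / (x[1]^2 + x[2] * x[3] - 2 * x[4] + 100))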

AmplNLWriter with Couenne etc. will work if you update your packages to AmplNLWriter v1.2.0.
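
That is, after updating, the line commented out in the MWE should work:

model = JuMP.Model(() -> AmplNLWriter.Optimizer(Couenne_jll.amplexe))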

@Shuhua,

The main issue lies with the objective function, which is a fractional function, and the second constraint, which is a two-sided constraint. Unfortunately, Alpine does not support either of these.

Here is a reformulated version in polynomial form, which seems to run on Alpine without issues. The global optimal value for this objective is -0.0267602457. I chose arbitrary bounds for the variable t, but you could choose something better based on your specific problem.

I have kept the mip_solver as CPLEX, but you could change "mip_solver" => mip_solver to "mip_solver" => mip2_solver while initializing the Alpine optimizer, if you prefer an open-source stack like Pavito+HiGHS. However, CPLEX/Gurobi seemed numerically more stable and faster.

Let me know if you still see any issues.

using Alpine
using JuMP
using Ipopt
using CPLEX
using HiGHS
using Pavito

function get_cplex()
    return optimizer_with_attributes(
        CPLEX.Optimizer,
        MOI.Silent() => true,
        "CPX_PARAM_PREIND" => 1,
    )
end

function get_highs()
    return JuMP.optimizer_with_attributes(
        HiGHS.Optimizer,
        "presolve" => "on",
        "log_to_console" => false,
    )
end

function get_ipopt()
    return optimizer_with_attributes(
        Ipopt.Optimizer,
        MOI.Silent() => true,
        "sb" => "yes",
        "max_iter" => Int(1E4),
    )
end

function get_pavito(mip_solver, cont_solver)
    return optimizer_with_attributes(
        Pavito.Optimizer,
        MOI.Silent() => true,
        "mip_solver" => mip_solver,
        "cont_solver" => cont_solver,
        "mip_solver_drives" => false,
    )
end

nlp_solver = get_ipopt() # local continuous solver
mip_solver = get_cplex() # convex mip solver
mip2_solver = get_pavito(mip_solver, nlp_solver)

const alpine = JuMP.optimizer_with_attributes(
    Alpine.Optimizer,
    "nlp_solver" => nlp_solver,
    "mip_solver" => mip_solver,
    "presolve_bt" => true,
    "apply_partitioning" => true,
    "partition_scaling_factor" => 10,
)

function nlp_test(; solver = nothing)
    m = JuMP.Model(solver)

    @variable(m, -2 <= x[1:4] <= 2)
    @variable(m, -1E3 <= t <= 1E3)
    @constraint(m, x[1] * x[2] + x[3] <= 3)
    @constraint(m, x[2] - 3*x[4] <= 2)
    @constraint(m, x[2] - 3*x[4] >= -3)
    @NLconstraint(m, t * (x[1]^2 + x[2]*x[3] - 2*x[4] + 100) == x[1] + (x[2]*x[4]))
    @objective(m, Min, t)
    
    # Original fractional objective
    # @NLobjective(m, Min, (x[1] + x[2]*x[4]) / (x[1]^2 + x[2]*x[3] - 2*x[4] + 100))
    return m
end

m = nlp_test(solver = alpine)

JuMP.optimize!(m)

Thank you. I tried your suggestions with the example on my GitHub.

  • It is true that AmplNLWriter with Couenne etc. will work if you update your packages to AmplNLWriter v1.2.0.
  • It failed even when using @NLobjective instead of @objective. The following error is reported:
    ERROR: Unrecognized function ".*" used in nonlinear expression.
    
    You must register it as a user-defined function before building
    the model. 
    
    Following the prompts, I did
    register(model, :.*, 2, .*, autodiff=true)
    register(model, :.+, 2, .+, autodiff=true)
    
    Then, I got another error:
    ERROR: Unexpected array AffExpr[.....] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.
    The above error happened at the `@NLobjective` line.

It failed even when using @NLobjective instead of @objective. The following error is reported:

The legacy nonlinear interface (the @NL macros) contains a number of limitations. You cannot use broadcasting or array operations. See:

and the JuMP documentation:

If you can post a small reproducible example like you did above, people may be able to suggest alternative syntax.

Sorry, but I just edited my previous reply and got a different error.

If you can post a small reproducible example like you did above, people may be able to suggest alternative syntax.

I will try it. But for now, I can play with Couenne.

No, you cannot register the broadcast operators (at one point an error will be thrown saying that the return must be a scalar, but the other error was triggered first).

Nor can you use array operations. Everything must be a scalar expression. So, for example, instead of sum(x .* y) you must do sum(x[i] * y[i] for i in 1:N).
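
A minimal contrast on a toy model (hypothetical data; uncomment the first objective line to reproduce the error):

using JuMP, Ipopt

N = 3
model = Model(Ipopt.Optimizer)
@variable(model, x[1:N] >= 0)
@variable(model, y[1:N] >= 0)

# @NLobjective(model, Min, sum(x .* y))                   # errors: arrays not allowed
@NLobjective(model, Min, sum(x[i] * y[i] for i in 1:N))   # scalar expansion works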

This is a major limitation, which is why we rewrote JuMP’s nonlinear interface :smile: Unfortunately, updating some of the solvers like Alpine to use the new interface is non-trivial, which is why they haven’t been updated yet.

But for now, I can play with Couenne.

:+1:


I am trying to solve a binary nonlinear program with SCIP. The model is created with JuMP using the code below, but optimizing it produces the error shown after the code. I am using Julia v1.10.3 and JuMP v1.22.1.


using JuMP
using SCIP

N1 = 5
N = 10
zp = 2
M = zp*N

Br = rand(M,N1)
Bi = rand(M,N1)
s = rand(M,1)

model = Model(SCIP.Optimizer)
@variable(model, y[1:N1], Bin)

@expression(model, yh[n=1:N1], y[n]-.5)
@expression(model, spec_r, Br * yh)
@expression(model, spec_i, Bi * yh)
@NLexpression(model, spec_sq_mag[m=1:M], spec_r[m]^2 + spec_i[m]^2)
@NLexpression(model, gamma, sum((spec_sq_mag[m]-s[m])^2 for m in 1:M))
@NLobjective(model, Min, gamma)

optimize!(model)

ERROR: LoadError: Nonlinear objective not supported by SCIP.jl!
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] set(o::SCIP.Optimizer, ::MathOptInterface.NLPBlock, data::MathOptInterface.NLPBlockData)
@ SCIP ~/.julia/packages/SCIP/S9mBb/src/MOI_wrapper/nonlinear_constraints.jl:10
[3] set(b::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, attr::MathOptInterface.NLPBlock, value::MathOptInterface.NLPBlockData)
@ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/2CULs/src/Bridges/bridge_optimizer.jl:955
[4] _pass_attribute(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap, attr::MathOptInterface.NLPBlock)
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:51
[5] pass_attributes(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap)
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:38
[6] default_copy_to(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:503
[7] copy_to
@ ~/.julia/packages/MathOptInterface/2CULs/src/Bridges/bridge_optimizer.jl:455 [inlined]
[8] optimize!
@ ~/.julia/packages/MathOptInterface/2CULs/src/MathOptInterface.jl:84 [inlined]
[9] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/cachingoptimizer.jl:316
[10] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:457
[11] optimize!(model::Model)
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:409
[12] top-level scope
@ ~/julia_code/Gen7_BPM/simple_model.jl:23
in expression starting at /home/stuart/julia_code/Gen7_BPM/simple_model.jl:23

Per the error message, SCIP does not support nonlinear objective functions. You need to use an epigraph reformulation:

using JuMP
using SCIP
N1 = 5
N = 10
zp = 2
M = zp * N
Br = rand(M, N1)
Bi = rand(M, N1)
s = rand(M, 1)
model = Model(SCIP.Optimizer)
@variable(model, y[1:N1], Bin)
@expression(model, yh[n in 1:N1], y[n] - 0.5)
@expression(model, spec_r, Br * yh)
@expression(model, spec_i, Bi * yh)
@NLexpression(model, spec_sq_mag[m in 1:M], spec_r[m]^2 + spec_i[m]^2)
@variable(model, gamma)
@NLconstraint(model, gamma >= sum((spec_sq_mag[m]-s[m])^2 for m in 1:M))
@objective(model, Min, gamma)
optimize!(model)

That fixed the error for SCIP, thanks. When I try to use Alpine instead, I get the following error.

using JuMP
#using SCIP

N1 = 5
N = 10
zp = 2
M = zp*N

Br = rand(M,N1)
Bi = rand(M,N1)
s = rand(M,1)

#model = Model(SCIP.Optimizer)

using JuMP, Alpine, Ipopt, HiGHS, Juniper
ipopt = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
highs = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => false)
juniper =  optimizer_with_attributes(
        Juniper.Optimizer,
        MOI.Silent() => true,
        "mip_solver" => highs,
        "nl_solver" => ipopt,
    )

model = Model(
    optimizer_with_attributes(
        Alpine.Optimizer,
        "nlp_solver" => ipopt,
        "mip_solver" => highs,
        "minlp_solver" => juniper
    ),
)

@variable(model, y[1:N1], Bin)

@expression(model, yh[n=1:N1], y[n]-.5)
@expression(model, spec_r, Br*yh)
@expression(model, spec_i, Bi*yh)
@NLexpression(model, spec_sq_mag[m in 1:M], spec_r[m]^2 + spec_i[m]^2)
@variable(model, gamma)
@NLconstraint(model, gamma >= sum((spec_sq_mag[m]-s[m])^2 for m in 1:M))
@objective(model, Min, gamma)

optimize!(model)

ERROR: LoadError: type Symbol has no field head
Stacktrace:
[1] getproperty
@ ./Base.jl:37 [inlined]
[2] traverse_expr_linear_to_affine(expr::Symbol, lhscoeffs::Vector{Any}, lhsvars::Vector{Any}, rhs::Float64, bufferVal::Nothing, bufferVar::Nothing, sign::Float64, coef::Float64, level::Int64)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:356
[3] traverse_expr_linear_to_affine(expr::Expr, lhscoeffs::Vector{Any}, lhsvars::Vector{Any}, rhs::Float64, bufferVal::Nothing, bufferVar::Nothing, sign::Float64, coef::Float64, level::Int64) (repeats 4 times)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:374
[4] traverse_expr_linear_to_affine(expr::Expr)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:332
[5] expr_linear_to_affine(expr::Expr)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:287
[6] expr_conversion(m::Alpine.Optimizer)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:103
[7] process_expr(m::Alpine.Optimizer)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/nlexpr.jl:10
[8] load!(m::Alpine.Optimizer)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/main_algorithm.jl:110
[9] optimize!(m::Alpine.Optimizer)
@ Alpine ~/.julia/packages/Alpine/2DP5q/src/main_algorithm.jl:151
[10] optimize!
@ ~/.julia/packages/MathOptInterface/2CULs/src/Bridges/bridge_optimizer.jl:380 [inlined]
[11] optimize!
@ ~/.julia/packages/MathOptInterface/2CULs/src/MathOptInterface.jl:85 [inlined]
[12] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{Alpine.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/cachingoptimizer.jl:316
[13] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:457
[14] optimize!(model::Model)
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:409
[15] top-level scope
@ ~/julia_code/Gen7_BPM/simple_model.jl:44
in expression starting at /home/stuart/julia_code/Gen7_BPM/simple_model.jl:44

I’ve seen that before, but I don’t know the cause: ERROR: type Symbol has no field head · Issue #226 · lanl-ansi/Alpine.jl · GitHub
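
One speculative, untested workaround until that issue is resolved: build spec_r and spec_i element-wise, so Alpine’s expression parser only ever sees scalar expressions:

# Untested sketch: scalar affine expressions instead of the matrix products
# Br * yh and Bi * yh; the rest of the model above is unchanged.
@expression(model, spec_r[m in 1:M], sum(Br[m, n] * yh[n] for n in 1:N1))
@expression(model, spec_i[m in 1:M], sum(Bi[m, n] * yh[n] for n in 1:N1))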

When I try to use MAiNGO (as per the code below) to solve that problem, it never seems to finish and repeatedly prints the following warning message.

using MAiNGO
model=Model(optimizer_with_attributes(MAiNGO.Optimizer, "epsilonA"=> 1e-8))

Warning: Could not retrieve Farkas’ values from CLP. Continuing with parent LBD…

I have a MWE for Alpine: ERROR: type Symbol has no field head · Issue #226 · lanl-ansi/Alpine.jl · GitHub

I have no experience using MAiNGO. You should contact their developers.

When I write the epigraph formulation of the model (built with the legacy interface) to a .nl file, SCIP generates an error when I read the model back in from the .nl file and try to solve it. Is it possible for JuMP to reconstruct the legacy-interface formulation when it reads in the .nl file?

write_to_file(model,"model.nl")

using JuMP

model = read_from_file("model.nl")

using SCIP
set_optimizer(model, SCIP.Optimizer)

optimize!(model)

$ julia solve_model_from_file.jl
ERROR: LoadError: Nonlinear objective not supported by SCIP.jl!
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] set(o::SCIP.Optimizer, ::MathOptInterface.NLPBlock, data::MathOptInterface.NLPBlockData)
@ SCIP ~/.julia/packages/SCIP/XjNY6/src/MOI_wrapper/nonlinear_constraints.jl:10
[3] set(b::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, attr::MathOptInterface.NLPBlock, value::MathOptInterface.NLPBlockData)
@ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/2CULs/src/Bridges/bridge_optimizer.jl:955
[4] _pass_attribute(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap, attr::MathOptInterface.NLPBlock)
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:51
[5] pass_attributes(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}, index_map::MathOptInterface.Utilities.IndexMap)
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:38
[6] default_copy_to(dest::MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, src::MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/copy.jl:503
[7] copy_to
@ ~/.julia/packages/MathOptInterface/2CULs/src/Bridges/bridge_optimizer.jl:455 [inlined]
[8] optimize!
@ ~/.julia/packages/MathOptInterface/2CULs/src/MathOptInterface.jl:84 [inlined]
[9] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{SCIP.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
@ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/2CULs/src/Utilities/cachingoptimizer.jl:316
[10] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:457
[11] optimize!(model::Model)
@ JuMP ~/.julia/packages/JuMP/Gwn88/src/optimizer_interface.jl:409
[12] top-level scope
@ ~/julia_code/Gen7_BPM/solve_model_from_file.jl:8
in expression starting at /home/stuart/julia_code/Gen7_BPM/solve_model_from_file.jl:8