NLP: Attribute ScalarNonlinearFunction is not supported by the model

Hi, I am working on an optimization problem as described in the PDF document in the GitHub repository. The nonlinearity is introduced mainly by the equation G in (16).

I implemented the model in JuMP, closely following the PDF description. The code and example data are all provided in the GitHub repository. Since there is a nonlinear term in the objective (10), I chose the SCIP solver and also the Alpine solver. Both reported the error below:

ERROR: MathOptInterface.UnsupportedAttribute{MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarNonlinearFunction}}: 
Attribute MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarNonlinearFunction}() is not supported by the model.

After some Googling, one suggestion was to replace the nonlinear term in the objective with an auxiliary variable (see f_obj_aux in the code nlp.jl) and move that term into the constraints. However, I still got a similar error:

ERROR: MathOptInterface.UnsupportedConstraint{MathOptInterface.ScalarNonlinearFunction, MathOptInterface.LessThan{Float64}}: `MathOptInterface.ScalarNonlinearFunction`-in-`MathOptInterface.LessThan{Float64}` constraint is not supported by the model.
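
For reference, the rewrite I tried looks roughly like this (simplified; nonlinear_term is a placeholder for the actual term from objective (10), and f_obj_aux is the auxiliary variable in nlp.jl):

@variable(model, f_obj_aux)
# nonlinear_term: placeholder for the nonlinear term moved out of objective (10)
@constraint(model, f_obj_aux >= nonlinear_term)
@objective(model, Min, f_obj_aux)  # the objective itself is now linear
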
  • Is it due to the solver? Perhaps there exists some solver capable of handling it.
  • Should I reformulate the problem into a better-posed form? But how? It can probably be turned into some recognized form, but I don't know how; I am no expert in optimization.
    Any suggestion is appreciated.

You could try Ipopt.Optimizer; it should handle scalar nonlinear functions.
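
For example, a minimal sketch (a hypothetical toy model, not your problem):

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x >= 0.5)
@objective(model, Min, x + 1 / x)  # a ScalarNonlinearFunction objective; Ipopt accepts it
optimize!(model)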

Thanks. But I want the global optimum. :rofl:

It seems that SCIP cannot handle either nonlinear objectives or nonlinear constraints. Maybe try replacing it with a different solver (some suggestions given here)?

More tests.
None of SCIP, Alpine, or AmplNLWriter (with the Couenne and SHOT backends) worked. All reported the same error: they do not support objective functions involving MathOptInterface.ScalarNonlinearFunction.

Only Ipopt ran without any such errors. Are the global NLP solvers above really this limited?

Global optimization is not easy, so yes, the choice is limited. You could try KNITRO+Alpine if you can get a license for it.
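
A sketch of how that combination might be set up (assuming you have KNITRO.jl and a license; HiGHS stands in here for the MIP solver that Alpine also requires):

using JuMP, Alpine, KNITRO, HiGHS
nlp_solver = optimizer_with_attributes(KNITRO.Optimizer, MOI.Silent() => true)
mip_solver = optimizer_with_attributes(HiGHS.Optimizer, MOI.Silent() => true)
model = Model(optimizer_with_attributes(
    Alpine.Optimizer,
    "nlp_solver" => nlp_solver,  # local NLP subproblems solved by KNITRO
    "mip_solver" => mip_solver,  # piecewise relaxations solved by HiGHS
))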

A much smaller example that shows the same error: The solver does not support an objective function of type MathOptInterface.ScalarNonlinearFunction.

using JuMP, SCIP, Ipopt, EAGO
using Alpine, HiGHS
using AmplNLWriter, Couenne_jll, SHOT_jll

function solve_model()
    model = JuMP.Model(SCIP.Optimizer)
    # model = JuMP.Model(() -> AmplNLWriter.Optimizer(Couenne_jll.amplexe))  # same error
    @variable(model, -2 <= x[1:4] <= 2)
    @constraint(model, x[1] * x[2] + x[3] <= 3)
    @constraint(model, -3 <= x[2] - 3*x[4] <= 2)
    @objective(model, Min, (x[1] + x[2]*x[4]) / (x[1]^2 + x[2]*x[3] - 2*x[4] + 100))
    optimize!(model)
    @show termination_status(model)
end

solve_model()


The underlying cause is the change announced in “ANN: JuMP v1.15 is released”.

Some solvers have not been updated yet, so for SCIP and Alpine you’ll need to use @NLobjective instead of @objective.
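
For the MWE above, that means writing the objective as:

# legacy interface: the expression is passed via the old MOI.NLPBlock API,
# which SCIP and Alpine still understand
@NLobjective(model, Min, (x[1] + x[2]*x[4]) / (x[1]^2 + x[2]*x[3] - 2*x[4] + 100))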

AmplNLWriter with Couenne etc. will work if you update your packages to AmplNLWriter v1.2.0.
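
That is, the pattern from the commented-out line in your MWE:

using JuMP, AmplNLWriter, Couenne_jll
model = Model(() -> AmplNLWriter.Optimizer(Couenne_jll.amplexe))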

@Shuhua,

The main issue lies with the objective function, which is a fractional function, and the second constraint, which is a two-sided constraint. Unfortunately, Alpine does not support either of these.

Here is a reformulated version in polynomial form, which runs without issues on Alpine. The global optimal value for this objective is -0.0267602457. I chose arbitrary bounds for the variable t, but you could choose tighter ones based on your specific problem.
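
In other words, with the auxiliary variable t the fractional objective becomes

$$\min_{x,\,t}\ t \quad \text{s.t.}\quad t\left(x_1^2 + x_2 x_3 - 2x_4 + 100\right) = x_1 + x_2 x_4,$$

which is an exact reformulation here since the denominator is at least 100 - 4 - 4 = 92 > 0 over the box x ∈ [-2, 2]^4.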

I have kept the mip_solver as CPLEX, but you could change "mip_solver" => mip_solver to "mip_solver" => mip2_solver when initializing the Alpine optimizer if you prefer an open-source option like Pavito+HiGHS. However, CPLEX/Gurobi seemed numerically more stable and faster.

Let me know if you still see any issues.

using Alpine
using JuMP
using Ipopt
using CPLEX
using HiGHS
using Pavito

function get_cplex()
    return optimizer_with_attributes(
        CPLEX.Optimizer,
        MOI.Silent() => true,
        "CPX_PARAM_PREIND" => 1,
    )
end

function get_highs()
    return JuMP.optimizer_with_attributes(
        HiGHS.Optimizer,
        "presolve" => "on",
        "log_to_console" => false,
    )
end

function get_ipopt()
    return optimizer_with_attributes(
        Ipopt.Optimizer,
        MOI.Silent() => true,
        "sb" => "yes",
        "max_iter" => Int(1E4),
    )
end

function get_pavito(mip_solver, cont_solver)
    return optimizer_with_attributes(
        Pavito.Optimizer,
        MOI.Silent() => true,
        "mip_solver" => mip_solver,
        "cont_solver" => cont_solver,
        "mip_solver_drives" => false,
    )
end

nlp_solver = get_ipopt() # local continuous solver
mip_solver = get_cplex() # convex mip solver
mip2_solver = get_pavito(mip_solver, nlp_solver)

const alpine = JuMP.optimizer_with_attributes(
    Alpine.Optimizer,
    "nlp_solver" => nlp_solver,
    "mip_solver" => mip_solver,
    "presolve_bt" => true,
    "apply_partitioning" => true,
    "partition_scaling_factor" => 10,
)

function nlp_test(; solver = nothing)
    m = JuMP.Model(solver)

    @variable(m, -2 <= x[1:4] <= 2)
    @variable(m, -1E3 <= t <= 1E3)
    @constraint(m, x[1] * x[2] + x[3] <= 3)
    @constraint(m, x[2] - 3*x[4] <= 2)
    @constraint(m, x[2] - 3*x[4] >= -3)
    @NLconstraint(m, t * (x[1]^2 + x[2]*x[3] - 2*x[4] + 100) == x[1] + (x[2]*x[4]))
    @objective(m, Min, t)
    
    # Original fractional objective
    # @NLobjective(m, Min, (x[1] + x[2]*x[4]) / (x[1]^2 + x[2]*x[3] - 2*x[4] + 100))
    return m
end

m = nlp_test(solver = alpine)

JuMP.optimize!(m)

Thank you. I tried your suggestions with the example on my GitHub.

  • It is true that AmplNLWriter with Couenne etc. will work after updating the packages to AmplNLWriter v1.2.0.
  • It failed even when using @NLobjective instead of @objective. The following error is reported:
    ERROR: Unrecognized function ".*" used in nonlinear expression.
    
    You must register it as a user-defined function before building
    the model. 
    
    Following the error message's suggestion, I did
    register(model, :.*, 2, .*, autodiff=true)
    register(model, :.+, 2, .+, autodiff=true)
    
    Then, I got another error:
    ERROR: Unexpected array AffExpr[.....] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.
    The above error happened at the `@NLobjective` line.

It failed even when using @NLobjective instead of @objective. The following error is reported:

The legacy nonlinear interface (the @NL macros) has a number of limitations: you cannot use broadcasting or array operations. See the “ANN: JuMP v1.15 is released” announcement above and the JuMP documentation.

If you can post a small reproducible example like you did above, people may be able to suggest alternative syntax.

Sorry, but I just edited my previous reply and got a different error.

If you can post a small reproducible example like you did above, people may be able to suggest alternative syntax.

I will try it. But for now, I can play with Couenne.

No, you cannot register the broadcast operators. (At some point an error would be thrown saying that the return value must be a scalar, but the other error was triggered first.)

Nor can you use array operations: everything must be a scalar expression. So, for example, instead of sum(x .* y) you must write sum(x[i] * y[i] for i in 1:N).
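
For instance, a minimal sketch of the scalar form the legacy macros accept:

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
N = 3
@variable(model, 1 <= x[1:N] <= 2)
@variable(model, 1 <= y[1:N] <= 2)
# OK: a scalar expression built with a generator
@NLobjective(model, Min, sum(x[i] * y[i] for i in 1:N))
# ERROR: broadcasting is not allowed inside the @NL macros
# @NLobjective(model, Min, sum(x .* y))
optimize!(model)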

This is a major limitation, which is why we rewrote JuMP’s nonlinear interface :smile: Unfortunately, updating some of the solvers like Alpine to use the new interface is non-trivial, which is why they haven’t been updated yet.

But for now, I can play with Couenne.

:+1:
