Using long & complex non-linear functions in non-linear constraints

Hi There,

I am new to JuMP and I am trying to set up an optimization model that I already got working in Matlab, this time taking full advantage of JuMP.

Out of the many questions I have, I'll start with a few simple ones, hoping one of the experts can guide me.

I have created a number of variables and passed them through a number of steps that automatically created an affine (vector) expression named Fdry (all the variables appearing in the AffExpr below are also defined in the model):

10-element Array{JuMP.GenericAffExpr{Float64,JuMP.Variable},1}:
5.7839120370370365 FH2 + 0.05155351783058837 FHOG + 3 EpsSGP[1] + EpsSGP[2]
0.0004050925925925926 FH2 + 0.10310703566117674 FHOG + EpsSGP[1] - EpsSGP[2]
0.6483910407410187 FNG + 0.0008680555555555554 FH2 + 0.38398348918167985 FHOG - 0.18084490740740738 FO2 - EpsSGP[1]
0
0
0
0.026172700828610014 FNG + 0.0018518518518518517 FH2 + 0.021742696145050644 FHOG + 0.18084490740740738 FO2 + EpsSGP[2]
0.6430041152263374 FSteam + 0.36168981481481477 FO2 - 0.04105095814611863 FNG - 0.034102652044934206 FHOG - EpsSGP[1] - EpsSGP[2]
0.02811199401389529 FNG + 0.07490726140784491 FHOG
0

As this is a vector expression, I want to use it to define a number of non-linear constraints, in the following way:

for i = 1:10
    @NLconstraint(m, ySG[i]*sum(Fdry[j] for j = 1:10) - Fdry[i] == 0)
end

where ySG[1:10] is also defined as a variable.

Now, this gives me the following error:
MethodError: no method matching parseNLExpr_runtime(::JuMP.Model, ::JuMP.GenericAffExpr{Float64,JuMP.Variable}, ::Array{ReverseDiffSparse.NodeData,1}, ::Int64, ::Array{Float64,1})

Is there a way to circumvent this? Are AffExpr{Float64,JuMP.Variable} objects not allowed in NL constraints?

The Fdry vector involves quite a bit of linear algebra that I skipped (I only showed the final result), and spelling it all out in scalar form would be tedious. I am sure I am missing something.

Thanks,

Stan

Correct, this is not currently supported. See the discussion at Nonlinear Modeling — JuMP -- Julia for Mathematical Optimization 0.18 documentation.
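
One workaround (discussed in that section of the docs, if I recall correctly) is to pin each affine expression to an auxiliary variable with a linear constraint and then reference only those variables inside @NLconstraint. A minimal sketch, assuming m, Fdry and ySG are as in your post (FdryAux is just a name I picked):

@variable(m, FdryAux[1:10])
@constraint(m, [i = 1:10], FdryAux[i] == Fdry[i])   # linear constraints do accept AffExpr
@NLconstraint(m, [i = 1:10], ySG[i]*sum(FdryAux[j] for j = 1:10) - FdryAux[i] == 0)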

Hi Miles,

I’ve rewritten the code in scalar form (avoiding any vector operations) and introduced auxiliary variables where needed. Defining the constraints and the objective function is no longer a problem.

I also use a few external functions (in scalar form) and register them with the model. When I get to the solve step I get the following error:

no method matching fun1(::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64)
Closest candidates are:
fun1(::Any, ::Any, ::Any, ::Any, ::Any, ::Any, ::Any)

I would assume the ::Any type is more general than Float64, so the function should work, but if not, how do I define the function to work strictly on Float64?

Note the differing number of arguments.
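
That is, fun1 is defined with 7 arguments but is being called with 8, so the arity you pass to JuMP.register and the number of arguments in the @NLconstraint call must both match the method you actually wrote. The ::Any signature is fine; in fact, with autodiff=true the function needs to accept generic number types rather than only Float64. A minimal sketch with a stand-in body, reusing the variable names from your first post:

fun1(a, b, c, d, e, f, g) = a + b + c + d + e + f + g   # stand-in body, 7 scalar arguments
JuMP.register(m, :fun1, 7, fun1, autodiff=true)         # the 7 here must match the definition
@NLconstraint(m, fun1(FNG, FH2, FHOG, FO2, FSteam, EpsSGP[1], EpsSGP[2]) <= 0)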

Got it, fixed it! It was a typo indeed.

I feel I am getting there :slight_smile: … Now I get a vague error when calling solve(m):

ArgumentError: invalid NLopt arguments
in solve at JuMP\src\solvers.jl:150
in #solve#116 at JuMP\src\solvers.jl:172
in at base<missing>
in #solvenlp#165 at JuMP\src\nlp.jl:1271
in optimize! at NLopt\src\NLoptSolverInterface.jl:203
in optimize! at NLopt\src\NLopt.jl:529
in chks at NLopt\src\NLopt.jl:276
in chk at NLopt\src\NLopt.jl:259

I want to use the SQP algorithm: m = Model(solver=NLoptSolver(algorithm=:LD_SLSQP))

Can we see the block of code, including the function and constraints you’re passing to JuMP? I think there is some nuance to what can be passed to NLopt: see, for example, https://github.com/JuliaOpt/NLopt.jl/issues/76
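
In the meantime, it can help to check that a minimal model solves with the same solver settings and then add your registered functions and constraints back one at a time. A sketch with a made-up objective and constraint (nothing from your model):

using JuMP, NLopt

m = Model(solver=NLoptSolver(algorithm=:LD_SLSQP))
@variable(m, x >= 0, start = 0.5)
@variable(m, y >= 0, start = 0.5)
@NLobjective(m, Min, (x - 1)^2 + (y - 2)^2)
@NLconstraint(m, x + y >= 1)
status = solve(m)   # should run without the ArgumentError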