Hi @Sushrut_Deshpande, welcome to JuMP!
See the documentation: Nonlinear Modeling · JuMP
The correct syntax is:
julia> using JuMP, Ipopt
julia> f(x...) = x[1]^2 + x[2]^2
f (generic function with 1 method)
julia> function ∇f(g, x...)
           g[1] = 2 * x[1]
           g[2] = 2 * x[2]
           return
       end
∇f (generic function with 1 method)
julia> function ∇²f(H, x...)
           # Set only the lower-triangular entries; the upper triangle is ignored
           H[1, 1] = 2.0
           H[2, 2] = 2.0
           return
       end
∇²f (generic function with 1 method)
julia> model = Model(Ipopt.Optimizer)
A JuMP Model
├ solver: Ipopt
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none
julia> @operator(model, op_fun, 2, f, ∇f, ∇²f)
NonlinearOperator(f, :op_fun)
julia> lb = [-1.0, -1.0]
2-element Vector{Float64}:
-1.0
-1.0
julia> ub = [1.0, 1.0]
2-element Vector{Float64}:
1.0
1.0
julia> @variable(model, lb[i] <= x[i in 1:2] <= ub[i])
2-element Vector{VariableRef}:
x[1]
x[2]
julia> @objective(model, Min, op_fun(x...))
op_fun(x[1], x[2])
julia> optimize!(model)
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        3

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        2
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 0.00e+00 0.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  0.0000000e+00 0.00e+00 0.00e+00  -1.7 0.00e+00    -  1.00e+00 1.00e+00   0
   2  0.0000000e+00 0.00e+00 0.00e+00  -3.8 0.00e+00    -  1.00e+00 1.00e+00T  0
   3  0.0000000e+00 0.00e+00 0.00e+00  -5.7 0.00e+00    -  1.00e+00 1.00e+00T  0
   4  0.0000000e+00 0.00e+00 0.00e+00  -8.6 0.00e+00    -  1.00e+00 1.00e+00T  0
Number of Iterations....: 4
                                   (scaled)                 (unscaled)
Objective...............:   0.0000000000000000e+00    0.0000000000000000e+00
Dual infeasibility......:   0.0000000000000000e+00    0.0000000000000000e+00
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059035596800808e-09    2.5059035596800808e-09
Overall NLP error.......:   2.5059035596800808e-09    2.5059035596800808e-09
Number of objective function evaluations             = 5
Number of objective gradient evaluations             = 5
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 4
Total seconds in IPOPT                               = 1.762
EXIT: Optimal Solution Found.
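A few notes on the callbacks: @operator(model, op_fun, 2, f, ∇f, ∇²f) registers op_fun as a 2-argument operator, and the derivative callbacks are optional (if you omit them, JuMP computes derivatives with automatic differentiation). ∇f must fill g in-place, and ∇²f needs to set only the lower triangle of H. As a sketch, if your objective had a cross term, say f(x) = x[1]^2 + x[1]*x[2] + x[2]^2 (a hypothetical variant, not the example above), the Hessian callback would be:

function ∇²f_cross(H, x...)  # hypothetical name, just for illustration
    H[1, 1] = 2.0
    H[2, 1] = 1.0  # mixed partial ∂²f/∂x₁∂x₂ goes in the lower triangle
    H[2, 2] = 2.0
    return
end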
But if function tracing works, you can skip the explicit derivatives and do:
using JuMP, Ipopt
f(x...) = x[1]^2 + x[2]^2
model = Model(Ipopt.Optimizer)
lb = [-1.0, -1.0]
ub = [1.0, 1.0]
@variable(model, lb[i] <= x[i in 1:2] <= ub[i])
@objective(model, Min, f(x...))
optimize!(model)
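Tracing works here because f is a generic function built from operations JuMP supports: JuMP calls f(x[1], x[2]) with the decision variables themselves and records the resulting expression. It fails if f restricts its arguments to Float64, or if it branches on the value of x. As a hypothetical illustration of a function that cannot be traced:

# This errors during tracing: the comparison needs the numeric value of
# x[1], which is unknown at model-building time. Register a function like
# this with @operator instead.
g(x...) = x[1] > 1 ? x[1]^2 : x[1]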
But even simpler, since the objective is an algebraic expression, you can write it directly:
using JuMP, Ipopt
model = Model(Ipopt.Optimizer);
@variable(model, -1 <= x[i in 1:2] <= 1)
@objective(model, Min, x[1]^2 + x[2]^2)
optimize!(model)
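Whichever formulation you use, you query the result the same way. For this problem the minimizer is x = [0, 0] with objective value 0 (up to solver tolerance):

is_solved_and_feasible(model)  # check that Ipopt found an optimal solution
value.(x)                      # ≈ [0.0, 0.0]
objective_value(model)         # ≈ 0.0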