How to register a function with a parameter in JuMP

Here I have some variables and parameters:

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])
p = [4, 6]

I want to register a function with a parameter in JuMP so I can do something like the code below.

function f(x, p)
    return (x - p)^2
end

register(model, :f, 2, f, ∇f)

@NLobjective(model, Min, sum(f(x[i], p[i]) for i in 1:2))

Please read the first post of "Please read: make it easier to help you - #81".

Use an @NLparameter for p:

model = Model(Ipopt.Optimizer)
@variable(model, x >= 0)
@NLparameter(model, p == 1)
f(x, p) = x + p
register(model, :f, 2, f; autodiff = true)
@NLobjective(model, Min, f(x, p))

julia> optimize!(model)

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0100000e+00 0.00e+00 0.00e+00   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.0001000e+00 0.00e+00 9.90e-03  -8.0 9.90e-03    -  1.00e+00 1.00e+00f  1
   2  1.0000010e+00 0.00e+00 1.01e-04 -10.0 1.01e-04    -  1.00e+00 9.80e-01f  1
   3  9.9999999e-01 0.00e+00 9.90e-07 -11.0 9.90e-07    -  1.00e+00 1.00e+00f  1
   4  9.9999999e-01 0.00e+00 9.00e-11 -11.0 9.00e-11    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 4

                                   (scaled)                 (unscaled)
Objective...............:   9.9999999000999995e-01    9.9999999000999995e-01
Dual infeasibility......:   8.9989682372504376e-11    8.9989682372504376e-11
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   9.9999109013750533e-12    9.9999109013750533e-12
Overall NLP error.......:   8.9989682372504376e-11    8.9989682372504376e-11


Number of objective function evaluations             = 5
Number of objective gradient evaluations             = 5
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 0
Total CPU secs in IPOPT (w/o function evaluations)   =      1.509
Total CPU secs in NLP function evaluations           =      0.022

EXIT: Optimal Solution Found.

julia> value(x)
0.0

julia> objective_value(model)
0.99999999001
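
If p changes later, the model does not have to be rebuilt: update the parameter and re-solve. A minimal sketch, reusing the model and p defined above (set_value works on @NLparameter values):

set_value(p, 2.0)   # p now enters the objective as 2 instead of 1
optimize!(model)    # re-solve with the updated parameter value
value(x)            # x is re-optimized against the new value of p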

Thanks a lot for your solution; it works well.
In appreciation of your advice, I am also updating my question with the working version:

model = Model(optimizer_with_attributes(Ipopt.Optimizer))
p0 = [4, 6]
@variable(model, x[1:2])
@NLparameter(model, p[i in 1:2] == p0[i])

function f(x, p)
    return (x - p)^2
end

function ∇f(g, x, p)
    g[1] = 2 * x - 2 * p   # ∂f/∂x
    g[2] = 2 * p - 2 * x   # ∂f/∂p (the gradient vector covers every registered argument)
end

register(model, :f, 2, f, ∇f)
@NLobjective(model, Min, sum(f(x[i], p[i]) for i in 1:2))
optimize!(model)
value.(x)
--------------------------------------------------------
EXIT: Optimal Solution Found.
2-element Array{Float64,1}:
 4.0
 6.0
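
For reference, a sketch of the same model that skips the hand-written gradient and lets JuMP differentiate the registered function automatically, using register with autodiff = true as in the earlier answer:

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
p0 = [4, 6]
@variable(model, x[1:2])
@NLparameter(model, p[i in 1:2] == p0[i])
f(x, p) = (x - p)^2
register(model, :f, 2, f; autodiff = true)   # forward-mode AD supplies the gradient
@NLobjective(model, Min, sum(f(x[i], p[i]) for i in 1:2))
optimize!(model)
value.(x)   # expected to recover [4.0, 6.0], as above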