Your first example works because you have mixed the legacy nonlinear syntax (`register`) with the new function-tracing syntax. So when you did:
```julia
julia> @objective(model, Min, norm_diff(A...))
10 A[1,1]² + 30 A[1,2]*A[1,1] + 25 A[1,2]² + 10 A[2,1]² + 30 A[2,2]*A[2,1] + 25 A[2,2]² - 2 A[1,1] - 6 A[1,2] - 6 A[2,1] - 8 A[2,2] + 2
```
we actually evaluated `norm_diff` with JuMP `VariableRef`s as the input and got back an algebraic expression. We did not use the registered user-defined operator.
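For reference, here is a minimal self-contained reconstruction of the tracing behaviour (the definition of `norm_diff` and the value of `M` are assumptions on my part, reverse-engineered so that the traced expression matches the expansion above):

```julia
using JuMP, LinearAlgebra

n = 2
M = [1.0 3.0; 3.0 4.0]  # assumed data, chosen to match the expansion above
I_n = Matrix{Float64}(I, n, n)

# Splatted form: takes n^2 scalar arguments and reshapes them back into a matrix
norm_diff(x...) = sum((reshape(collect(x), n, n) * M - I_n) .^ 2)

model = Model()
@variable(model, A[1:n, 1:n])
norm_diff(A...)  # returns a quadratic expression: the call was traced, not treated as a black box
```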
The correct syntax is:
```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, A[1:n,1:n])
@operator(model, op_norm_diff, n^2, norm_diff)
@objective(model, Min, op_norm_diff(A...))
optimize!(model)
```
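With the operator version, the solver treats `op_norm_diff` as a black box: JuMP evaluates it numerically and computes derivatives via automatic differentiation (ForwardDiff.jl by default), instead of building an explicit algebraic expression.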
Your second function looks pretty good, but the JuMP code should be:
```julia
model = Model(Ipopt.Optimizer)
@variable(model, A[1:n,1:n])
@operator(model, op_f, n^2, f)
@objective(model, Min, op_f(A...))
optimize!(model)
```
> This works but clearly `f` is type-unstable, which makes it much slower than needed.
Do you have evidence that the type stability makes Ipopt run slower? It probably doesn’t really matter.
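If you want to check, one quick way is to benchmark the function on plain numbers, outside of any solve (a sketch, not from the thread; `f_unstable` is a hypothetical stand-in that closes over a non-`const` global, the usual source of this kind of instability):

```julia
using LinearAlgebra, BenchmarkTools

M = rand(2, 2)  # non-const global: a typical cause of type instability
f_unstable(x...) = sum((reshape(collect(x), 2, 2) * M - I(2)) .^ 2)

const M_stable = copy(M)  # same data, but a const global is type-stable to reference
f_stable(x...) = sum((reshape(collect(x), 2, 2) * M_stable - I(2)) .^ 2)

x0 = rand(4)
@btime f_unstable($x0...)
@btime f_stable($x0...)
```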
I would tweak your example to be:
```julia
norm_diff_data2(A, M, I_n) = sum((A * M - I_n).^2)
g(args...) = norm_diff_data2(reshape(collect(args), n, n), M, I_n)

model = Model(Ipopt.Optimizer)
@variable(model, A[1:n,1:n])
@operator(model, op_g, n^2, g)
@objective(model, Min, op_g(A...))
optimize!(model)
```
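The `g` wrapper is needed because a user-defined operator receives its arguments as individual scalars; `reshape(collect(args), n, n)` rebuilds the matrix before calling `norm_diff_data2`.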
but I might write it as:
```julia
norm_diff_data2(A, M, I_n) = sum((A * M - I_n).^2)

model = Model(Ipopt.Optimizer)
@variable(model, A[1:n,1:n])
@objective(model, Min, norm_diff_data2(A, M, I_n))
optimize!(model)
```
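Here there is no operator at all: since `norm_diff_data2` is plain arithmetic, JuMP traces the call and passes Ipopt an explicit quadratic objective, which is generally preferable to a black-box operator.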
or even:
```julia
model = Model(Ipopt.Optimizer)
@variable(model, A[1:n,1:n])
@variable(model, diff[1:n,1:n])
@constraint(model, diff .== A * M .- I_n)
@objective(model, Min, sum(diff.^2))
optimize!(model)
```
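This last version adds n^2 auxiliary variables, but in exchange all constraints are affine and the objective is a simple sum of squares, a structure that solvers like Ipopt typically handle well.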
Sorry for the delayed response. The forum did not allow me to edit the post at this point, so my reply will contain the reproducible example.
No worries, we’re not StackOverflow, so it’s even encouraged to post replies rather than editing old posts.