Question regarding size of variable with JuMP and Ipopt

Hello everybody,

I am new to JuMP and optimization in Julia, and I am wondering if someone could explain why the following occurs. If I run the toy problem below with size_overall = 100 it takes some time, call it t1. The second time I run it, it takes less than a second. Now if I rerun the code with size_overall = 1000 it again takes time, even longer than t1. Rerunning it again with size_overall = 100 takes only seconds. I have also played around with bigger and smaller sizes.

What I need to do with a similar code is to run it consecutively many times, with the size of x changing each time. However, from this toy example it seems that every time the size of the input vector changes there is a large overhead. Would someone know why this is the case and how I can avoid it? It seems to be caused by optimize!(model), and I should probably note that I am running it in a Jupyter notebook.

Thank you very much for your help and time in advance!

using Zygote
using LinearAlgebra
using JuMP
using Ipopt
size_overall = 1000

function cost_func(x...)
    dot(x, x)
end

function grad_cost_func(x...)
    gradient(tx -> cost_func(tx), x)[1]
end

model = Model(Ipopt.Optimizer)

@variable(model,x[1:size_overall])

JuMP.register(model, :f, size_overall, cost_func, grad_cost_func)
@NLobjective(model, Min, f(x...))
optimize!(model)
value.(x)

Could I also add that it takes an unreasonable amount of time to run the first time if the size is above 100. Am I doing something wrong? I had this code in Matlab before and fmincon was spitting out solutions in seconds.

Try letting JuMP figure out the automatic differentiation.

JuMP.register(model, :f, size_overall, cost_func; autodiff = true)
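
For the toy problem above, the whole model would then look roughly like this (an untested sketch, reusing the names from your post; the sum of squares is the same as dot(x, x)):

using JuMP
using Ipopt

size_overall = 1000

# Same objective as before: sum of squares of the splatted scalar arguments.
function cost_func(x...)
    return sum(xi^2 for xi in x)
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:size_overall])

# autodiff = true lets JuMP differentiate cost_func itself (forward mode),
# so no hand-written or Zygote gradient is needed.
JuMP.register(model, :f, size_overall, cost_func; autodiff = true)
@NLobjective(model, Min, f(x...))

optimize!(model)
value.(x)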

Even better, try writing your problem algebraically instead of using a user-defined function:

using JuMP
using Ipopt
size_overall = 1000
model = Model(Ipopt.Optimizer)
@variable(model, x[1:size_overall])
@NLobjective(model, Min, sum(x[i]^2 for i in 1:size_overall))
optimize!(model)

The reason is that in the 1000-variable case, cost_func(x...) is called with 1000 scalar arguments. Zygote then has to try to differentiate a function of 1000 arguments, and that is going to struggle. If you write the problem out algebraically, JuMP avoids these issues and can have much better performance. (It will also compute second derivatives, which the user-defined function approach will not.)


Hi,

Thank you for the input! It did in fact increase the speed dramatically. I will remove the gradient of the cost function, but in general do you have advice on what to do if you can't write out the cost function analytically? My actual code (this was a toy example) has cost and constraint functions which take vector inputs, which is not supported by JuMP's nonlinear optimization. Is there any other way, apart from doing it the way I have with cost_func(x...), that will speed things up? Any other code form or pre-compilation technique? I am quite new to the language, so any advice would be very useful. Thank you!

Note that this limitation refers to user-defined functions, not equations (constraints) that you define with @NLconstraint.
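For example, a coupled expression like this is perfectly fine directly inside @NLconstraint; the restriction only bites when you wrap the expression in a Julia function and register it. A small self-contained sketch, with a placeholder constraint and objective:

using JuMP, Ipopt

n = 100
model = Model(Ipopt.Optimizer)
@variable(model, x[1:n])

# Written algebraically, this coupled constraint needs no register() call,
# and JuMP passes exact first and second derivatives to Ipopt.
@NLconstraint(model, sum(x[i]^2 for i in 1:n) <= 1)
@NLobjective(model, Min, sum((x[i] - 1)^2 for i in 1:n))

optimize!(model)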

If you can provide a more realistic example, people may have suggestions.


Both constraints and objective functions accept vector inputs in JuMP. For linear and quadratic cases you can define them as:

@variable(model, myVariable1[1:n])
@variable(model, myVariable2[1:n])
@constraint(model, constraint1, myVariable1 .== myVariable2.^2)
@objective(model, Min, sum(myVariable1))

If your model is nonlinear then you need @NLconstraint and @NLobjective, where arrays are handled slightly differently:

@NLconstraint(model, [i = 1:n], myVariable1[i] == myVariable2[i]^3)
@NLobjective(model, Min, exp(sum(myVariable1[i] for i in 1:n)))

You can find more details here: Nonlinear Modeling · JuMP


Thank you, I have seen this before, but I am still failing to define it in the case where I can't simply do the operations entry by entry. I have tried this and it didn't work:

using LinearAlgebra
using JuMP
using Ipopt

size_overall = 100

function cost_func(x...)
    dot(x,x)   
end


model = Model(Ipopt.Optimizer)
@variable(model,x[1:size_overall])
JuMP.register(model, :f, size_overall, cost_func; autodiff = true)
@NLobjective(model, Min, f(x...))


function func_con(x)
    func_const = x .+ dot(x, x)
    return func_const
end

register(model, :func_con, 1, func_con; autodiff = true) # have tried with size_overall in the place of 1 too
@NLconstraint(model, func_con(x) == 0)

@time(optimize!(model))

What I am interested in doing with this toy constraint is finding a way to define my constraints for functions where doing it index by index is not an option, as it would require a significant rewrite and complications. I want to find a way to pass x as a whole vector into the nonlinear constraint function. Sorry, I might not be describing it well...

I will in the future. For now I am just trying to see what I can achieve with JuMP on toy examples before fully developing my script. Thank you for the help!

Never mind, I got it to work like this:

function func_con(y...)
    x = y
    func_const = x .+ dot(x, x)
    return func_const
end

register(model, :func_con, size_overall, func_con; autodiff = true)
@NLconstraint(model, [i = 1:size_overall], func_con(x...)[i] == 0)

But maybe there is a better way?

This problem can be completely solved using JuMP's own modeling. I have not tried this myself, but I think something like this should work (in the constraint I am going to replace your dot product with an exponential, because the sizes in the summation do not match, but the idea is the same):

using JuMP, Ipopt

size_overall = 100

model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x[1:size_overall] <= 100)
@NLconstraint(model, [i = 1:size_overall], 0 == x[i] + exp(x[i]))
@objective(model, Min, x' * x)

optimize!(model)

Hi, yes, for simple examples like this it does work. I was asking what happens if the constraint is not as simple as 0 == x[i] + exp(x[i]) and thus cannot be broken into components easily. Say you have f(x) = 0 where f is user specified and thus not known in advance. How can one call it properly? If you look at my comment above (which I will copy below too), I did manage to do it with splatting (f(x...)), but I am just wondering if this is OK and if there is a better way; I sketch one variant after the code below.

function func_con(y...)
    x = y
    func_const = x .+ dot(x, x) # just for demonstration
    return func_const
end

register(model, :func_con, size_overall, func_con; autodiff = true)
@NLconstraint(model, [i = 1:size_overall], func_con(x...)[i] == 0)
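
The variant I am considering registers one scalar-valued closure per constraint row (as far as I can tell a registered function is expected to return a scalar) and builds each row from a raw expression; depending on the JuMP version that function is called add_NL_constraint or add_nonlinear_constraint. Untested sketch:

using JuMP, Ipopt, LinearAlgebra

size_overall = 100

# Vector-valued constraint function, standing in for my real f(x) = 0.
function func_con(y...)
    x = collect(y)
    return x .+ dot(x, x)
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:size_overall])

for i in 1:size_overall
    fsym = Symbol("func_con_", i)
    # One scalar-valued closure per row; each call recomputes func_con,
    # which a memoization cache could avoid.
    register(model, fsym, size_overall, (y...) -> func_con(y...)[i]; autodiff = true)
    # Build the row as a raw expression so the registered symbol can vary with i.
    add_NL_constraint(model, :($(fsym)($(x...)) == 0))
end

@NLobjective(model, Min, sum(x[i]^2 for i in 1:size_overall))
optimize!(model)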