with $k < n$, $b \in \mathbb{R}^k$ and $z \in \mathbb{R}^n$; $x[1:k]$ denotes the first $k$ components of the vector $x$.
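Written out in full (this matches the code below, with $n = 100$ and $k = 20$), the problem is:

$$\min_{x \in \mathbb{R}^n} \; \left(z^\top x\right)^2 - \log\left(b^\top x[1:k]\right) \quad \text{subject to} \quad x[1:k] \ge 0.$$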
Here is a toy problem on which I am stuck with an error:
using JuMP
using Ipopt
model = Model(Ipopt.Optimizer)
z = randn(100);
b = randn(20);
@variable(model, x[1:100])
for i in 1:20
@constraint(model, x[i] >= 0.0)
end
@NLparameter(model, z[i = 1:100] == randn())
@NLparameter(model, b[i = 1:20] == randn())
@NLobjective(model, Min, sum(z[i] * x[i] for i=1:100)^2-log(sum(b[i]*x[i] for i=1:20)) )
@time optimize!(model)
# @show value(x);
# @show value(y);
# @show objective_value(model);
The output of this code:
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 20
Number of nonzeros in Lagrangian Hessian.............: 5050
Number of Iterations....: 0
Number of objective function evaluations = 0
Number of objective gradient evaluations = 1
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 1
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations = 0
Total CPU secs in IPOPT (w/o function evaluations) = 0.002
Total CPU secs in NLP function evaluations = 0.000
EXIT: Invalid number in NLP function or derivative detected.
0.009641 seconds (71.42 k allocations: 6.870 MiB)
I initially tried to write the cost function as @NLobjective(model, Min, (value.(z)'*x)^2 - log(value.(b)'*x[1:20])), but that didn’t work.
First, note that your initial definitions are not used; they are replaced by the parameters you define later:
z = randn(100);
b = randn(20);
For purposes of optimization, the log function needs a strictly positive argument.
In some cases your parameter: @NLparameter(model, b[i = 1:20] == randn())
is initialized with negative numbers, and because x[i] >= 0.0 there may be no choice of x that gives a strictly positive argument for the log.
You need something like: @NLparameter(model, b[i = 1:20] == max(0.00001, randn()))
That alone won’t solve the issue, because x is initialized with zeros by default, so you also need: @variable(model, x[1:100], start = 0.00001)
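To see why, you can evaluate the log term at the default start by hand (a quick sketch; bvals is just a stand-in name for the parameter values):
bvals = max.(randn(20), 0.00001)          # strictly positive values, as suggested above
x0 = zeros(100)                           # JuMP's default starting point for x
log(sum(bvals[i] * x0[i] for i in 1:20))  # returns -Inf, which Ipopt reports as
                                          # "Invalid number in NLP function or derivative detected"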
The final code is:
model = Model(Ipopt.Optimizer)
@variable(model, x[1:100], start = 0.00001)                # nonzero start keeps the log finite
for i in 1:20
    @constraint(model, x[i] >= 0.0)
end
@NLparameter(model, z[i = 1:100] == randn())
@NLparameter(model, b[i = 1:20] == max(randn(), 0.00001))  # strictly positive coefficients
@NLobjective(model, Min, sum(z[i] * x[i] for i = 1:100)^2 - log(sum(b[i] * x[i] for i = 1:20)))
@time optimize!(model)
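After the solve, you can confirm the fix with standard JuMP queries (a quick check, not part of the original code):
@show termination_status(model)  # Ipopt should report LOCALLY_OPTIMAL
@show objective_value(model)
@show value.(x)[1:5]             # first few components of the solution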
I don’t know exactly how Ipopt works internally, so in the future you might need to enforce positivity with: @variable(model, x[1:100] >= 1e-8, start = 0.00001)
or: @NLconstraint(model, sum(b[i] * x[i] for i = 1:20) >= 1e-8) (note the @NLconstraint: parameters created with @NLparameter can only appear in nonlinear expressions).
One last thing: if you can, change the category of this post to “Optimization (Mathematical)”; it is currently in “Usage”. Many JuMP users and developers monitor “Optimization (Mathematical)”, so your questions may be answered faster there next time too.
Is there a way to speed up the optimization? For x of size 100, the average optimization time is 40.625 ms (measured with @btime), but for a variable x of size 1000 the average optimization time is 14.972 s!
Since my cost function is convex, maybe there are specific solvers or techniques that I could use?
I have tried Convex.jl, but the solution found is only close to optimal. However, the runtime shrinks dramatically, to 6.915 ms:
using Convex
using ECOS
using LinearAlgebra                  # for dot

b = max.(randn(20), 0.00001)         # strictly positive coefficients, as before
z = randn(1000)
x = Variable(1000)
problem = minimize(square(dot(z, x)) - log(dot(b, x[1:20])), x[1:20] >= 0.0)
solve!(problem, ECOSSolver())
problem.optval
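Before comparing optima, it is also worth checking the solver status (a small sketch; :Optimal is the status symbol this older Convex.jl API returns, so treat the exact value as an assumption):
@show problem.status     # :Optimal means ECOS converged
@show evaluate(x)[1:5]   # recovered values for the first few components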