# Error in JuMP with nonlinear objective

Hello everyone,

I am interested in the following optimization problem that I would like to solve with JuMP:

$$
\begin{split}
\min_{x \in \mathbb{R}^n} \quad & (z^\top x)^2 - \log(b^\top x[1:k])\\
\mbox{subject to} \quad & x[1:k] \geq 0
\end{split}
$$

with $k < n$, $b \in \mathbb{R}^k$, and $z \in \mathbb{R}^n$, where $x[1:k]$ denotes the first $k$ components of the vector $x$.
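For concreteness, the objective can be evaluated directly; here is a small sketch with n = 4 and k = 2 (the helper `f` and the numbers are my own, not from the problem):

```julia
using LinearAlgebra

# f(x) = (z'x)^2 - log(b' * x[1:k]); only x[1:k] must be nonnegative
f(x, z, b, k) = dot(z, x)^2 - log(dot(b, x[1:k]))

z = [0.1, -0.2, 0.3, 0.4]
b = [1.0, 2.0]
x = [0.5, 0.25, -1.0, 2.0]   # components beyond k may be negative
f(x, z, b, 2)                # 0.25
```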

Here is a toy problem on which I am stuck with an error:

```julia
using JuMP
using Ipopt

model = Model(Ipopt.Optimizer)

z = randn(100);
b = randn(20);

@variable(model, x[1:100])
for i in 1:20
    @constraint(model, x[i] >= 0.0)
end
@NLparameter(model, z[i = 1:100] == randn())
@NLparameter(model, b[i = 1:20] == randn())
@NLobjective(model, Min, sum(z[i] * x[i] for i = 1:100)^2 - log(sum(b[i] * x[i] for i = 1:20)))

@time optimize!(model)

# @show value(x);
# @show value(y);
# @show objective_value(model);
```


The output of this code is:

```text
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:       20
Number of nonzeros in Lagrangian Hessian.............:     5050

Number of Iterations....: 0

Number of objective function evaluations             = 0
Number of objective gradient evaluations             = 1
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 1
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 0
Total CPU secs in IPOPT (w/o function evaluations)   =      0.002
Total CPU secs in NLP function evaluations           =      0.000

EXIT: Invalid number in NLP function or derivative detected.
0.009641 seconds (71.42 k allocations: 6.870 MiB)
```


I initially tried to write the cost function as `@NLobjective(model, Min, (value.(z)'*x)^2 - log(value.(b)'*x[1:20]))`, but that didn't work either.

First, note that your initial definitions are not used; they are replaced by the parameters you define afterwards:

```julia
z = randn(100);
b = randn(20);
```


For the purposes of optimization, the `log` function needs a strictly positive argument. The parameter `@NLparameter(model, b[i = 1:20] == randn())` is initialized with negative entries roughly half the time, and because `x[i] >= 0.0`, the argument of the `log` can easily end up nonpositive.
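A toy illustration of the failure (the numbers are mine): with negative entries in `b` and nonnegative `x`, the inner product can be nonpositive, which is outside the domain of `log`:

```julia
# Entries of b drawn from randn are negative about half the time
b = [-0.5, 0.3]
x = [1.0, 0.1]                        # feasible: x .>= 0
arg = sum(b[i] * x[i] for i in 1:2)   # -0.47
# log(arg) throws a DomainError in Julia; inside Ipopt this surfaces as
# "EXIT: Invalid number in NLP function or derivative detected."
```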

You need something like:

```julia
@NLparameter(model, b[i = 1:20] == max(0.00001, randn()))
```

That alone won't fix the issue, because `x` is initialized with zeros by default, so you also need:

```julia
@variable(model, x[1:100], start = 0.00001)
```

The final code is:

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)

@variable(model, x[1:100], start = 0.00001)
for i in 1:20
    @constraint(model, x[i] >= 0.0)
end
@NLparameter(model, z[i = 1:100] == randn())
@NLparameter(model, b[i = 1:20] == max(randn(), 0.00001))
@NLobjective(model, Min, sum(z[i] * x[i] for i = 1:100)^2 - log(sum(b[i] * x[i] for i = 1:20)))

@time optimize!(model)
```


I don't know exactly how Ipopt works, so in the future you might need to enforce positivity with:

```julia
@variable(model, x[1:100] >= 1e-8, start = 0.00001)
```

or (since `b` is an `@NLparameter`, the constraint has to be nonlinear):

```julia
@NLconstraint(model, sum(b[i] * x[i] for i = 1:20) >= 1e-8)
```


Thank you so much for your help! One last thing: if you can, please change the category of this post to "Optimization (Mathematical)"; it is currently in "Usage". Many JuMP users and developers monitor "Optimization (Mathematical)", so questions there tend to be answered faster.

Apparently I can move it, so I did.

Hello,

Is there a way to speed up the optimization? For `x` of size 100, the average optimization time is 40.625 ms (measured with `@btime`), but for `x` of size 1000 it is 14.972 s!
Since my cost function is convex, are there specific solvers or techniques that I can use?

I have tried Convex.jl; the solution found is only close to optimal, but the runtime shrinks dramatically, to 6.915 ms:

```julia
using Convex
using ECOS, SCS

b = zeros(20)
for i = 1:20
    b[i] = max(randn(), 0.00001)
end
z = randn(1000);

x = Variable(1000)
problem = minimize(square(dot(z, x)) - log(dot(b, x[1:20])), x[1:20] >= 0.0)
solve!(problem, ECOSSolver())
problem.optval
```


The output is:

```text
ECOS 2.0.5 - (C) embotech GmbH, Zurich Switzerland, 2012-15. Web: www.embotech.com/ECOS

It     pcost       dcost      gap   pres   dres    k/t    mu     step   sigma     IR    |   BT
 0  +0.000e+00  -1.487e+00  +2e+01  8e-01  1e+00  1e+00  1e+00    ---    ---    0  0  - |  -  -
 1  -1.033e+00  -1.835e+00  +1e+01  4e-01  9e-01  9e-01  5e-01  0.6266  2e-01   1  1  1 |  2  2
 2  -2.238e+00  -2.280e+00  +3e+00  1e-01  5e-01  6e-01  1e-01  0.7833  3e-02   1  1  1 |  1  1
 3  -2.783e+00  -2.623e+00  +8e-01  5e-02  2e-01  4e-01  3e-02  0.7289  3e-03   1  1  1 |  0  1
 4  -3.307e+00  -3.097e+00  +3e-01  3e-02  2e-01  4e-01  1e-02  0.6266  2e-02   1  1  1 |  1  2
 5  -4.170e+00  -3.831e+00  +7e-02  1e-02  1e-01  4e-01  3e-03  0.7759  2e-03   2  1  1 |  0  1
 6  -4.691e+00  -4.194e+00  +4e-02  1e-02  1e-01  6e-01  1e-03  0.6266  3e-01   2  1  1 |  4  2
 7  -4.513e+00  -4.128e+00  +3e-02  1e-02  1e-01  5e-01  1e-03  0.9791  9e-01   2  1  1 | 16  0
 8  -5.276e+00  -4.721e+00  +1e-02  1e-02  1e-01  6e-01  6e-04  0.6266  1e-01   1  1  1 |  3  2
 9  -5.638e+00  -5.131e+00  +6e-03  6e-03  7e-02  5e-01  2e-04  0.9791  4e-01   1  1  1 |  6  0
10  -6.184e+00  -5.679e+00  +2e-03  4e-03  4e-02  5e-01  9e-05  0.7833  2e-01   2  1  1 |  4  1
11  -6.447e+00  -6.050e+00  +7e-04  2e-03  2e-02  4e-01  3e-05  0.9791  3e-01   2  1  1 |  5  0
12  -7.038e+00  -6.608e+00  +3e-04  1e-03  2e-02  4e-01  1e-05  0.6266  5e-02   1  1  1 |  2  2
13  -7.802e+00  -7.327e+00  +8e-05  7e-04  9e-03  5e-01  3e-06  0.7833  6e-02   2  1  1 |  2  1
14  -8.040e+00  -7.625e+00  +3e-05  4e-04  5e-03  4e-01  1e-06  0.9791  3e-01   1  1  1 |  5  0
15  -8.615e+00  -8.161e+00  +1e-05  3e-04  4e-03  5e-01  5e-07  0.6266  1e-01   1  1  1 |  3  2
16  -9.276e+00  -8.798e+00  +4e-06  2e-04  2e-03  5e-01  2e-07  0.7833  1e-01   1  1  1 |  3  1
17  -9.523e+00  -9.125e+00  +1e-06  9e-05  1e-03  4e-01  5e-08  0.9791  3e-01   1  1  1 |  5  0
18  -1.010e+01  -9.672e+00  +6e-07  6e-05  8e-04  4e-01  2e-08  0.6266  1e-01   1  1  1 |  3  2
19  -1.063e+01  -1.017e+01  +2e-07  4e-05  5e-04  5e-01  9e-09  0.6266  5e-02   1  1  1 |  2  2
20  -1.122e+01  -1.076e+01  +7e-08  2e-05  3e-04  5e-01  3e-09  0.7833  1e-01   2  1  1 |  3  1
21  -1.180e+01  -1.132e+01  +2e-08  1e-05  2e-04  5e-01  9e-10  0.7833  1e-01   1  1  1 |  3  1
22  -1.222e+01  -1.181e+01  +7e-09  7e-06  8e-05  4e-01  3e-10  0.9791  3e-01   1  1  1 |  5  0
23  -1.289e+01  -1.239e+01  +3e-09  5e-06  5e-05  5e-01  1e-10  0.6266  5e-02   2  1  1 |  2  2
24  -1.307e+01  -1.268e+01  +1e-09  3e-06  3e-05  4e-01  4e-11  0.9791  4e-01   2  1  1 |  6  0
25  -1.382e+01  -1.338e+01  +4e-10  2e-06  2e-05  4e-01  2e-11  0.6266  5e-02   1  1  1 |  2  2
26  -1.452e+01  -1.396e+01  +1e-10  1e-06  1e-05  6e-01  4e-12  0.7833  5e-02   1  1  1 |  2  1
27  -1.422e+01  -1.390e+01  +8e-11  1e-06  9e-06  3e-01  3e-12  0.9791  8e-01   1  1  1 | 13  0
28  -1.494e+01  -1.458e+01  +3e-11  7e-07  6e-06  4e-01  1e-12  0.6266  5e-02   1  1  1 |  2  2
29  -1.573e+01  -1.523e+01  +7e-12  6e-07  4e-06  5e-01  3e-13  0.7833  9e-03   1  1  1 |  1  1
30  -1.627e+01  -1.579e+01  +3e-12  5e-07  3e-06  5e-01  1e-13  0.9791  4e-01   1  1  1 |  6  0
31  -1.679e+01  -1.627e+01  +1e-12  5e-07  3e-06  5e-01  4e-14  0.7833  2e-01   1  0  1 |  4  1
32  -1.682e+01  -1.634e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.2567  4e-01   0  0  0 |  3  6
33  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0673  2e-02   1  0  0 |  0 12
34  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0072  3e-02   0  0  0 |  0 22
35  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0006  3e-02   1  0  0 |  0 33
36  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0001  3e-02   1  0  0 |  0 40
37  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0000  9e-02   1  0  0 |  1 51
38  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0000  9e-02   1  0  0 |  1 58
39  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0000  3e-02   0  0  0 |  0 70
40  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0000  3e-02   1  0  0 |  0 77
41  -1.684e+01  -1.637e+01  +1e-12  4e-07  3e-06  5e-01  5e-14  0.0000  3e-02   1  0  0 |  0 84
Combined backtracking failed 90 0 0 0 sigma 0.0915438
Combined line search failed, recovering best iterate (27) and stopping.

Close to OPTIMAL (within feastol=8.7e-06, reltol=5.7e-12, abstol=8.1e-11).
Runtime: 0.006915 seconds.

┌ Warning: Problem status Suboptimal; solution may be inaccurate.
└ @ Convex /home/mat/.julia/packages/Convex/NBqGs/src/solution.jl:51

-14.223704019591132
```


Instead of

```julia
@variable(model, x[1:100], start = 0.00001)
for i in 1:20
    @constraint(model, x[i] >= 0.0)
end
```

use

```julia
@variable(model, x[1:100], start = 0.00001)
for i = 1:20
    set_lower_bound(x[i], 0.0)
end
```
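A minimal sketch of the difference (using JuMP's standard API; the toy model is mine): `set_lower_bound` attaches the bound to the variable itself rather than adding an inequality row, which interior-point solvers like Ipopt handle directly:

```julia
using JuMP

model = Model()
@variable(model, x[1:3], start = 0.00001)
set_lower_bound(x[1], 0.0)

has_lower_bound(x[1])                                       # true
# No affine constraint rows were added to the model:
num_constraints(model, AffExpr, MOI.GreaterThan{Float64})   # 0
```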
