# JuMP error for nonlinear optimization

I’m trying to set up a nonlinear optimization problem and I’m getting an error I don’t understand…

``````julia
import JuMP as J
import Ipopt

function smoothabs(x) # Smooth approximation to abs(x)
    k = 1000
    f = 2/k
    f*log(1+exp(k*x)) - x - f*log(2)
end
smoothmin2(x,y) = (x + y - smoothabs(x-y))/2 # Smooth approximation to min(x,y)

# nc and nb are integers, both about 200 to 300
# A is a Float64 matrix of ones and zeros of dimension [nb, nc]
# abw is a Float64 approximately equal to 2000.0

model = J.Model(Ipopt.Optimizer)
J.register(model, :smoothmin, 2, smoothmin2, autodiff=true)
J.@variable(model, 0 <= cbw[i=1:nc])
J.@NLconstraint(model, con1,  sum(c for c in cbw) ≤ abw)
J.@NLobjective(model, Max, smoothmin(sum(A[1,n] * cbw[n] for n in 1:nc), dmndvec[1]))
J.@NLobjective(model, Max, sum(s for s in (smoothmin2(sum(A[m,n]*cbw[n] for n in 1:nc), dmndvec[m]) for m in 1:nb)))
@time J.optimize!(model)
``````

The problem is nonlinear because I am using the function `smoothmin2` to (approximately) choose the smaller of two arguments, while remaining differentiable so I can still use `JuMP`.

This produces the following error message:

``````
exp is not defined for type GenericAffExpr. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
``````

My constraints and objective are all specified using `@NL...`, so why am I getting this error message?

Thanks in advance for any help.

Your second objective calls `smoothmin2`, not the registered function `smoothmin`.

Your expressions are all algebraic, so try writing them out without user-defined functions if possible. If the model contains user-defined functions, we disable second-derivative information, so the performance is worse.

Thanks, I guess I don’t understand the `register` function. I thought that if I register the symbol `:smoothmin` to refer to the Julia function `smoothmin2` then I could use `smoothmin` within the `NLobjective` call to refer to the `smoothmin2` function. Is that incorrect?


Your understanding of `register` is correct, but in your second objective you have

``````julia
or s in (smoothmin2(sum(A[m,n]*
``````

Ah, thanks! Guess I’ve been staring at it too long. Now, after correcting (and removing one of the constraints):

``````julia
model = J.Model(Ipopt.Optimizer)
J.register(model, :smoothmin, 2, smoothmin2, autodiff=true)
J.@variable(model, 0 <= cbw[i=1:nc])
J.@NLobjective(model, Max, sum(s for s in (smoothmin(sum(A[m,n]*cbw[n] for n in 1:nc), dmndvec[m]) for m in 1:nb)))
@time J.optimize!(model)
``````

I get this error:

``````
UndefVarError: smoothmin not defined
``````

I tried restarting Julia to no avail. I then tried using the symbol `:smoothmin2` to represent the function `smoothmin2`:

``````julia
model = J.Model(Ipopt.Optimizer)
J.register(model, :smoothmin2, 2, smoothmin2, autodiff=true)
J.@variable(model, 0 <= cbw[i=1:nc])
J.@NLobjective(model, Max, sum(s for s in (smoothmin2(sum(A[m,n]*cbw[n] for n in 1:nc), dmndvec[m]) for m in 1:nb)))
@time J.optimize!(model)
``````

and I get the error:

``````
exp is not defined for type GenericAffExpr. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.
``````

Any idea what I’m doing wrong?

I think you mean:

``````julia
sum(smoothmin(sum(A[m,n]*cbw[n] for n in 1:nc), dmndvec[m]) for m in 1:nb)
``````

The `sum(s for s in X)` form causes us to try to evaluate `s` as a constant, not as part of the decision variables, so it doesn’t go through the function registration process.

Thanks! That allowed the optimizer to begin. After reinserting the necessary constraint Ipopt terminates with this error:

``````
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
******************************************************************************

This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:      354
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:      354
variables with only lower bounds:      354
variables with lower and upper bounds:        0
variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
inequality constraints with only lower bounds:        0
inequality constraints with lower and upper bounds:        0
inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
0  1.6614353e+01 0.00e+00 1.74e+01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
Warning: Cutting back alpha due to evaluation error
1  6.2761881e+03 0.00e+00 1.62e+01   0.9 2.68e+03    -  6.94e-02 4.98e-01f  2
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
2  7.9405496e+03 0.00e+00 1.70e+01   0.6 1.38e+03    -  4.48e-01 1.22e-01f  4
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
3  8.7542582e+03 0.00e+00 1.68e+01  -5.2 1.20e+03    -  2.48e-01 6.16e-02f  5
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
4  8.8470353e+03 0.00e+00 1.73e+01   0.0 1.28e+03    -  8.83e-01 6.78e-03f  8
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
5  8.8912916e+03 0.00e+00 1.71e+01   0.3 1.31e+03    -  6.77e-01 3.28e-03f  9
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
6  8.9322177e+03 0.00e+00 1.73e+01   0.2 1.38e+03    -  8.40e-01 3.11e-03f  9
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
7  8.9699505e+03 0.00e+00 1.71e+01   0.3 1.19e+03    -  3.86e-01 3.57e-03f  9
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error

Number of Iterations....: 7

Number of objective function evaluations             = 58
Number of objective gradient evaluations             = 9
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 58
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations             = 0
Total CPU secs in IPOPT (w/o function evaluations)   =      1.386
Total CPU secs in NLP function evaluations           =      0.190

EXIT: Invalid number in NLP function or derivative detected.
14.603243 seconds (74.75 M allocations: 4.373 GiB, 5.11% gc time, 1.15% compilation time)
``````

Is this error related to the fact that second derivatives are not available? I would like to follow your suggestion and eliminate the registered function, but I don’t see how to do that. I apologize for leaning on your help so heavily.

``````julia
using JuMP
import Ipopt
nc, nb = 200, 300
A = rand(0:1, nb, nc)
abw = 2.0
dmndvec = rand(nb)
model = Model(Ipopt.Optimizer)
@variable(model, cbw[1:nc] >= 0)
@constraint(model, sum(cbw) <= 2.0)
@expression(model, Ax[m=1:nb], sum(A[m, n] * cbw[n] for n in 1:nc))
function smooth_min(model, x, y)
    k = 1_000
    f = 2 / k
    # Inline the smoothabs(x - y) from your definition; note the parentheses
    # around the whole smoothabs term so its signs are subtracted correctly.
    return @NLexpression(
        model,
        (x + y - (f * log(1 + exp(k * (x - y))) - (x - y) - f * log(2))) / 2,
    )
end
nl_expr = smooth_min.(model, Ax, dmndvec)
@NLobjective(model, Max, sum(nl_expr[m] for m in 1:nb))
optimize!(model)
``````

but…

`exp` overflows in `Float64` arithmetic for arguments greater than about 709:

``````
julia> exp(709.0)
8.218407461554972e307

julia> exp(710.0)
Inf
``````

Your model has `1000 * (x - y)`, so this approximation is only valid for `x` close to `y`. If `x > y + 1`, then Ipopt will evaluate an `Inf`.

I suggest you reconsider your modeling approximation.
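
To see the overflow concretely, you can evaluate the `smoothabs`/`smoothmin2` pair from the first post (with its `k = 1000`) on plain floats:

``````julia
# Definitions copied from the original post (k = 1000).
function smoothabs(x)
    k = 1000
    f = 2 / k
    return f * log(1 + exp(k * x)) - x - f * log(2)
end
smoothmin2(x, y) = (x + y - smoothabs(x - y)) / 2

smoothmin2(0.5, 0.0)  # ≈ 0.0007, close to min(0.5, 0.0)
smoothmin2(1.0, 0.0)  # -Inf: exp(1000) overflows to Inf inside smoothabs
``````

Once `exp(k * (x - y))` overflows, `smoothabs` returns `Inf` and the whole objective term becomes `-Inf`, which is exactly the "Invalid number" Ipopt complains about.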


Wow! I had no idea that one could return a `@NLexpression` from a function, nor that one could generate a vector of same in such a fashion. It’s beautifully concise. I will experiment with this and work on a better smooth approximation to the `min` function, then report back before marking your answer as the solution. Again, thanks for the detailed response.

The solution here is to use LogExpFunctions.jl; its `log1pexp` function computes `log(1+exp(x))` without overflow.
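
For instance, the original `smoothabs` could be rewritten with `log1pexp` (a sketch; `k = 1000` is taken from the earlier code):

``````julia
using LogExpFunctions  # log1pexp(x) == log(1 + exp(x)), overflow-safe

function smoothabs_stable(x)
    k = 1000
    f = 2 / k
    return f * log1pexp(k * x) - x - f * log(2)
end

smoothabs_stable(5.0)  # ≈ 5.0; the naive log(1 + exp(5000)) would give Inf
``````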


Thanks. This approximation also seems to work for my application:

``````julia
smooth_min(model, x, y) = @NLexpression(model, (x + y - sqrt((x-y)^2 + 1e-8))/2)
``````
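
As a quick sanity check of this approximation on plain numbers (outside any JuMP model):

``````julia
smooth_min_val(x, y) = (x + y - sqrt((x - y)^2 + 1e-8)) / 2

smooth_min_val(3.0, 1.0)   # ≈ 1.0, i.e. min(3, 1)
smooth_min_val(-2.0, 7.0)  # ≈ -2.0, i.e. min(-2, 7)
``````

Unlike the `log`/`exp` version, `sqrt((x - y)^2 + 1e-8)` never overflows, and the `1e-8` keeps the derivative defined at `x == y`.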

Note (in case you need it): you can reformulate your problem into a smooth problem (without `max` or `abs`) as follows: if the objective function contains `|x|`, replace it with `p + n`, where `p` and `n` are additional variables, and add the constraints `x = p - n` and `p, n ≥ 0`. Ta-da! You now have a smooth problem.
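
A minimal JuMP sketch of this trick (a made-up toy problem; the bound on `x` is invented just so the model is well-posed):

``````julia
using JuMP
import Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x >= -3)       # hypothetical decision variable
@variable(model, p >= 0)        # positive part of x
@variable(model, n >= 0)        # negative part of x
@constraint(model, x == p - n)  # the splitting constraint
@objective(model, Min, p + n)   # p + n equals |x| at the optimum
``````

Minimization drives one of `p`, `n` to zero, so `p + n == |x|` at the optimum; the trick relies on `|x|` being penalized rather than rewarded.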


Great trick! I’m sure there must be lots of other tricks for formulating optimizations. Are they listed somewhere?

As a final followup, the code is now working well, thanks to your extensive help. For the record, I did have to change `@expression(model, Ax[m=1:nb], sum(A[m, n] * cbw[n] for n in 1:nc))`
to `@NLexpression(model, Ax[m=1:nb], sum(A[m, n] * cbw[n] for n in 1:nc))`
to get JuMP to accept it, I believe because it is used in the call to `smooth_min`.

Edit: As pointed out in a later post in this thread, recent versions of JuMP do not require this.

I’m sure there must be lots of other tricks for formulating optimizations. Are they listed somewhere?

The Mosek Modeling Cookbook has a bunch. But optimization is more of an art than a science.

For the record, I did have to change

Update your version of JuMP. This should be allowed in any version of JuMP 0.21.8 or later.

Thanks for the lead on the tips, I’m going to enjoy reading them!

`Pkg.status()` says my version of JuMP is 0.21.10.

Do you have a reproducible example? Using `@expression` works fine for me.

``````
julia> using JuMP

julia> import Ipopt

julia> nc, nb = 200, 300;

julia> A = rand(0:1, nb, nc);

julia> abw = 2.0;

julia> dmndvec = rand(nb);

julia> model = Model(Ipopt.Optimizer);

julia> @variable(model, cbw[1:nc] >= 0);

julia> @constraint(model, sum(cbw) <= 2.0);

julia> @expression(model, Ax[m=1:nb], sum(A[m, n] * cbw[n] for n in 1:nc));

julia> smooth_min(model, x, y) = @NLexpression(model, (x + y - sqrt((x-y)^2 + 1e-8))/2);

julia> nl_expr = smooth_min.(model, Ax, dmndvec);

julia> @NLobjective(model, Max, sum(nl_expr[m] for m in 1:nb));

julia> optimize!(model)
This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:      200
Number of nonzeros in Lagrangian Hessian.............:    20100

Total number of variables............................:      200
variables with only lower bounds:      200
variables with lower and upper bounds:        0
variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
inequality constraints with only lower bounds:        0
inequality constraints with lower and upper bounds:        0
inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
0  1.4728209e+02 0.00e+00 2.81e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
[...]
66  1.4791869e+02 0.00e+00 3.06e-09  -8.6 9.28e-04    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 66

(scaled)                 (unscaled)
Objective...............:  -8.6502173739675257e+01    1.4791868520649248e+02
Dual infeasibility......:   3.0603609204815344e-09    5.2332160458452213e-09
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059036087747929e-09   -4.2850942472230598e-09
Overall NLP error.......:   3.0603609204815344e-09    5.2332160458452213e-09

Number of objective function evaluations             = 191
Number of objective gradient evaluations             = 67
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 191
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 66
Total CPU secs in IPOPT (w/o function evaluations)   =      0.479
Total CPU secs in NLP function evaluations           =      9.020

EXIT: Optimal Solution Found.

(ipopt) pkg> st
Status `/private/tmp/ipopt/Project.toml`
[b6b21f68] Ipopt v0.7.0
[4076af6c] JuMP v0.21.10
``````

It works for me now as well. I think that while I was experimenting with installing/uninstalling different solvers, JuMP must have been updated and I didn’t notice it. I’ve edited my previous post to show that the change to `@NLexpression` is not needed.
