Optimizing an array of variables using JuMP (NLP)

#1

Hi all,

I’m trying to formulate a nonlinear optimization problem with JuMP. Specifically, I want to minimize a sum of products involving three matrices, where one matrix holds the decision variables and the other two are constants. My question is: how should I handle an optimization problem where the variables form a 2D or 3D array of floats?

The objective function can be expressed as:

min \sum_{i=1}^n \alpha_i \prod_{s=1}^S \exp(-w_{i,s} x_{i,s})

I’ve run into a number of problems trying to formulate this properly, since JuMP restricts how arrays can be passed to user-defined functions, but here is my latest attempt:

using JuMP, Ipopt

function main(row, col)
    w = rand(row, col)
    α = rand(row)

    m = Model(solver = IpoptSolver())

    @variable(m, x[1:row*col] >= 0)

    function myf(a...)
        running_sum = 0
        for i in 1:length(α)
            sub_product = 1
            for s in 1:col
                sub_product *= exp(-w[i, s] * a[i + (s-1)*row])
            end
            running_sum += α[i] * sub_product
        end
        return running_sum
    end

    JuMP.register(m, :myf, row*col, myf, autodiff=true)
    obj_expr = Expr(:call, :myf, [x[i] for i=1:row*col]...)
    @NLobjective(m, :Min, obj_expr)
    solve(m)
    return m
end

println(main(4,2))


Which gives the error:

ERROR: LoadError: MethodError: no method matching parseNLExpr_runtime(::JuMP.Model, ::Expr, ::Array{ReverseDiffSparse.NodeData,1}, ::Int64, ::Array{Float64,1})
Closest candidates are:
parseNLExpr_runtime(::JuMP.Model, !Matched::Number, ::Any, ::Any, ::Any) at /Users/u/.julia/v0.6/JuMP/src/parsenlp.jl:196
parseNLExpr_runtime(::JuMP.Model, !Matched::JuMP.Variable, ::Any, ::Any, ::Any) at /Users/u/.julia/v0.6/JuMP/src/parsenlp.jl:202
parseNLExpr_runtime(::JuMP.Model, !Matched::JuMP.NonlinearExpression, ::Any, ::Any, ::Any) at /Users/u/.julia/v0.6/JuMP/src/parsenlp.jl:208
...
Stacktrace:
[1] macro expansion at /Users/u/.julia/v0.6/JuMP/src/parsenlp.jl:226 [inlined]
[2] macro expansion at /Users/u/.julia/v0.6/JuMP/src/macros.jl:1157 [inlined]
[3] main(::Int64, ::Int64) at /Users/u/Documents/test2.jl:26
[6] process_options(::Base.JLOptions) at ./client.jl:305
[7] _start() at ./client.jl:371


I believe my error is in how I define and pass the x variables, but I’m not quite sure why. I initially tried to represent x as x[1:row, 1:col], but since the splatting (...) syntax flattens a multi-dimensional array into a single dimension, I defined it here as 1D. I have the freedom to reformat the inputs like this, but in practice the dimensions will be much larger than 4x2, probably on the order of 100x100.
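For reference, the flattening used above follows Julia’s column-major layout, which is why the code indexes with i + (s-1)*row. A small plain-Julia sketch of the mapping (the helper names here are my own, not part of JuMP):

```julia
# Column-major flattening: element (i, s) of a row×col matrix
# lands at position i + (s - 1) * row in the flattened vector.
flat_index(i, s, row) = i + (s - 1) * row

# Inverse mapping: recover (i, s) from the flat index k.
function matrix_index(k, row)
    s, i = divrem(k - 1, row)
    return (i + 1, s + 1)
end

row, col = 4, 2
A = reshape(collect(1:row*col), row, col)  # filled in column-major order
v = vec(A)                                 # flatten back to 1D

@assert all(v[flat_index(i, s, row)] == A[i, s] for i in 1:row, s in 1:col)
@assert matrix_index(flat_index(3, 2, row), row) == (3, 2)
```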

I’m new to JuMP and Julia, so I’d appreciate any suggestions on how to better format this problem, especially since I plan to incorporate additional constants, meaning that x will soon be a 3-dimensional array (it is currently 2-dimensional).

#2

The underlying issue you’re hitting is that JuMP does not yet support array input to user-defined functions in nonlinear problems. Your problem requires an exponential, which makes myf a nonlinear expression, so splatting your variable array into it won’t work.

Taking the sum outside of the function is what you’ll need to do.
I don’t think my code below solves your problem as-is, since I dropped the correct indexing into your a matrix, but hopefully it guides you toward a correct solution:

using JuMP, Ipopt

function main(row, col)
    w = rand(row, col)
    α = rand(row)

    m = Model(solver = IpoptSolver())

    @variable(m, x[1:row*col] >= 0)

    function myf(a)  # only a single scalar value as input
        running_sum = 0
        for i in 1:length(α)
            sub_product = 1
            for s in 1:col
                sub_product *= exp(-w[i, s] * a)  # check this simplification
            end
            running_sum += α[i] * sub_product
        end
        return running_sum
    end

    JuMP.register(m, :myf, 1, myf, autodiff=true)  # one scalar argument
    @NLobjective(m, Min, sum(myf(x[i]) for i = 1:row*col))
    # More explicitly, this sets the objective via a constraint like:
    # @variable(m, obj)
    # @NLconstraint(m, obj == sum(myf(x[i]) for i = 1:row*col))
    # @NLobjective(m, Min, obj)
    solve(m)
    return m
end

println(main(4,2))


#3

In addition to the syntax limitations, there are documented performance limitations when using autodiff=true:

Forward-mode automatic differentiation as implemented by ForwardDiff.jl has a computational cost that scales linearly with the number of input dimensions. As such, it is not the most efficient way to compute gradients of user-defined functions if the number of input arguments is large.

Given that the function you want to optimize is already written in a nice closed algebraic form, I would highly recommend using JuMP’s built-in syntax rather than user-defined functions, e.g.,

@NLobjective(model, Min, sum(α[i] * prod(exp(-w[i,s] * x[i,s]) for s in 1:S) for i in 1:n))
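As a sanity check that this one-liner expresses the same objective as the loop-based myf from the original post, the two forms can be compared on plain numbers (no JuMP involved; all names below are local to the snippet):

```julia
n, S = 4, 2
w = rand(n, S)
α = rand(n)
x = rand(n, S)  # stand-in values for the decision variables

# Loop form, mirroring the structure of the original myf.
function obj_loops(x)
    running_sum = 0.0
    for i in 1:n
        sub_product = 1.0
        for s in 1:S
            sub_product *= exp(-w[i, s] * x[i, s])
        end
        running_sum += α[i] * sub_product
    end
    return running_sum
end

# Generator form, matching the @NLobjective expression above.
obj_gen(x) = sum(α[i] * prod(exp(-w[i, s] * x[i, s]) for s in 1:S) for i in 1:n)

@assert obj_loops(x) ≈ obj_gen(x)
```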


#4

Thanks so much for the responses. Miles, this is exactly what I was looking for. I had tried something like this previously, but I had been getting an error like:

ERROR: unknown function "exp" in nonlinear expression


…which is what drove me to try the user-defined function approach, but this solution works perfectly.