Optimization using JuMP

I’m trying to code an optimization problem in Julia using JuMP. The objective function involves two matrix multiplications.
First, multiply the vector w of size (10) by the matrix arr_C of size (20, 10). For this, w must be transposed to size (1, 10) to perform the matrix multiplication.
Second, multiply the vector w_each_sim of size (20) by the result of the first multiplication, which is also a vector of size (20). The multiplication should therefore be (1x20) x (20x1), yielding a scalar. Please read to the end of the question, because I updated my attempts in several ways.
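In plain Julia (outside of JuMP), the objective I’m describing looks like this; the sample data is made up only to illustrate the shapes:

```julia
using LinearAlgebra

# Illustrative data only, matching the sizes in the question.
arr_C = rand(20, 10)                      # matrix of size (20, 10)
w = fill(0.1, 10)                         # vector of size (10), sums to 1
w_each_sim = rand(20)
w_each_sim ./= sum(w_each_sim)            # vector of size (20), sums to 1

first_mul = arr_C * w                     # same numbers as (w' * arr_C')', a vector of size (20)
obj = dot(w_each_sim, log10.(first_mul))  # (1x20) x (20x1) -> scalar
```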
My first try is as follows:

julia> using JuMP, Ipopt

julia> a = rand(20, 10);

julia> b = rand(20); b = b./sum(b)

julia> function port_opt(
           n_assets::Int8,
           arr_C::Matrix{Float64},
           w_each_sim::Vector{Float64})
           """
           Calculate weight of each asset through optimization

           Parameters
           ----------
           n_assets::Int8 - number of assets
           arr_C::Matrix{Float64} - array of C
           w_each_sim::Vector{Float64} - weights of each similar TW

           Returns
           -------
           w_opt::Vector{Float64} - weights of each asset
           """

           model = Model(Ipopt.Optimizer)
           @variable(model, 0 <= w[1:n_assets] <= 1)
           @NLconstraint(model, sum([w[i] for i in 1:n_assets]) == 1)
           @NLobjective(model, Max, w_each_sim * log10.([w[i] * arr_C[i] for i in 1:n_assets]))
           optimize!(model)

           @show value.(w)
           return value.(w)
       end

julia> port_opt(Int8(10), a, b)
ERROR: UndefVarError: i not defined
Stacktrace:
[1] macro expansion
@ C:\Users\JL\.julia\packages\JuMP\Z1pVn\src\macros.jl:1834 [inlined]
[2] port_opt(n_assets::Int8, arr_C::Matrix{Float64}, w_each_sim::Vector{Float64})
@ Main e:\MyWork\paperbase.jl:237
[3] top-level scope
@ REPL[4]:1


The problem is with the @NLconstraint line.
How can I make this code work and get the optimization done?

I rectified the objective function as follows:

function Obj(w, arr_C, w_each_sim)
    first_expr = w' * arr_C'
    second_expr = map(first_expr) do x
        log10(x)
    end

    return w_each_sim * second_expr
end

function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64})

    ....
    ....
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# Threw this:
ERROR: Unexpected array VariableRef[w[1], w[2], w[3], w[4], w[5], w[6], w[7], w[8], w[9], w[10]] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.


Then I tried this:

function Obj(w, arr_C::Matrix{T}, w_each_sim::Vector{T}) where {T<:Real}
    first_expr = zeros(T, length(w_each_sim))
    for i ∈ eachindex(w_each_sim), j ∈ eachindex(w)
        first_expr[i] += w[j] * arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = zero(T)
    for idx ∈ eachindex(w_each_sim)
        res += w_each_sim[idx] * second_expr[idx]
    end

    return res
end

function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64})

    model = Model()
    @variable(model, 0 <= w[1:n_assets] <= 1)
    @NLconstraint(model, +(w...) == 1)
    register(model, :Obj, Int64(n_assets), Obj, autodiff=true)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# threw this
ERROR: Unable to register the function :Obj because it does not support differentiation via ForwardDiff.

Common reasons for this include:

* the function assumes Float64 will be passed as input, it must work for any
generic Real type.
* the function allocates temporary storage using zeros(3) or similar. This
defaults to Float64, so use zeros(T, 3) instead.
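The second bullet is the key one here. A minimal illustration of the difference (plain Julia, nothing JuMP-specific; the function names are made up):

```julia
# Allocates Float64 storage regardless of the input type, so assigning
# a non-Float64-convertible value (e.g. a ForwardDiff dual) throws.
function f_bad(x::Real...)
    acc = zeros(length(x))          # eltype is always Float64
    for i in eachindex(x)
        acc[i] = x[i]
    end
    return sum(acc)
end

# Allocates storage whose eltype follows the inputs, as the error message asks.
function f_good(x::T...) where {T<:Real}
    acc = zeros(T, length(x))       # eltype is T
    for i in eachindex(x)
        acc[i] = x[i]
    end
    return sum(acc)
end
```

For example, f_bad(1//2, 1//3) silently converts to Float64, while f_good(1//2, 1//3) stays a Rational; with ForwardDiff’s dual numbers the conversion in f_bad fails outright.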


And finally, I tried this:

function Obj(the_x...)
    w, arr_C, w_each_sim = the_x
    first_expr = zeros(length(w_each_sim))
    for i ∈ eachindex(w_each_sim), j ∈ eachindex(w)
        first_expr[i] += w[j] * arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = 0
    for idx ∈ eachindex(w_each_sim)
        res += w_each_sim[idx] * second_expr[idx]
    end

    return res
end

function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64})

    model = Model()
    @variable(model, 0 <= w[1:n_assets] <= 1)
    register(model, :Obj, Int64(n_assets), Obj, autodiff=true)
    the_x = [w, arr_C, w_each_sim]
    @NLobjective(model, Max, Obj(the_x...))
    optimize!(model)

    @show value.(w)
    return value.(w)
end
a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# Threw this
ERROR: Unexpected array VariableRef[w[1], w[2], w[3], w[4], w[5], w[6], w[7], w[8], w[9], w[10]] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.
Stacktrace:


You need

@NLobjective(model, Max, sum(w_each_sim[i] * log10(w[i]*arr_C[i]) for i in 1:n_assets))


There are a number of limitations in the nonlinear interface to JuMP: Nonlinear Modeling · JuMP

But I’ll see if we can improve the error message.

Unfortunately, this isn’t true, because neither w[i]*arr_C[i] nor w_each_sim[i] * ... is a matrix multiplication!
w is a vector of length 10, and arr_C is a matrix of shape (20, 10). So (w[i]*arr_C[i] for i in 1:n_assets) isn’t a matrix multiplication! If you would like to know the objective function, this function is all you need:

function Obj(w, arr_C::Matrix{T}, w_each_sim::Vector{T}) where {T<:Real}
    first_expr = zeros(T, length(w_each_sim))
    for i ∈ eachindex(w_each_sim), j ∈ eachindex(w)
        first_expr[i] += w[j] * arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = zero(T)
    for idx ∈ eachindex(w_each_sim)
        res += w_each_sim[idx] * second_expr[idx]
    end

    return res
end


w is the vector of weights, and arr_C is the matrix of relative prices, i.e. \frac{Price_{day \:i}}{Price_{day\:i-1}}. w_each_sim stands for the weight of each similar Time Window; it is a vector of length 20.
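For context, one column of such relative prices can be built from a raw price series like this (toy numbers, just to show the ratio):

```julia
# Toy price series for one asset over 5 days.
prices = [100.0, 102.0, 101.0, 103.0, 103.0]

# Relative prices: Price_day_i / Price_day_(i-1), one entry per day after the first.
rel_prices = prices[2:end] ./ prices[1:end-1]
```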

Thank you for mentioning this. I tried another way using the @expression macro like this:

using JuMP, Ipopt

function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64})

    model = Model(Ipopt.Optimizer)
    @variable(model, 0 <= w[1:n_assets] <= 1)
    @constraint(model, sum(w[i] for i in 1:n_assets) == 1)

    @expression(model, cw, w' * arr_C')
    @expression(model, log_cw, [log(item) for item in cw])
    @expression(model, obj_expr, w_each_sim' * log_cw)

    @NLobjective(model, Max, obj_expr)
    optimize!(model)

    @show value.(w)
    return value.(w)
end


It threw:

ERROR: log is not defined for type GenericAffExpr. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.


This error is confusing, since I did use @NLobjective when defining the objective. Also, I don’t know how I should apply the logarithm here.
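One route that does work with JuMP’s legacy nonlinear interface is to build the inner scalar terms with @NLexpression instead of @expression (a sketch with made-up data of the same sizes; the model is only built here, not solved):

```julia
using JuMP

# Made-up data matching the question's dimensions.
arr_C = rand(20, 10)
w_each_sim = rand(20); w_each_sim ./= sum(w_each_sim)
n_assets = 10

model = Model()
@variable(model, 0 <= w[1:n_assets] <= 1)
@constraint(model, sum(w[i] for i in 1:n_assets) == 1)

# @NLexpression accepts nonlinear functions like log; @expression does not.
@NLexpression(model, log_cw[i = 1:20],
    log(sum(w[j] * arr_C[i, j] for j in 1:n_assets)))
@NLobjective(model, Max, sum(w_each_sim[i] * log_cw[i] for i in 1:20))
```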

Unfortunately, this isn’t true

That’ll teach me for replying without reading the question properly or testing the example. I assumed too much.

using JuMP, Ipopt
a = rand(20, 10)
b = rand(20); b = b./sum(b)

"""
Calculate weight of each asset through optimization

Parameters
----------
n_assets::Int8 - number of assets
arr_C::Matrix{Float64} - array of C
w_each_sim::Vector{Float64} - weights of each similar TW

Returns
-------
w_opt::Vector{Float64} - weights of each asset
"""
function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64},
)
    model = Model(Ipopt.Optimizer)
    @variable(model, 0 <= w[1:n_assets] <= 1)
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(
        model,
        Max,
        sum(
            w_each_sim[i] * log10(sum(w[j] * arr_C[i, j] for j in 1:size(arr_C, 2)))
            for i in 1:length(w_each_sim)
        )
    )
    optimize!(model)
    return value.(w)
end

port_opt(Int8(10), a, b)


Your response is heartwarming to me, no matter the result. So, thank you!


It’s thrown by @expression(model, log_cw, [log(item) for item in cw]). We’re experimenting with something that would fix this: https://github.com/jump-dev/JuMP.jl/pull/3106


This seems to be exactly the function I referred to:

I’ll look into it carefully and get back to you with the result, but so far it seems correct to me. Thanks. Still, I didn’t learn what the problem was and why I should define the objective expression like that for it to work. I mean, what is the problem with this syntax:

function Obj(w, arr_C::Matrix{T}, w_each_sim::Vector{T}) where {T<:Real}
    first_expr = zeros(T, length(w_each_sim))
    for i ∈ eachindex(w_each_sim), j ∈ eachindex(w)
        first_expr[i] += w[j] * arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = zero(T)
    for idx ∈ eachindex(w_each_sim)
        res += w_each_sim[idx] * second_expr[idx]
    end

    return res
end


This function is inherently the same as sum(w_each_sim[i] * log10(sum(w[j] * arr_C[i, j] for j in 1:size(arr_C, 2))) for i in 1:length(w_each_sim)) (up to the base of the logarithm, which doesn’t change the maximizer). But I didn’t understand why mine didn’t work. I’m asking because I want to learn how to define optimization problems in JuMP and what the point is here.
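A quick way to convince yourself the two formulations agree numerically (plain Float64 inputs, no JuMP; using log10 in both so the comparison is exact):

```julia
# Made-up data matching the question's dimensions.
arr_C = rand(20, 10)
w = rand(10); w ./= sum(w)
w_each_sim = rand(20); w_each_sim ./= sum(w_each_sim)

# Loop form, as in Obj (with log10 instead of log):
first_expr = zeros(20)
for i in 1:20, j in 1:10
    first_expr[i] += w[j] * arr_C[i, j]
end
loop_obj = sum(w_each_sim[i] * log10(first_expr[i]) for i in 1:20)

# Generator form, as in the accepted answer:
gen_obj = sum(
    w_each_sim[i] * log10(sum(w[j] * arr_C[i, j] for j in 1:10)) for i in 1:20
)
```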

But I didn’t understand why mine didn’t work.

The nonlinear interface to JuMP has a number of syntax limitations. You must write out all expressions, and you cannot use things like map, broadcasting, or linear algebra. See Nonlinear Modeling · JuMP

You can use user-defined functions, but they have a limited scope, and must comply with the syntax: Nonlinear Modeling · JuMP


Thank you so much🧡
I’ll read the official doc of your great package to fill my understanding gaps.


I’ll read the official doc of your great package to fill my understanding gaps.

Let me know if any bits are not clear after reading the docs. We try hard to make them useful, but we’re always open to suggestions for improvements.

Your attitude toward users is the cherry on top of your efforts. Sure I will! Thank you for helping and putting in the time and energy.
Sorry, where should I mention my questions? Do you suggest any special framework?


Sorry, where should I mention my questions?

If they are documentation suggestions and you have a GitHub account, post a comment here: Suggestions for documentation improvements · Issue #2348 · jump-dev/JuMP.jl · GitHub

Otherwise, open a new post on this forum.
