JuMP Non-Linear Optimization

I am trying to optimize the equation below, but it throws an error.

sqrt(transpose(x) * [0.00076204 0.00051972; 0.00051972 0.00546173] * x)
model = Model(HiGHS.Optimizer)

@variable(model, x[1:2])
@NLobjective(model, Min, sqrt(transpose(x) * [0.00076204 0.00051972; 0.00051972 0.00546173] * x))

@NLobjective(…) is throwing the below error.

ERROR: Unexpected array VariableRef[x[1], x[2]] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.

I set x[1:2] here, but there may potentially be more than 2 variables. I also tried a different solver (Ipopt) but got the same error.

Would you mind helping me with this issue?

I appreciate any help you can provide.


See this part of the documentation: Nonlinear Modeling · JuMP

You can either construct the quadratic term independently:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])
A = [0.00076204 0.00051972; 0.00051972 0.00546173]
@expression(model, quad_expr, x' * A * x)
@NLobjective(model, Min, sqrt(quad_expr))

Or write out the scalarized summation:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
A = [0.00076204 0.00051972; 0.00051972 0.00546173]
N = size(A, 1)
@variable(model, x[1:N])
@NLobjective(model, Min, sqrt(sum(x[i] * A[i, j] * x[j] for i in 1:N, j in 1:N)))
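As a quick sanity check that the two formulations agree, you can compare the matrix form and the scalarized double sum with plain arrays (no JuMP involved; the value of x here is arbitrary):

```julia
# Numeric check: x' * A * x and the scalarized double sum are the same quadratic.
A = [0.00076204 0.00051972; 0.00051972 0.00546173]
x = [0.4, 0.6]  # arbitrary test point
v1 = x' * A * x
v2 = sum(x[i] * A[i, j] * x[j] for i in 1:2, j in 1:2)
v1 ≈ v2  # true
```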

@odow
Thank you so much for your very prompt answer. That works perfectly; I had been trying to solve it for the last 5 hours :). Just one more question, and my apologies for writing in installments: I want to add a constraint that the sum of the elements of x equals 1, and that each individual element is greater than 0 and less than 1.

@NLconstraint(model, sum(x) = 1)
@NLconstraint(model, 0 <= x <= 1)

I hope it makes sense. Is there any way to implement this too?

Thank you so much for your attention and participation.
Best regards
Mehmet


Here you go:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x[1:2] <= 1)
@constraint(model, sum(x) == 1)
A = [0.00076204 0.00051972; 0.00051972 0.00546173]
@expression(model, quad_expr, x' * A * x)
@NLobjective(model, Min, sqrt(quad_expr))

Thanks very much. Works like a charm!


@odow,
Thanks for helping me with the optimization problem earlier (Feb 3), and thank you for JuMP in the first place. It's a great package.

JuMP.jl is a dependency of PortfolioAnalytics.jl, which I am actively developing. I will soon make a second release, want to improve the portfolio optimization function in it, and was wondering if you could sanity-check the function below.

The function either minimizes the portfolio variance or maximizes the Sharpe ratio. I want users to be able to define (if they want) a target portfolio mean return so that they can move along the efficient frontier.

Sorry for the ugly pictures!

The output looks reasonable, but I am unsure if it is doing what I think it is.

I am placing the code here - I hope it will be easy to reproduce.

using JuMP
using Ipopt
using Statistics
using TSFrames
using NamedArrays

bond = [0.06276629, 0.03958098, 0.08456482,0.02759821,0.09584956,0.06363253,0.02874502,0.02707264,0.08776449,0.02950032]
stock = [0.1759782,0.20386651,0.21993588,0.3090001,0.17365969,0.10465274,0.07888138,0.13220847,0.28409742,0.14343067]

R = TSFrame([bond stock], colnames = [:bond, :stock])


function PortfolioOptimize(R::TSFrame, objective::String = "minumum variance"; target = nothing, Rf::Number = 0)

    means = mean.(eachcol(Matrix(R)))
    covMatrix = cov(Matrix(R))

    model = Model(Ipopt.Optimizer)
    # set_silent(model)

    @variable(model, 0 <= w[1:size(R)[2]] <= 1)
    @constraint(model, sum(w) == 1)

    if objective == "minumum variance"
        @expression(model, quad_expr, w' * covMatrix * w)
        @NLobjective(model, Min, sqrt(quad_expr))
        if target !== nothing
            @NLconstraint(model, sum(w[i] * means[i] for i in 1:size(R)[2]) >= target)
        end
    elseif objective == "maximum sharpe"
        # The objective must be expressed in scalars or registered expressions.
        # Either write out the scalarized form:
        # N = size(covMatrix, 1)
        # @NLobjective(model, Max, (sum(w[i] * means[i] for i in 1:N) - Rf) / sqrt(sum(w[i] * covMatrix[i, j] * w[j] for i in 1:N, j in 1:N)))
        # or build the expressions first:
        @expression(model, mean_expr, w' * means)
        @expression(model, quad_expr, w' * covMatrix * w)
        @NLobjective(model, Max, (mean_expr - Rf) / sqrt(quad_expr))
        if target !== nothing
            @NLconstraint(model, mean_expr >= target)
        end
    else
        throw(ArgumentError("one of the available objectives must be chosen"))
    end

    optimize!(model)

    portobj = objective_value(model)
    weights = value.(w)
    pmreturn = weights' * means

    colnames = names(R)  # used only for naming the array
    weights = NamedArray(weights, colnames, "Optimal Weights")

    return (preturn = pmreturn, obj = portobj, pweights = weights)
end

opt1 = PortfolioOptimize(R, "minumum variance", target = 0.1)
opt2 = PortfolioOptimize(R, "maximum sharpe", target = 0.15)
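One possible sanity check for the minimum-variance branch (my suggestion, not part of the function above): when the target constraint is inactive and no 0–1 bound binds, the fully invested minimum-variance weights have the closed form w = Σ⁻¹1 / (1'Σ⁻¹1), so you can compare the solver's output against it:

```julia
using Statistics, LinearAlgebra

# Closed-form fully invested minimum-variance weights, using the same data.
bond = [0.06276629, 0.03958098, 0.08456482, 0.02759821, 0.09584956,
        0.06363253, 0.02874502, 0.02707264, 0.08776449, 0.02950032]
stock = [0.1759782, 0.20386651, 0.21993588, 0.3090001, 0.17365969,
         0.10465274, 0.07888138, 0.13220847, 0.28409742, 0.14343067]
Σ = cov([bond stock])
w = Σ \ ones(2)   # solve Σw ∝ 1
w ./= sum(w)      # rescale so weights sum to 1
# w should match the solver's weights when neither the target nor a bound binds.
```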

You can actually solve this using the new multi-objective support in JuMP: Multi-objective knapsack · JuMP

using JuMP
import DataFrames
import Gurobi
import MultiObjectiveAlgorithms as MOA
import Plots
import Statistics
df = DataFrames.DataFrame(
    bond = [0.06276629, 0.03958098, 0.08456482,0.02759821,0.09584956,0.06363253,0.02874502,0.02707264,0.08776449,0.02950032],
    stock = [0.1759782,0.20386651,0.21993588,0.3090001,0.17365969,0.10465274,0.07888138,0.13220847,0.28409742,0.14343067],
)
R = Matrix(df)
μ = vec(Statistics.mean(R; dims = 1))
Q = Statistics.cov(R)
model = Model(() -> MOA.Optimizer(Gurobi.Optimizer))
set_optimizer_attribute(model, MOA.Algorithm(), MOA.EpsilonConstraint())
set_optimizer_attribute(model, MOA.EpsilonConstraintStep(), 0.0001)
set_silent(model)
@variable(model, 0 <= w[1:size(R, 2)] <= 1)
@constraint(model, sum(w) == 1)
@variable(model, 0 <= variance <= sum(Q))
@constraint(model, variance >= w' * Q * w)
@expression(model, mean_return, μ' * w)
@objective(model, Min, [variance, -mean_return])
optimize!(model)
solution_summary(model)
Plots.scatter(
    [value(variance; result = i) for i in 1:result_count(model)],
    [value(mean_return; result = i) for i in 1:result_count(model)];
    xlabel = "Variance",
    ylabel = "Return",
    label = "",
)

Because it’s new there are a couple of bugs with Ipopt that I need to fix, so you’ll need to use Gurobi for now.


Thanks very much for the answer. Could you please let me know which version of Gurobi you use? It fails to load for some reason. I'm using Julia version 1.8.5.

julia> import Gurobi
[ Info: Precompiling Gurobi [2e9cd046-0924-5485-92f1-d5272153d98b]
ERROR: LoadError: SystemError: opening file "C:\\Users\\ploot\\.julia\\packages\\Gurobi\\FliRK\\deps\\deps.jl": No such file or directory
Stacktrace:
  [1] systemerror(p::String, errno::Int32; extrainfo::Nothing)
    @ Base .\error.jl:176
  [2] #systemerror#80
    @ .\error.jl:175 [inlined]
  [3] systemerror
    @ .\error.jl:175 [inlined]
  [4] open(fname::String; lock::Bool, read::Nothing, write::Nothing, create::Nothing, truncate::Nothing, append::Nothing)
    @ Base .\iostream.jl:293
  [5] open
    @ .\iostream.jl:275 [inlined]
  [6] open(f::Base.var"#387#388"{String}, args::String; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Base .\io.jl:382
  [7] open
    @ .\io.jl:381 [inlined]
  [8] read
    @ .\io.jl:462 [inlined]
  [9] _include(mapexpr::Function, mod::Module, _path::String)
    @ Base .\loading.jl:1484
 [10] include(mod::Module, _path::String)
    @ Base .\Base.jl:419
 [11] include(x::String)
    @ Gurobi C:\Users\ploot\.julia\packages\Gurobi\FliRK\src\Gurobi.jl:7
 [12] top-level scope
    @ C:\Users\ploot\.julia\packages\Gurobi\FliRK\src\Gurobi.jl:14
 [13] include
    @ .\Base.jl:419 [inlined]
 [14] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt64}}, source::Nothing)
    @ Base .\loading.jl:1554
 [15] top-level scope
    @ stdin:1
in expression starting at C:\Users\ploot\.julia\packages\Gurobi\FliRK\src\Gurobi.jl:7
in expression starting at stdin:1
ERROR: Failed to precompile Gurobi [2e9cd046-0924-5485-92f1-d5272153d98b] to C:\Users\ploot\.julia\compiled\v1.8\Gurobi\jl_BBC4.tmp.
Stacktrace:
  [1] error(s::String)
    @ Base .\error.jl:35
  [2] compilecache(pkg::Base.PkgId, path::String, internal_stderr::IO, internal_stdout::IO, keep_loaded_modules::Bool)
    @ Base .\loading.jl:1707
  [3] compilecache
    @ .\loading.jl:1651 [inlined]
  [4] _require(pkg::Base.PkgId)
    @ Base .\loading.jl:1337
  [5] _require_prelocked(uuidkey::Base.PkgId)
    @ Base .\loading.jl:1200
  [6] macro expansion
    @ .\loading.jl:1180 [inlined]
  [7] macro expansion
    @ .\lock.jl:223 [inlined]
  [8] require(into::Module, mod::Symbol)
    @ Base .\loading.jl:1144
  [9] eval
    @ .\boot.jl:368 [inlined]
 [10] eval
    @ .\Base.jl:65 [inlined]
 [11] repleval(m::Module, code::Expr, #unused#::String)
    @ VSCodeServer c:\Users\ploot\.vscode\extensions\julialang.language-julia-1.38.2\scripts\packages\VSCodeServer\src\repl.jl:222
 [12] (::VSCodeServer.var"#107#109"{Module, Expr, REPL.LineEditREPL, REPL.LineEdit.Prompt})()
    @ VSCodeServer c:\Users\ploot\.vscode\extensions\julialang.language-julia-1.38.2\scripts\packages\VSCodeServer\src\repl.jl:186
 [13] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging .\logging.jl:511
 [14] with_logger
    @ .\logging.jl:623 [inlined]
 [15] (::VSCodeServer.var"#106#108"{Module, Expr, REPL.LineEditREPL, REPL.LineEdit.Prompt})()
    @ VSCodeServer c:\Users\ploot\.vscode\extensions\julialang.language-julia-1.38.2\scripts\packages\VSCodeServer\src\repl.jl:187
 [16] #invokelatest#2
    @ .\essentials.jl:729 [inlined]
 [17] invokelatest(::Any)
    @ Base .\essentials.jl:726
 [18] macro expansion
    @ c:\Users\ploot\.vscode\extensions\julialang.language-julia-1.38.2\scripts\packages\VSCodeServer\src\eval.jl:34 [inlined]
 [19] (::VSCodeServer.var"#61#62")()
    @ VSCodeServer .\task.jl:484

Gurobi requires a license, so you probably didn’t install it correctly. Try import Pkg; Pkg.build("Gurobi").
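For reference, a minimal rebuild sketch, assuming Gurobi itself is already installed and licensed on the machine (the GUROBI_HOME path below is a hypothetical example; point it at your own installation):

```julia
# Rebuild the Gurobi.jl wrapper so it regenerates deps/deps.jl.
# GUROBI_HOME must point at your local Gurobi installation (hypothetical path):
# ENV["GUROBI_HOME"] = "C:\\gurobi1000\\win64"
import Pkg
Pkg.build("Gurobi")
```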

Otherwise, if you wait an hour, you’ll be able to use Ipopt. I have it working locally, just adding tests and then I’ll push the fix.

Once New version: MultiObjectiveAlgorithms v0.1.3 by JuliaRegistrator · Pull Request #78209 · JuliaRegistries/General · GitHub is merged, you can update your packages to install MultiObjectiveAlgorithms v0.1.3. (It can sometimes take a while for the package servers to update, so if it doesn’t update to the new version, try again in an hour or so.)

Then this will work:

using JuMP
import DataFrames
import Ipopt
import MultiObjectiveAlgorithms as MOA
import Plots
import Statistics
df = DataFrames.DataFrame(
    bond = [0.06276629, 0.03958098, 0.08456482,0.02759821,0.09584956,0.06363253,0.02874502,0.02707264,0.08776449,0.02950032],
    stock = [0.1759782,0.20386651,0.21993588,0.3090001,0.17365969,0.10465274,0.07888138,0.13220847,0.28409742,0.14343067],
)
R = Matrix(df)
μ = vec(Statistics.mean(R; dims = 1))
Q = Statistics.cov(R)
model = Model(() -> MOA.Optimizer(Ipopt.Optimizer))
set_optimizer_attribute(model, MOA.Algorithm(), MOA.EpsilonConstraint())
set_optimizer_attribute(model, MOA.SolutionLimit(), 50)
set_silent(model)
@variable(model, 0 <= w[1:size(R, 2)] <= 1)
@constraint(model, sum(w) == 1)
@expression(model, variance, w' * Q * w)
@expression(model, expected_return, μ' * w)
@objective(model, Min, [variance, -expected_return])
optimize!(model)
solution_summary(model)
Plots.scatter(
    [value(variance; result = i) for i in 1:result_count(model)],
    [value(expected_return; result = i) for i in 1:result_count(model)];
    xlabel = "Variance",
    ylabel = "Return",
    label = "",
)

It’s a nice example, so I will add it to the JuMP documentation: [docs] add multiobjective portfolio example by odow · Pull Request #3227 · jump-dev/JuMP.jl · GitHub


The tutorial in [docs] add multiobjective portfolio example by odow · Pull Request #3227 · jump-dev/JuMP.jl · GitHub has some additional code that helps you plot the allocation of each stock over the frontier.

It can produce graphs like the ones shown in that pull request.


Thank you so much. It’s very kind of you …

I’m very excited. I am testing it now, and hopefully, I will complete it tomorrow and share the result here. Will definitely add the plotting functionality to the function definition so that users can display it. The function will become way more advanced than I was ever imagining.

Once more, many thanks.


Thanks very much, @odow,

It works perfectly; my understanding is that it gives the maximum expected_return for each level of variance, which is essentially the efficient frontier.

Before releasing the next version of PortfolioAnalytics.jl I want to ask one more question.

Is it possible to add a non-linear expression, in case I want to add new functionality to the function? I want to do the same but with different objectives: obtaining the maximum expected_return for a given Sharpe ratio, defined as expected_return / sqrt(variance), assuming the risk-free rate is zero.

For example, it fails when I add the expression and objective below. I believe it is because of the square root. I think there is a specific syntax for MultiObjectiveAlgorithms, but I could not find it in the documentation.

@NLexpression(model, sharpe, expected_return / sqrt(variance))
@NLobjective(model, Max, [sharpe, expected_return])

Or should I use the output of this optimization, which will automatically give the expected_return per level of Sharpe something like the one below? But I am still curious whether there is any possible way of defining non-linear expressions and objectives.

[value(expected_return; result = i) for i in 1:result_count(model)] ./ sqrt.([value(variance; result = i) for i in 1:result_count(model)])


Thank you

You can’t define a vector-valued nonlinear objective in JuMP yet (although I have plans to fix this).

You can work around the issue using an epigraph variable: instead of max f(x), use max t subject to t <= f(x).
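The substitution is exact because, for any fixed x, the best feasible t is f(x) itself. A no-solver toy check of that equivalence (hypothetical objective f, maximized over a grid):

```julia
# Epigraph reformulation check: max f(x) has the same optimal value as
# max t subject to t <= f(x).
f(x) = x * (2 - x)                   # hypothetical concave objective
xs = 0.0:0.01:2.0
direct = maximum(f(x) for x in xs)   # maximize f directly
# Enumerate feasible (x, t) pairs with t <= f(x) on the same grid:
feasible_t = [t for x in xs for t in 0.0:0.01:f(x)]
epigraph = maximum(feasible_t)
direct == epigraph                   # true; both equal f(1.0) = 1.0
```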

model = Model(() -> MOA.Optimizer(Ipopt.Optimizer))
set_optimizer_attribute(model, MOA.Algorithm(), MOA.EpsilonConstraint())
set_optimizer_attribute(model, MOA.SolutionLimit(), 50)
set_silent(model)
@variable(model, 0 <= w[1:size(R, 2)] <= 1)
@constraint(model, sum(w) == 1)
@expression(model, variance, w' * Q * w)
@expression(model, expected_return, μ' * w)
@variable(model, sharpe)
@NLconstraint(model, sharpe <= expected_return / sqrt(variance))
@objective(model, Max, [sharpe, expected_return])
optimize!(model)
solution_summary(model)

@odow, I am just writing to THANK YOU.

Everything is way beyond what I imagined. What follows is just a demonstration of my excitement. Hopefully, starting from next week, the wider Julia community will be able to enjoy this and other financial functions for quantitative portfolio analytics.

Thanks very much again. I wouldn't have been able to do it without your help and JuMP.jl.

opt1 = PortfolioOptimize(R, "minumum variance")
opt1.plt


opt2 = PortfolioOptimize(R, "maximum sharpe", target = 0.04)
opt2.plt


julia> opt_weights = opt2.pweights
2-element Named Vector{Float64}
Optimal Weights  │
─────────────────┼───────
bond             │ 0.6481
stock            │ 0.3519

Plots.bar(names(opt_weights), opt_weights, labels = false)
