I am trying to solve an optimization problem: tracing out the efficient frontier of optimal portfolios by maximizing expected return for each given level of Sharpe ratio. However, optimize! runs indefinitely (though not always; it depends on the data).
The Sharpe ratio is defined as (expected_return - Rf) / sqrt(variance), and I would like to maximize expected_return for a given level of the Sharpe ratio.
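For concreteness, these quantities can be evaluated by hand for any fixed weight vector. Here is a quick sanity check of the formula on the data shown further below (the choice w = [0.5, 0.5] is just an arbitrary example, not part of the model):

```julia
using LinearAlgebra  # for w' (adjoint) on vectors

# data as printed in the REPL output below
μ = [0.006898463772627643, -0.02972609131603086]
Q = [0.030446 0.00393731; 0.00393731 0.00713285]
Rf = 0

w = [0.5, 0.5]                 # arbitrary candidate weights
expected_return = w' * μ       # portfolio mean
variance = w' * Q * w          # portfolio variance
sharpe = (expected_return - Rf) / sqrt(variance)
```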
Any help is much appreciated.
# packages
using JuMP # v1.9.0
using Ipopt # v1.2.0
using MultiObjectiveAlgorithms # v0.1.4
using Statistics
To make the example reproducible, the contents of μ and Q are provided below.
# the content of μ and Q
julia> μ = Vector{Float64}(vec(Statistics.mean(R; dims = 1)))
2-element Vector{Float64}:
0.006898463772627643
-0.02972609131603086
julia> Q = Statistics.cov(R)
2×2 Matrix{Float64}:
0.030446 0.00393731
0.00393731 0.00713285
# Optimization
# R itself is not shown, so use the values printed above directly
μ = [0.006898463772627643, -0.02972609131603086]  # = vec(Statistics.mean(R; dims = 1))
Q = [0.030446 0.00393731; 0.00393731 0.00713285]  # = Statistics.cov(R)
Rf = 0
model = Model(() -> MultiObjectiveAlgorithms.Optimizer(Ipopt.Optimizer))
# set_silent(model)
set_optimizer_attribute(model, MultiObjectiveAlgorithms.Algorithm(), MultiObjectiveAlgorithms.EpsilonConstraint())
set_optimizer_attribute(model, MultiObjectiveAlgorithms.SolutionLimit(), 25)
@variable(model, 0 <= w[1:2] <= 1)
@constraint(model, sum(w) == 1)
@expression(model, variance, w' * Q * w)
@expression(model, expected_return, w' * μ)
@variable(model, sharpe)
@NLconstraint(model, sharpe == (expected_return - Rf) / sqrt(variance))
@objective(model, Max, [expected_return, sharpe])
optimize!(model) # => this call runs indefinitely (sometimes, depending on the data)
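In case it is relevant, one reformulation I have been considering (I am not sure it addresses the hang) is to clear the division out of the Sharpe constraint, since the derivative of the quotient is ill-behaved as variance approaches 0, and to give sharpe a start value. This is only a sketch of the variant, not a confirmed fix:

```julia
using JuMP, Ipopt, MultiObjectiveAlgorithms

μ = [0.006898463772627643, -0.02972609131603086]
Q = [0.030446 0.00393731; 0.00393731 0.00713285]
Rf = 0

model = Model(() -> MultiObjectiveAlgorithms.Optimizer(Ipopt.Optimizer))
set_optimizer_attribute(model, MultiObjectiveAlgorithms.Algorithm(), MultiObjectiveAlgorithms.EpsilonConstraint())
set_optimizer_attribute(model, MultiObjectiveAlgorithms.SolutionLimit(), 25)
@variable(model, 0 <= w[1:2] <= 1)
@constraint(model, sum(w) == 1)
@expression(model, variance, w' * Q * w)
@expression(model, expected_return, w' * μ)
@variable(model, sharpe, start = 0.0)  # start value for the free variable
# same definition as before, but with the denominator cleared
@NLconstraint(model, sharpe * sqrt(variance) == expected_return - Rf)
@objective(model, Max, [expected_return, sharpe])
```

and then optimize!(model) as before.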