# Hugely different solutions between JuMP/Ipopt and a commercial solver

I am using a common portfolio optimization problem (maximizing the Sharpe ratio) to improve my knowledge of non-linear optimization with JuMP and Ipopt. The objective is `Max pmean' w / sqrt(w' pcov w)` subject to bound and budget constraints. I know one can transform this problem into a convex one, but I want to keep it non-linear for the aforementioned reason.

In the problem below, I am getting a hugely different solution `[0.5297239911794952, 0.11995753757471644, 0.35031847124578847]` from the one obtained with a commercial solver, `[0.552311434687384, 0.189781021718651, 0.257907543594164]`. The optimised Sharpe ratio (4.385548808083498) is also lower than the one computed by the commercial solver (4.494859481172453).

```julia
pmean = [0.3, 0.1, 0.5]
pcov = [0.01 -0.010 0.004; -0.010 0.040 -0.002; 0.004 -0.002 0.023]
r = 0.0
n = 3

model = Model(Ipopt.Optimizer)
@variable(model, w[1:n] >= 0)
@constraint(model, sum(w) == 1)
@NLobjective(model, Max, sum(w[i] * (pmean[i] - r) for i in 1:n) / sqrt(sum(w[i] * pcov[i,j] * w[j] for i in 1:n, j in i:n)))
optimize!(model)

println(value.(w), objective_value(model))
```

Since I can't find anything obviously wrong with my problem setup, I am wondering whether it is possible to improve convergence by changing some settings in JuMP (while keeping the non-linear nature of the problem). Thank you in advance!

Ipopt is only guaranteed to find a global optimum if the problem is convex and twice differentiable. The `1 / sqrt(x)` term violates this assumption when `x = 0`.

The conic formulation is much “nicer” to solve: see “Conic quadratic optimization” in the MOSEK Modeling Cookbook.
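Concretely (my summary of the standard trick, not quoted from the Cookbook), the change of variables `y = w / ((pmean - r)'w)` and `z = 1 / ((pmean - r)'w)` (valid when the optimal excess return is positive) turns the Sharpe-ratio maximization into:

```
min  t
s.t. t² ≥ y' Σ y            (epigraph of the portfolio risk)
     Σᵢ yᵢ = z              (scaled budget constraint, since Σᵢ wᵢ = 1)
     (pmean - r)' y = 1     (normalized excess return)
     y ≥ 0, z ≥ 0, t ≥ 0
```

Recover the weights as `w = y / z`; the optimal Sharpe ratio is `1 / t`. This is exactly the model built in `sharpe2` below.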

```julia
julia> function sharpe(pmean, pcov, r)
           n = length(pmean)
           model = Model(Ipopt.Optimizer)
           set_silent(model)
           @variable(model, w[1:n] >= 0)
           @constraint(model, sum(w) == 1)
           @NLobjective(model, Max, sum(w[i] * (pmean[i] - r) for i in 1:n) / sqrt(sum(w[i] * pcov[i,j] * w[j] for i in 1:n, j in i:n)))
           optimize!(model)
           return value.(w), objective_value(model)
       end
sharpe (generic function with 1 method)

julia> function sharpe2(pmean, pcov, r)
           n = length(pmean)
           model = Model(Ipopt.Optimizer)
           set_silent(model)
           @variables(model, begin
               y[1:n] >= 0
               z >= 0
               t >= 0
           end)
           @objective(model, Min, t)
           @constraints(model, begin
               t^2 >= sum(y[i] * pcov[i, j] * y[j] for i = 1:n, j = 1:n)
               sum(y) == z
               sum((pmean[i] - r) * y[i] for i in 1:n) == 1
           end)
           optimize!(model)
           x = value.(y) / value(z)
           return x, (pmean' * x .- r) / sqrt(x' * pcov * x)
       end
sharpe2 (generic function with 1 method)

julia> sharpe(pmean, pcov, r)
([0.5297239911794952, 0.11995753757471644, 0.35031847124578847], 4.385548808083498)

julia> sharpe2(pmean, pcov, r)
([0.5523114349199668, 0.18978102825961438, 0.2579075368204186], 4.494859481172451)
```

Indeed, it is much nicer. Thank you very much @odow!


Now I am wondering: if one changes the budget constraint “slightly” into

`@constraint(model, 0.7 <= sum(w) <= 0.9)` # instead of `@constraint(model, sum(w) == 1)`

the aforementioned method no longer appears to be applicable. Is there some general technique for making this kind of problem more amenable to JuMP/Ipopt? Thanks in advance!


Ipopt works best if the problem is convex and differentiable. Modeling is an art, and it largely comes down to knowing a bag of tricks that you can deploy in different situations.

Without thinking too deeply about hidden consequences, you probably want to add a new “w” variable (a cash asset) with a 0 rate of return and with bounds 0.1 and 0.3.
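A minimal sketch of that suggestion, assuming the conic formulation from `sharpe2` above (the function name `sharpe3`, the augmented `pmean2`/`pcov2` arrays, and the exact bound handling are my own additions, not tested against the commercial solver):

```julia
using JuMP, Ipopt

# Handle 0.7 <= sum(w) <= 0.9 by adding a slack "cash" asset with zero
# excess return and zero (co)variance, so the budget is an equality again
# and the same change of variables as in sharpe2 goes through.
function sharpe3(pmean, pcov, r)
    n = length(pmean)
    # Augment the universe with the cash asset in position n + 1.
    pmean2 = vcat(pmean, r)                    # cash earns exactly r
    pcov2 = [pcov zeros(n); zeros(n)' 0.0]     # cash is riskless
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variables(model, begin
        y[1:n+1] >= 0
        z >= 0
        t >= 0
    end)
    @objective(model, Min, t)
    @constraints(model, begin
        t^2 >= sum(y[i] * pcov2[i, j] * y[j] for i = 1:n+1, j = 1:n+1)
        sum(y) == z                            # all n + 1 weights sum to 1
        sum((pmean2[i] - r) * y[i] for i in 1:n+1) == 1
        # Cash weight between 0.1 and 0.3 of the budget, i.e. the risky
        # weights sum to between 0.7 and 0.9. Interval constraints in JuMP
        # need constant bounds, so write the two sides separately.
        y[n+1] >= 0.1 * z
        y[n+1] <= 0.3 * z
    end)
    optimize!(model)
    x = value.(y)[1:n] / value(z)              # recover the risky weights
    return x, (pmean' * x .- r) / sqrt(x' * pcov * x)
end
```

Note that cash contributes nothing to either the numerator or the denominator of the Sharpe ratio, which is why adding it leaves the objective unchanged while restoring the equality budget.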
