I am using a common portfolio optimization problem (maximizing the Sharpe ratio) to improve my knowledge of non-linear optimization using JuMP and Ipopt. The objective is Max pmean' w / sqrt(w' pcov w) subject to bound and budget constraints. I know one can transform this problem into a convex one, but I want to keep it non-linear for the aforementioned reason.
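For reference, the transformation I mean is the usual change of variables y = w / (pmean' w): minimize y' pcov y subject to (pmean - r)' y == 1 and y >= 0, then recover w = y / sum(y). A rough sketch, reusing the pmean, pcov, r and n defined below and assuming the optimal portfolio has a positive expected excess return (the names convex, y and w_convex are just for the sketch):

convex = Model(Ipopt.Optimizer)
@variable(convex, y[1:n] >= 0)                 # scaled weights y = w / (pmean' w)
@constraint(convex, sum((pmean[i] - r) * y[i] for i in 1:n) == 1)
@objective(convex, Min, sum(y[i] * pcov[i, j] * y[j] for i in 1:n, j in 1:n))
optimize!(convex)
w_convex = value.(y) ./ sum(value.(y))         # map back to portfolio weights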
In the problem below, I am getting a hugely different solution [0.5297239911794952, 0.11995753757471644, 0.35031847124578847] from the solution obtained with a commercial solver [0.552311434687384, 0.189781021718651, 0.257907543594164]. The optimised Sharpe ratio (4.385548808083498) is also lower than the one computed by the commercial solver (4.494859481172453).
using JuMP, Ipopt

pmean = [0.3, 0.1, 0.5]                                               # expected returns
pcov = [0.01 -0.010 0.004; -0.010 0.040 -0.002; 0.004 -0.002 0.023]   # covariance matrix
r = 0.0                                                               # risk-free rate
n = 3                                                                 # number of assets
model = Model(Ipopt.Optimizer)
@variable(model, w[1:n] >= 0)
@constraint(model, sum(w) == 1)
@NLobjective(model, Max, sum(w[i] * (pmean[i] - r) for i in 1:n) / sqrt(sum(w[i] * pcov[i,j] * w[j] for i in 1:n, j in i:n)))
optimize!(model)
println(value.(w), " ", objective_value(model))
Since I can't find anything obviously stupid with my problem setup, I am wondering if it would be possible to improve convergence by changing some settings in JuMP (while keeping the non-linear nature of the problem). Thank you in advance!
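For concreteness, the kind of tweaks I had in mind are along these lines ("tol" and "max_iter" are standard Ipopt options; I am only guessing which, if any, would actually help here):

set_optimizer_attribute(model, "tol", 1e-12)      # tighten Ipopt's convergence tolerance
set_optimizer_attribute(model, "max_iter", 5000)  # allow more iterations
set_start_value.(w, 1.0 / n)                      # start from the equal-weight portfolio
optimize!(model)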
The aforementioned method appears not to be available anymore. Is there some general method to make this kind of problem more amenable to JuMP/Ipopt? Thanks in advance!
Ipopt works best if the problem is convex and differentiable. Modeling is an art, and it largely comes down to knowing a bag of tricks that you can deploy in different situations.
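One example of such a trick for this particular objective (just a sketch, and only one of several options; the name sigma and the 1e-6 bound are arbitrary): pull the standard deviation into an auxiliary variable bounded away from zero, so the objective contains no sqrt and no division by an expression that can approach zero.

@variable(model, sigma >= 1e-6)   # stand-in for the portfolio standard deviation
@NLconstraint(model, sigma^2 == sum(w[i] * pcov[i, j] * w[j] for i in 1:n, j in 1:n))
@NLobjective(model, Max, sum(w[i] * (pmean[i] - r) for i in 1:n) / sigma)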
Without thinking too deeply about hidden consequences, you probably want to add a new "w" variable with a 0 rate of return and bounds of 0.1 and 0.3.
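Something along these lines (untested; the zero-return asset and the 0.1/0.3 bounds are just what is described above, and the names model2 and cash are only for the sketch):

model2 = Model(Ipopt.Optimizer)
@variable(model2, w[1:n] >= 0)
@variable(model2, 0.1 <= cash <= 0.3)        # new asset with a 0 rate of return
@constraint(model2, sum(w) + cash == 1)      # budget now includes the cash position
@NLobjective(model2, Max,
    (sum(w[i] * (pmean[i] - r) for i in 1:n) + (0 - r) * cash) /      # cash earns a 0 rate of return
    sqrt(sum(w[i] * pcov[i, j] * w[j] for i in 1:n, j in 1:n)))       # assume cash has zero variance and covariance
optimize!(model2)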