I’m trying to use Julia (0.5) and Convex.jl (with the ECOS solver) to figure out, given a portfolio of 2 stocks, how to distribute my allocations (in percent) across both stocks so that I maximize my portfolio return and minimize my risk (std dev of returns). In other words, I want to MAXIMIZE the Sharpe ratio, which is calculated from the percentages I hold in each of the 2 stocks, and have the solver figure out the optimal allocation (I want it to tell me I need x% of stock 1 and 1-x% of stock 2). The only real constraint is that the percent allocations sum to 100%. I have code below that runs, but it does not give me the optimal weights/allocations I’m expecting (36.3% for Supertech & 63.7% for Slowpoke); the solver instead comes back with 50/50.

My intuition is that I either have the objective function modeled incorrectly for the solver, or I need to do more with constraints. I don’t have a good grasp on convex optimization so I’m winging it. Also, my objective function uses the variable.value attribute to get the correct output and I suspect I need to be working with the Variable expression object instead.

My question is: is what I’m trying to achieve something Convex.jl is designed for, so that I just have to model the objective function and constraints better, or do I have to iterate over the weights and brute-force it?

Code with comments:

using Convex, ECOS
Supertech = [-.2; .1; .3; .5];
Slowpoke = [.05; .2; -.12; .09];
A = reshape([Supertech; Slowpoke],4,2)
mlen = size(A)[1]
R = vec(mean(A,1))
n=rank(A)
w = Variable(n)
c1 = sum(w) == 1;
λ = .01
w.value = [λ; 1-λ]
sharpe_ratio = sqrt(mlen) * (w.value' * R) / sqrt(sum(vec(w.value' .* w.value) .* vec(cov(A,1,false))))
# sharpe_ratio will be maximized at 1.80519 when w.value = [λ, 1-λ] where λ = .363
p = maximize(sharpe_ratio,c1);
solve!(p, ECOSSolver(verbose = false)); # when verbose=true, it says the problem is 'degenerate' because I don't have enough constraints...
println(w.value) # expecting to get [.363; .637] here but I get [0.5; 0.5]
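For reference, the 36.3%/63.7% target can be confirmed by brute force before involving a solver. A minimal scan (sketched in Python/NumPy rather than Julia, purely as a cross-check of the numbers above, using the same Sharpe expression `sqrt(mlen) * mean / std` with the population covariance, i.e. the `cov(A,1,false)` in the Julia code):

```python
import numpy as np

A = np.array([[-0.20,  0.05],
              [ 0.10,  0.20],
              [ 0.30, -0.12],
              [ 0.50,  0.09]])          # columns: Supertech, Slowpoke
mlen = A.shape[0]
mu = A.mean(axis=0)                      # per-stock expected returns
V = np.cov(A, rowvar=False, bias=True)   # population covariance, like cov(A,1,false)

lams = np.linspace(0.0, 1.0, 100001)     # candidate Supertech allocations
W = np.column_stack([lams, 1.0 - lams])  # each row: one candidate allocation
variances = np.einsum("ij,jk,ik->i", W, V, W)  # w' V w for every row
sharpe = np.sqrt(mlen) * (W @ mu) / np.sqrt(variances)

i = np.argmax(sharpe)
best_lam, best_sr = lams[i], sharpe[i]
print(best_lam, best_sr)                 # ≈ 0.363, 1.80519
```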

If you are interested, I did something similar in C++ using QuadProg++, a library that implements the algorithm of Goldfarb and Idnani for solving a (convex) quadratic programming problem by means of an active-set dual method.
I didn’t know Julia yet at that time… I plan to port it to Julia, but I’m too busy at the moment.

Hi Antonello,
I’ve got something working WITHOUT convex optimization currently (in Julia) but it is brute force. I may download your C++ code and see if I can get that working. Thanks for that! If I figure it out in Julia, I’ll drop you a line.

You don’t have to predefine the value of the variable w. Because sharpe_ratio is computed from w.value (a plain array), the objective is just a constant number, so the solver is effectively only solving a feasibility problem for the constraint you specified; any allocation that sums to 1, such as 50/50, satisfies it.

Now, let’s assume that w is a variable. The objective function sharpe_ratio is not DCP-compliant, so Convex.jl won’t be able to reach the correct solution as written. But there are methods to convert the Sharpe ratio into a DCP-compliant form. I will be posting about it once I go through the theory.

Meanwhile, you can go through the portfolio optimization examples in Convex.jl for your reference.

Let μ be the expected returns and V the variance matrix.
Maximizing the Sharpe ratio is the problem

Find w
To maximize μ' w / sqrt( w' V w )
Such that 1' w = 1
w ≥ 0

It is not a convex problem, but can be transformed into one as follows.
Notice that the objective is homogeneous of degree zero: scaling w by any c > 0 leaves it unchanged, so we can replace the normalization constraint 1'w = 1 with another one and rescale the solution at the end. For instance, let us use μ'w = 1 to get rid of the numerator in the objective. The problem becomes

Find w
To maximize 1 / sqrt( w' V w )
Such that μ' w = 1
w ≥ 0

or, equivalently,

Find w
To minimize w' V w
Such that μ' w = 1
w ≥ 0

The problem is now convex.

using Convex
A = hcat( [-.2, .1, .3, .5], [.05, .2, -.12, .09] )
μ = mean(A,1)[1,:]
V = cov(A)
n = length(μ)
w = Variable(n)
p = minimize( quadform(w,V), [ μ' * w == 1; w >= 0 ] )
solve!(p)
w.value / sum(w.value) # [ 0.363029, 0.636971 ]
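As an aside, this particular answer can also be cross-checked without a solver: the w ≥ 0 constraint turns out to be slack at this optimum, so the KKT conditions of the equality-constrained problem reduce to w being proportional to V⁻¹μ, and normalizing recovers the allocation. A quick check (in Python/NumPy, just to confirm the numbers):

```python
import numpy as np

A = np.array([[-0.20,  0.05],
              [ 0.10,  0.20],
              [ 0.30, -0.12],
              [ 0.50,  0.09]])
mu = A.mean(axis=0)
V = np.cov(A, rowvar=False)   # sample covariance, like Julia's cov(A);
                              # any positive rescaling of V cancels below

# KKT conditions of  min w'Vw  s.t.  mu'w = 1  give  w proportional to inv(V)*mu
# (valid here because the w >= 0 constraint is not active at the optimum)
w = np.linalg.solve(V, mu)
w = w / w.sum()               # rescale so the allocations sum to 100%
print(w)                      # ≈ [0.363029, 0.636971]
```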

I’ve got a standard MBA background with the standard Calculus, Statistics (business), etc., plus undergrad Discrete Math, and I am now armed (and dangerous) with a modern Linear Algebra text.

Can you recommend a learning path to follow to better understand Convex Optimization short of getting a PhD in maths?

You can download a pdf version of the standard Boyd and Vandenberghe Convex Optimization textbook at Convex Optimization – Boyd and Vandenberghe. There are links there to video lectures on YouTube that are getting kind of old now; I thought there might be newer videos somewhere, but I’m not finding them in a quick search.

As a general reference for convex programming, I like Lectures on modern convex optimization (A. Ben-Tal and A. Nemirovski, 2015): the authors present the various types of cone programming (most convex optimization problems encountered in practice are “cone programs”) and explain how to transform many problems into this form.

Anything on disciplined convex programming (which is what is behind Convex.jl) may be easier to read and more directly applicable.