Could this be faster using some decomposition?

Hi,

This is more maths than Julia optimisation, but I cannot recall whether there is a trick from the linear algebra books to compute this more efficiently.
I have a covariance matrix M, of size (2000, 2000), and I want to normalize vectors with respect to this matrix:

function rand_normalized(M)
    e = abs.(randn(2000))         # half-normal random vector
    return e ./ sqrt(e' * M * e)  # scale so that e'M*e == 1
end

I am using this function inside a loop, so if there are any precomputations depending on M that we can do, let's do them.

Any ideas?

You could cut down on allocations with

using LinearAlgebra   # for the three-argument dot

function rand_normalized(M)
    e = randn(2000)
    e .= abs.(e)                  # take the absolute value in place
    e .*= 1 / sqrt(dot(e, M, e))  # dot(e, M, e) avoids the M*e temporary
    return e
end

which allocates only one array rather than the four in your code above.

Yeah, I have already done the technical optimisations; the real version runs through LoopVectorization, so no allocations, etc. I was thinking more of a mathematical trick. I only showed you a 'readable' version.

If you precompute a Cholesky factorization M = R'R (since M must be SPD), then you could use norm(R*e) instead. This saves half of the flops, because half of R is zero.
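A minimal sketch of that precomputation (the matrix built from A' * A + I is just an illustrative stand-in for your covariance matrix):

using LinearAlgebra

A = randn(2000, 2000)
M = A' * A + I                 # illustrative SPD covariance matrix
R = cholesky(Symmetric(M)).U   # precompute once: upper-triangular R with M == R'R

function rand_normalized(R)
    e = abs.(randn(size(R, 1)))
    return e ./ norm(R * e)    # e'M*e == norm(R*e)^2; R*e costs half the flops of M*e
end

e = rand_normalized(R)         # now e'M*e ≈ 1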


That's it. Thanks a lot, this is perfect.

Can't we simplify even further using the fact that e is Gaussian? Hmm, maybe not, because of the abs.


Hmm… my M does not seem to be SPD (although it should be!). What could I do with a matrix that is not SPD?
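For reference, a minimal way one might check this (just a sketch, with M standing for the actual matrix):

using LinearAlgebra

isposdef(Symmetric(M))   # false here, which is why the Cholesky factorization fails
eigmin(Symmetric(M))     # the smallest eigenvalue shows how far M is from SPD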