This is more maths than Julia optimisation, but I do not recall whether there is a trick from linear algebra to make this more efficient.
I have a covariance matrix M, of size (2000, 2000), and I want to normalize vectors with respect to this matrix:

function rand_normalized(M)
    e = abs.(randn(size(M, 1)))   # random nonnegative direction
    return e ./ sqrt(e' * M * e)  # scale so that e'M*e == 1
end

I am calling this function inside a loop, so if there is any precomputation depending only on M that could be done once up front, let's do it.

Yeah, I have already done the technical stuff: this runs through LoopVectorization, no allocations, etc. I was thinking more of a math trick. I only showed you a 'readable' version.

If you precompute a Cholesky factorization M = R'R (which exists since a covariance matrix M must be SPD), then you can use norm(R*x) in place of sqrt(x'M*x). This saves roughly half the flops, because R is triangular, so half of its entries are zero.
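A minimal sketch of this idea, assuming a helper `make_normalizer` (the name is mine, not from the thread) that factors M once and returns a closure to call inside the loop:

```julia
using LinearAlgebra

# Precompute the Cholesky factor once, outside the loop: M = R'R with R
# upper triangular (R = cholesky(M).U). Then
#   e'M*e = e'R'R*e = norm(R*e)^2,
# so sqrt(e'M*e) == norm(R*e), and the triangular product R*e costs
# roughly half the flops of the full product M*e.
function make_normalizer(M)
    R = cholesky(Symmetric(M)).U   # factor computed once
    n = size(M, 1)
    return function ()
        e = abs.(randn(n))
        return e ./ norm(R * e)    # same result as e ./ sqrt(e'M*e)
    end
end
```

Usage: `f = make_normalizer(M); v = f()`, after which `v' * M * v ≈ 1`. The closure still allocates; the non-allocating LoopVectorization version mentioned above would reuse buffers, but the math trick is the same.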