Storage of a diagonal matrix (and best way to update)?

I have a typical situation,

P is an m x n matrix
Q is an m x m matrix, but only has diagonal elements, i.e.

Q[i,i] = q[i]
Q[i,j] = 0,  i != j

and eventually I have to multiply Q and P,

R = Q * P

Naturally, m in my situation is fairly large, ~1000, so if a Diagonal were actually stored as a full Matrix it would take a million elements. I really expect that it is NOT stored that way, but rather as a Diagonal type wrapping a vector of 1000 elements, i.e. using only 1000 memory locations.
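That expectation is easy to check directly; a small sketch (using `Base.summarysize` to measure the total memory reachable from each object):

```julia
using LinearAlgebra

m = 1000
q = rand(m)
Q = Diagonal(q)

# Diagonal stores only the m diagonal entries plus a thin wrapper,
# nowhere near the m^2 * 8 bytes a dense Matrix{Float64} would need.
println(Base.summarysize(q))   # ~8 KB for the vector
println(Base.summarysize(Q))   # roughly the same, not ~8 MB
```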

The values of Q are going to be updated on every iteration, so allocate-once-and-update is the way to go. There are two ways to do this.

Create the diagonal matrix,

Q = Diagonal(q)

and then update directly,

for i = 1:m
    Q[i, i] = update(i)
end
R = Q * P

or update the base vector and create the Diagonal again before multiplying,

for i = 1:m
    q[i] = update(i)
end

R = Diagonal(q) * P

which of course seems like a bad idea, since I’m creating the Diagonal instance on every iteration…

I’m really expecting the right answer to be “directly update the Diagonal matrix Q”, just wanted to be sure.

I don’t think it’ll matter. Diagonal(q) doesn’t make a copy of q — it’s just a simple wrapper that exposes the vector as a matrix with that vector on the diagonal. These sorts of things are very frequently no-cost abstractions. I’d just do whichever feels most natural.
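A quick way to see the no-copy behavior (the `diag` field holds the very same vector you passed in):

```julia
using LinearAlgebra

q = [1.0, 2.0, 3.0]
Q = Diagonal(q)

q[2] = 99.0          # mutate the underlying vector...
Q[2, 2]              # ...and Q sees the change: 99.0
Q.diag === q         # true: Q holds a reference to q itself, not a copy
```

So the two update styles in the question write to exactly the same memory.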


As @mbauman says, for D = Diagonal(q), if you update the q vector it will update D (since D just refers to q without making a copy).

If you care about this sort of thing, you should also consider using mul!(R, Q, P) so that you perform the matrix multiplication in-place.


Thanks, that’s a very important detail. This is an iterative algorithm so I will be multiplying a fairly large matrix many times.

Which reminds me, there’s also an svd! routine.
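For completeness, svd! works the same way as the other bang functions: it reuses the input array’s storage instead of allocating a fresh workspace, destroying the input in the process. A minimal sketch:

```julia
using LinearAlgebra

A  = rand(5, 3)
A0 = copy(A)          # keep a copy only to verify the result below
F  = svd!(A)          # factorizes in place; A's contents are overwritten

# The factorization still reconstructs the original matrix:
F.U * Diagonal(F.S) * F.Vt ≈ A0   # true
```

Only use the in-place version when you no longer need the original matrix.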