# Repeated multiplication of square matrix

#1

I’m currently implementing the following loop:

```julia
a = computation()
b = similar(a)
b .= a
for i = 2:n        # b starts at a, so n-1 more multiplies give a^n
    b .= a * b
end
```

i.e. `a^n`.

I should mention that `a` is complex, and that the determinant of `a` is 1 to start.
At the conclusion of the computation I need the determinant of a portion of the matrix, the lower-left quadrant. As you would suspect, I'm getting NaN for that result. So after some investigation… I found that the determinant of the product slowly diverges from 1 until it becomes a nonsensical value.

I’m wondering if this is due to a precision problem in the operations, so maybe:

1. there is a Julia function to perform `a^n` (that I am unable to find)
2. I should be trying to do this a different way
3. I shouldn’t be trying to do this at all

I’m trying to recreate the results of a paper. Wouldn’t you know it, the author mentions nothing about a potential problem with the operation…

Thanks!

#2

`a^n` should work.
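For reference, a minimal check that the built-in integer matrix power replaces the hand-written loop (the toy matrix below is invented for illustration):

```julia
using LinearAlgebra

# Julia's `^` on a square matrix with an integer exponent does the repeated
# multiplication for you (via repeated squaring under the hood).
A = ComplexF64[0 1; -1 0]   # toy complex matrix with det(A) = 1
B = A^4                     # equivalent to A*A*A*A
```

Here `A` is a quarter-turn rotation, so `A^4` comes out as the identity.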

#3

lol. Well, duh, and so it does work. Thank you.

Sadly, `det(a^n) = NaN + NaN*im`.

Ugh… this is going to be a bit of a chore to figure out.

The elements of the matrix are definitely getting quite large, which is not unexpected since I’m essentially raising this thing to the Nth power, but I wonder how the author ever got it to work…

#4

Does the matrix have any special structure or properties (e.g. is it Hermitian)? You may be able to do it via some sort of decomposition, or using a matrix exponential (though you need to be fairly careful about numerical stability).

#5

e.g. if it is an orthogonal matrix (since you said the determinant is 1, though that by itself is not a sufficient condition for orthogonality), then you should be able to diagonalize it, and then you can just raise the diagonal elements to the power n.
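A sketch of that diagonalization route, assuming the matrix really is diagonalizable (the rotation matrix below is just a stand-in for illustration):

```julia
using LinearAlgebra

# If A = V * Diagonal(λ) * V⁻¹, then A^n = V * Diagonal(λ.^n) * V⁻¹,
# so only scalar eigenvalues ever get raised to the power n.
A = [0.0 -1.0; 1.0 0.0]          # 90° rotation: orthogonal, det(A) = 1
λ, V = eigen(A)                  # eigenvalues are ±im
An = V * Diagonal(λ .^ 4) * inv(V)
```

For this rotation `A^4` is the identity, and the reconstructed `An` matches it up to roundoff.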

#6

Well, it is quite sparse. Also, the “quadrants” are tridiagonal, i.e. they have terms only on the diagonal and ±1 column off the diagonal.

The author does mention that it is sparse but doesn’t mention any particular technique being used.

Clearly I’m going to have to spend some time refreshing my memory on matrix properties this weekend.

#7

I would also suggest asking the author: they may be able to share the code, or at least provide more info.

#8

Yes, I was going to try that. The paper is from 2002, so that may not work too well, but I’m going to try to see if I can locate him.

#9

Maybe you could renormalize the product after each iteration so that its determinant stays 1, based on the property that \det(cA) = c^m \det(A) for an m \times m matrix A.

#10

The determinant calculations can quickly overflow the maximum floating point value for a matrix of any significant size and magnitude, and the resulting Inf - Inf computations give NaN. Even if the matrix starts out unitary, roundoff errors will make A^n non-unitary exponentially fast. (In practice, one often goes to great lengths to avoid computing large matrix powers explicitly.)
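A tiny illustration of the overflow (matrix and size invented for the demo; entries of just 2 suffice):

```julia
using LinearAlgebra

# det is a product of ~n factors, so even modest entries overflow Float64
# (floatmax ≈ 1.8e308) once the matrix is large enough:
A = Matrix(2.0I, 1100, 1100)
d = det(A)       # 2^1100 ≈ 10^331, which overflows to Inf
```

With complex entries the same overflow produces `Inf - Inf` terms inside the determinant arithmetic, which is where the `NaN + NaN*im` comes from.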

My basic question here is why you are computing the determinant? It is rarely needed for practical calculations. Just as a test of matrix powers?

#11

I agree.

I fully expect that I do not need to use the determinant, however…
the paper (because it’s a paper, grrr…) uses the determinant. I was hoping to calculate the necessary value using the det as a check.

I took the fact that the determinant was coming back NaN to mean that I was probably losing significant accuracy in the matrix multiplications. But maybe not. So if I use an alternate method and get the wrong answer, is it numerical instability in the multiplication process, or is it because my calculation is incorrect?

So I have a plan to check and see if the calculations are correct.
If they are and the det is still not cooperating, I’ll try renormalizing.
If that doesn’t work, I’ll use an alternate method for the calculation.

nothing but work, work, work…

#12

Which paper? In general, it’s helpful if you explain what underlying problem you are trying to solve.

#13

If the matrix is not terribly large, perhaps try an eigenvalue decomposition. The determinant is the product of the eigenvalues, each of which you can raise to the power n first. If the numbers get too large when powering and multiplying, try BigFloat.
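A hedged sketch of that idea (the toy triangular matrix and exponent are invented for the demo): compute the eigenvalues once, promote to `Complex{BigFloat}`, and power those instead of the matrix.

```julia
using LinearAlgebra

# det(A^n) = (λ₁ ⋯ λₘ)ⁿ, so raise the eigenvalues to n in BigFloat precision
# instead of forming A^n (whose det would overflow Float64 long before n = 500).
A = ComplexF64[2 1; 0 3]
n = 500
λ = eigen(A).values
d = prod(Complex{BigFloat}.(λ) .^ n)    # ≈ det(A)^n = 6^500, exactly representable in BigFloat
```

Note the amplification: a relative error ε in an eigenvalue becomes roughly n·ε in the powered result, so this only buys accuracy if the eigenvalues themselves are computed well.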

#14

I tried the renormalization I proposed. It doesn’t work so well, probably because the magnitude of the matrix elements still grows quickly. Here’s my code:

```julia
using LinearAlgebra
using Random
using SparseArrays
using Test

function normalized_matrix_power(A::AbstractMatrix, n::Integer)
    n >= 1 || error("n must be at least 1")
    LinearAlgebra.checksquare(A)
    m = size(A, 1)
    B = copy(A)
    for _ = 2:n
        B *= A
        @show det(B)
        @show maximum(abs, B)
        B ./= det(B)^(1 / m)
        @show det(B)
        println()
    end
    return B
end

rng = MersenneTwister(1)
n = 100
p = 1000 / n^2
A = sprand(rng, n, n, p)
A ./= det(A)^(1 / n)
@test det(A) ≈ 1

B = normalized_matrix_power(A, 15)
@test det(B) ≈ 1
```


#15

No. Probably you are getting NaN because determinants are usually terrible things in a computational setting that you should almost never compute explicitly for large matrices. Both determinants and the intermediate calculations involved can easily over/underflow the maximum/minimum possible floating-point number, resulting in Inf or NaN, even if the underlying matrix is fine. Doing this for A^n is even worse as n grows.

There are lots of examples where the matrix is perfectly accurate, but naively computing determinants is a disaster. The most famous is probably eigenproblems, where trying to compute characteristic polynomials by determinants can lose accuracy exponentially fast if you are not careful, even though the underlying eigenproblem can be solved accurately by better methods for the same matrix.

That’s why I keep asking you to explain what you are trying to do. What paper are you implementing? What problem are you solving? There’s almost certain to be a better approach than computing det(A^n).

#16

I agree that determinants are rarely a good idea to compute, but if it’s just about over/underflow, logdet is a reasonable solution.
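For instance (sizes invented for the demo), `logdet` works from the matrix factorization and never forms the overflowing product:

```julia
using LinearAlgebra

# det overflows, but the log-determinant is a perfectly ordinary number:
A = Matrix(1.5I, 2000, 2000)
det(A)          # 1.5^2000 ≈ 10^352 overflows to Inf
ld = logdet(A)  # 2000 * log(1.5), finite and accurate
```

For complex matrices `logdet` returns a complex value; for real matrices whose determinant may be negative, `logabsdet` returns the log of the magnitude together with the sign.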

#17

Yes, and I keep trying to NOT put two pages of explanation into the forum, lol.

I already know that I don’t need the det as a method to get the answer. I was trying it as a starting point.

I’ll try to provide the condensed version of what’s going on (put on your electrical-engineering hat):

1. Given a printed circuit board, discretize the board so that it can be expressed as per-unit-area lumped-element equivalents.
2. Take the lumped-element equivalent and create what is called a “transmission matrix”.
3. The transmission matrix does exactly what you think it should: you chain N of them to represent the entirety of the board.
4. Take the resultant transmission matrix and use the determinant to calculate the impedance looking into the board at a particular unit cell.

As I said, I was using step 4 simply because that’s how the answer was represented. I KNOW it’s a terrible idea, and it’s a common problem in circuit theory because it’s very easy to represent the answer to these sorts of problems as a calculation of the determinant (with row and column modifications).

The correct answer is to set up solution vectors and cycle through them to get the answer, but that is not a succinct way to put it in papers. Comically, I thought using the det would be easier.

So the real problem is that I’m not 100% sure I’m generating the correct values in the matrix. I know I’m generating the individual values for the matrix correctly, and I know they are in the right locations.

However, the paper’s formulation may be wrong (I went through the algebra this morning and I don’t think it is).

So my next task is more correctness verification. The matrix values should absolutely not blow up, so I have a plan for a reduced set of values so I can explicitly test the value progression as I chain the matrices.

As to why I’m sure the values should not blow up, take a capacitor as a simple example: chain N capacitors and you don’t get a value that is ~ C^N, you get a value that is ~ C·N.

I need to look at a reduced problem space and make sure that’s what is happening.

Safe to say, I’ve learned my lesson. Stay away from determinants!

#18

Maybe just link to the paper you’re trying to implement?

#19

Because I thought I had specific numerical (and Julia-related) questions, and really, my questions were answered:

1. Should I be using a special function to do repeated multiplications? The answer seems to be no; beyond the built-in `a^n`, there is no specialized routine for cascaded multiplications.
2. Determinants are bad, and can be bad even if the matrix data appears good. So I’m dispensing with the determinant as the calculation method and going to plan B.

Really, the rest is up to me. I’ve made some progress on checking my matrix construction, and now understand why some of my values seem to be growing larger even though it didn’t seem like they should (per my example with the capacitor). I’m still finding that result very unintuitive, so I have more work to do…

However if somebody really wants to see what I’m trying to do…

#20

You could easily run into an intrinsic problem here: the product of many transmission matrices (T matrices) tends to be extremely ill-conditioned, which makes it hard to get accurate answers by any method. Instead, I would typically recommend using scattering matrices (S matrices). A wrinkle is that S matrices can’t be combined just by ordinary matrix multiplication, but are combined with what some authors call a Redheffer star product (basically just an easy recombination of submatrices).

(I run into this all the time in Maxwell simulations, and switching to S matrices is a standard approach since T matrices are a disaster for many layers.)
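A minimal sketch of the Redheffer star product, assuming 2-port S-matrices stored as four equal-size blocks (`S11`, `S12`, `S21`, `S22`); the block formulas below follow the common RCWA-style convention, so check signs and ordering against your own reference:

```julia
using LinearAlgebra

# Redheffer star product A ⋆ B of two scattering matrices (section A hit first).
# The (I - S11*S22)-type inverses sum the infinite series of bounces
# between the two sections.
struct SMat{T}
    S11::Matrix{T}
    S12::Matrix{T}
    S21::Matrix{T}
    S22::Matrix{T}
end

function star(A::SMat, B::SMat)
    F = I - B.S11 * A.S22            # multiple reflections, seen from the A side
    G = I - A.S22 * B.S11            # multiple reflections, seen from the B side
    return SMat(A.S11 + A.S12 * (F \ (B.S11 * A.S21)),
                A.S12 * (F \ B.S12),
                B.S21 * (G \ A.S21),
                B.S22 + B.S21 * (G \ (A.S22 * B.S12)))
end
```

Chaining N sections is then `reduce(star, sections)`, and for passive sections the S-matrix entries stay bounded instead of blowing up the way chained T-matrix products do.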