Repeated multiplication of a square matrix

Yes, I’ve seen these formulas for transfer impedance in terms of cofactor determinants, which are about as elegant and utterly impractical as Cramer’s rule (from which they are derived). Computationally, you seem much better off (from both an accuracy and an efficiency perspective) just computing the LU factorization of your matrix (the lu(A) function) and then re-using it to solve a sequence of right-hand sides. (But you still might be better off with an S-matrix formulation, depending on your problem.)
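
For concreteness, the reuse pattern looks like this (the 4×4 matrix here is just a hypothetical stand-in for the real network matrix):

```julia
using LinearAlgebra

# Hypothetical stand-in for the network's system matrix.
A = [ 4.0 -1.0  0.0 -1.0;
     -1.0  4.0 -1.0  0.0;
      0.0 -1.0  4.0 -1.0;
     -1.0  0.0 -1.0  4.0]

F = lu(A)                     # factor once: O(n^3)

# Re-use the factorization for each new right-hand side: O(n^2) per solve.
b1 = [1.0, 0.0, 0.0, 0.0]
b2 = [0.0, 0.0, 1.0, 0.0]
x1 = F \ b1
x2 = F \ b2
```

The point is that `lu(A)` is the expensive step; each additional impedance (each new right-hand side) costs only a triangular back-substitution.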

You could easily run into an intrinsic problem here: the product of many transmission matrices (T matrices) tends to be extremely ill-conditioned, which makes it hard to get accurate answers by any method.
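
(A toy illustration of the effect, with a made-up diagonal “T-matrix” containing one growing and one decaying mode — not a physical layer model:)

```julia
using LinearAlgebra

# One made-up "layer": a growing mode and a decaying mode.
T = [2.0 0.0;
     0.0 0.5]                 # cond(T) = 4

# Each additional layer multiplies the condition number of the product,
# so cond(T^k) = 4^k grows exponentially; by k ≈ 27 it exceeds 1/eps()
# and double precision has no correct digits left.
c10 = cond(T^10)              # 4^10 ≈ 1.0e6
c30 = cond(T^30)              # 4^30 ≈ 1.2e18 — hopeless in Float64
```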

well yeah, exactly. however i don’t actually know that i have that problem yet. i may certainly be doing something wrong.

this is a multiport network so you can’t just cast it as a combination of two-port networks or something. i’d have to use segmentation or some other method to combine s-parameters for each cell.

This matrix is extremely dense if solved in that way, so no sparsity savings, which means you are stuck with O(N^3) in the matrix size N. and N is going up as n^2, because 2D. so this gets slow in a hurry.

i have a SPICE version of this working and 2x the grid density causes a significant increase in run-time. however the SPICE overhead might be part of the problem, so i may just roll my own modified nodal admittance matrix and do that.
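
(A minimal sketch of what “rolling your own” stamping step could look like for a resistor-only network — plain nodal analysis rather than full MNA, with a hypothetical netlist format of `(node_i, node_j, conductance)` tuples; node 0 is ground and is not stamped:)

```julia
using LinearAlgebra

# Stamp each conductance g between nodes i and j into the nodal matrix G:
# +g on both diagonals, -g on the off-diagonals (ground entries skipped).
function nodal_matrix(nnodes, branches)
    G = zeros(nnodes, nnodes)
    for (i, j, g) in branches
        i > 0 && (G[i, i] += g)
        j > 0 && (G[j, j] += g)
        i > 0 && j > 0 && (G[i, j] -= g; G[j, i] -= g)
    end
    return G
end

# Three 1 Ω resistors in a chain from node 1 to ground: 1 -- 2 -- 3 -- 0.
G = nodal_matrix(3, [(1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
Iinj = [1.0, 0.0, 0.0]        # inject 1 A into node 1
V = G \ Iinj                  # node voltages: [3.0, 2.0, 1.0]
```

Solving `G \ Iinj` directly (or via a saved `lu(G)`) avoids all the SPICE parsing/stepping overhead for a purely linear network.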

this method has the promise of being much faster - no jokes about faster and wrong please :wink:

i have n^2 elements, so solving a matrix is going to be (n^2)^3. or i can multiply an n^2 matrix n times, so only n^3 operations (i think…)

quite curious as to how that formulation would be implemented?

LOL. exactly…

That’s not necessary to use S-matrix methods. For the general PDE form of Maxwell’s equations, we apply S-matrix methods and the Redheffer product to infinitely many ports (infinitely many incoming and outgoing wave channels), discretized on the computer to a large number of ports; see the reference I linked.
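
(For the curious, a sketch of the Redheffer star product itself, cascading two scattering matrices stored as four n×n blocks `S11, S12, S21, S22`. This uses one common block convention — sign and ordering conventions vary between references, so check against yours:)

```julia
using LinearAlgebra

# Redheffer star product of two S-matrices A and B (A closer to the input).
function redheffer(A, B)
    S11 = A.S11 + A.S12 * ((I - B.S11 * A.S22) \ (B.S11 * A.S21))
    S12 = A.S12 * ((I - B.S11 * A.S22) \ B.S12)
    S21 = B.S21 * ((I - A.S22 * B.S11) \ A.S21)
    S22 = B.S22 + B.S21 * ((I - A.S22 * B.S11) \ (A.S22 * B.S12))
    return (S11 = S11, S12 = S12, S21 = S21, S22 = S22)
end

# Sanity check: cascading with a "transparent" layer (perfect transmission,
# no reflection) should return the other layer unchanged.
id = Matrix(1.0I, 2, 2)
transparent = (S11 = zeros(2, 2), S12 = id, S21 = id, S22 = zeros(2, 2))
B = (S11 = [0.1 0.0; 0.0 0.2], S12 = [0.9 0.0; 0.0 0.8],
     S21 = [0.9 0.0; 0.0 0.8], S22 = [0.1 0.0; 0.0 0.2])
C = redheffer(transparent, B)
```

Unlike the plain T-matrix product, the inverses here involve `I - (reflection terms)`, which is what keeps the growing exponentials from accumulating.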

Solving an n²×n² system Ax = b takes O(n⁶) time in general. (Ignoring impractical theoretical methods.) Multiplying such a matrix by a vector in general takes O(n⁴) time, but I’m not sure where you’re getting n³ from. My understanding is that in order to get a transfer impedance you need to solve Ax=b. However, if you need to solve multiple right-hand sides (many impedances) for the same network, you can compute the LU factorization of A once in O(n⁶) time and then solve each new right-hand side in O(n⁴) time.

If your matrix is sparse or has some similar special structure, you can do much better.

definitely not sparse. components at every single node.

I wish i could, can’t get to it. I’m really curious…

Solving an n²×n² system Ax = b takes O(n⁶) time in general. (Ignoring impractical theoretical methods.) Multiplying such a matrix by a vector in general takes O(n⁴) time, but I’m not sure where you’re getting n³ from.

DOH! that’s right. ok. still a big improvement. However, if i then have to turn around and solve the matrix for a right-hand-side to actually get my driving point impedance, i incur the O((n^2)^3) = O(n^6) solve penalty to actually get my answer… and the source matrix is still n^2 x n^2 in size.

This may not be such a great idea…

i think where this method benefits is the case of a board which is m x n. you can cascade the 2n x 2n matrices m times, and the resulting matrix has only (2n)^2 = 4n^2 entries instead of (mn)^2.

Only if the determinant is positive and real I suppose.

For negative determinants, there is LinearAlgebra.logabsdet, which also returns the sign. This is what I would use in practice for generic matrices.
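
(For example, with a matrix whose determinant is negative — names as in the stdlib `LinearAlgebra`:)

```julia
using LinearAlgebra

A = [0.0 1.0;
     1.0 0.0]                  # det(A) == -1, so logdet(A) would throw

logd, s = logabsdet(A)         # returns (log|det A|, sign of det A)
# here logd == 0.0 and s == -1.0
```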

That said, complex elements are of course not a problem, as log is defined for \mathbb{C}.


An abbreviated summary can be found in these notes by the author of the Stanford S4 code, which uses S-matrix methods to solve Maxwell’s equations (discretized to hundreds or thousands of “ports”) for multilayered patterned media. (Although the notes refer to “two ports,” each “port” is a vector of complex amplitudes of incoming/outgoing channels, so it is really 2n ports.)


The multilayer transfer matrix can also be understood in terms of orthogonal groups and hyperbolic geometry.

This means that another possible approach would be to use multivectors and geometric algebra instead of plain matrices. I’m currently working on a package for that.

I’m interested in exploring applications where matrices can be replaced with multivectors. For example, Maxwell’s equations can be reduced to a single equation when geometric algebra is used.


well now. things just got interesting.

i managed to make this work. the very obvious, glaring error i was making was not checking the method at reduced resolution. I did that and got an excellent match against a different method i was using to solve the problem.

and then it blows up. as resolution is increased and the problem size grows, the numerical error grows also and eventually it returns garbage. not surprisingly, that transition is fairly sharp.

what is really discouraging is that i cannot reproduce the results of the thesis. at the resolution used in the paper for the example problem, the result is large-valued noise. the author must have used a specialized numerical technique to get around this problem, but there is no mention of it in the thesis.

I was researching this problem and ran across the Cayley-Hamilton theorem. It looks like a distinct possibility. it immediately made me wonder: what is Julia using to calculate A^n?

last thing. i’ve been researching and looking for a way to calculate some metric which might give me an idea of how fast the error might grow, but haven’t really found anything. if someone is aware of such a metric please let me know!
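
(One standard candidate, for what it’s worth: the condition number of the accumulated product. Rule of thumb: about log10(cond) decimal digits are lost, so once cond nears 1/eps() ≈ 1e16 a Float64 answer is pure noise. A sketch with a made-up per-cell matrix:)

```julia
using LinearAlgebra

# Watch the condition number of the running product and report when it
# crosses a threshold -- a rough proxy for when accuracy is gone.
function steps_until_illconditioned(T; limit = 1e12, maxsteps = 1_000)
    P = Matrix{Float64}(I, size(T, 1), size(T, 2))
    for k in 1:maxsteps
        P = T * P
        cond(P) > limit && return k
    end
    return maxsteps
end

k = steps_until_illconditioned([1.1 0.3; 0.0 0.9])   # hypothetical cell matrix
```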

Thanks !

The best course may be contacting the author then — people are usually happy to share code.

Also, I think reading

@article{moler2003nineteen,
  title={Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later},
  author={Moler, Cleve and Van Loan, Charles},
  journal={SIAM review},
  volume={45},
  number={1},
  pages={3--49},
  year={2003},
  publisher={SIAM}
}

may be helpful if you want to understand the numerical issues.

This is easy to check (e.g. @edit A^10), and the answer is power by squaring.
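
That is, `A^n` for an integer exponent lowers to `Base.power_by_squaring`, which needs only O(log n) matrix multiplies. A quick check that it matches the naive left-to-right product (up to rounding):

```julia
using LinearAlgebra

A = [0.5 0.2;
     0.1 0.7]

pow   = A^10                      # power by squaring: ~4 multiplies for n = 10
naive = reduce(*, fill(A, 10))    # 9 multiplies, left to right

# The two agree to rounding.  Note that fewer multiplies does not cure
# ill-conditioning -- it only changes the rounding pattern slightly.
```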


the paper is very old, and i’m having trouble getting contact information for the author.

ty!