Julia vs NumPy broadcasting

It works on macOS, too. The only major operating system currently missing is Windows.

The roundoff errors should grow as O(\sqrt{n}) on average. In double precision, with n of a few thousand, it’s probably not worth worrying about?

(It’s the same error growth as computing a sum by repeated addition, and most people don’t worry about that either, though in a library function like sum we try to do better.)

That being said, if it's a problem, you could always recompute e^{kA} explicitly via exp(k*A) every N steps for some value of N, and there are other schemes to trade off computation for accuracy.
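
For example, something along these lines (a minimal sketch; the function name, the example A, and the choice of N are just placeholders):

```julia
using LinearAlgebra

# Sketch of the periodic-recompute idea: step with a precomputed e^{A}, but every
# N steps replace the running product with a freshly computed e^{kA} to reset the
# accumulated roundoff.
function propagate(A, nsteps; N = 100)
    E = exp(A)      # one-step propagator e^{A}
    P = one(A)      # running product ≈ e^{kA}
    for k in 1:nsteps
        P *= E                          # cheap path: one matrix multiply per step
        k % N == 0 && (P = exp(k * A))  # periodic exact recompute
    end
    return P
end

propagate([-0.5 1.0; 0.0 -1.0], 1000)
```

The amortized extra cost is one matrix exponential per N multiplications, so you can dial N to trade accuracy for speed.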

One option in that case is @mikmoore’s augmented-matrix approach, which handles arbitrary singular (and defective) A but is more expensive.
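
I don't have @mikmoore's exact code in front of me, but for reference, the standard augmented-matrix (Van Loan) identity that this kind of approach is typically built on computes the integral (∫₀¹ e^{As} ds)·B without ever inverting A, so it remains well defined for singular and defective A. A sketch:

```julia
using LinearAlgebra

# exp of the block matrix [A B; 0 0] packs two things at once:
#   [ e^A   (∫₀¹ e^{As} ds)·B ]
#   [  0             I        ]
# The upper-right block equals A \ ((e^A - I)*B) when A is invertible,
# but is still well defined for singular or defective A.
A = [0.0 1.0; 0.0 0.0]               # singular and defective example
B = reshape([1.0, 2.0], 2, 1)

n, m = size(A, 1), size(B, 2)
M = [A B; zeros(m, n + m)]           # augmented (n+m)×(n+m) matrix
E = exp(M)
eA   = E[1:n, 1:n]                   # e^{A}
intB = E[1:n, n+1:end]               # (∫₀¹ e^{As} ds) * B
```

The price is one matrix exponential of a larger (n+m)×(n+m) matrix, which is why it's more expensive than a plain A \ B.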

However, if you are willing to regularize the eigenvalues (i.e. change the solution when A is nearly singular, since I guess this is only an intermediate step in the optimization), maybe you could apply a regularization directly to the matrix (or your parameterization thereof) or to your system of equations. For example, you could replace A \ B in my code above with [A; α*one(A)] \ [B; 0*B] for some small value of α, e.g. α = 1e-8. This is a Tikhonov regularization, and it is essentially equivalent to A \ B when \alpha \ll \Vert A \Vert.
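
To make the stacked-system version concrete (the random A and B here are just for illustration; α is a knob you would tune):

```julia
using LinearAlgebra

A = randn(5, 5); A[:, 1] .= 1e-12 .* A[:, 2]   # nearly singular A, for illustration
B = randn(5, 3)

α = 1e-8
X = [A; α * one(A)] \ [B; 0 * B]   # tall least-squares solve:
                                   # minimizes ‖A*X - B‖² + α²‖X‖² (Tikhonov)
```

The backslash on the tall stacked matrix does a least-squares solve, which is exactly the Tikhonov-regularized solution with parameter α².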

(For optimization, even derivative-free optimization, you really want to keep things smooth as a function of the parameters. For example, a Tikhonov regularization is smooth as a function of A, whereas arbitrarily changing the eigenvalues when they get too small may not be, depending on how you do it.)


If A is a stable matrix (all(real(eigvals(A)) .< 0)), any numerical error or difference in initial conditions etc. should be forgotten exponentially, at a rate given by the real parts of the eigenvalues of A?

Only in an absolute-error sense: the relative error still accumulates if you compute e^{kA} by repeated multiplication.
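
A toy illustration of that distinction (the 2×2 A and the step count are arbitrary; the decay rates are chosen slow enough that nothing underflows):

```julia
using LinearAlgebra

# e^{nA} via n repeated multiplications by e^{A} (a left fold), vs. one direct exp
repeated_prod(E, n) = foldl((P, _) -> P * E, 1:n; init = one(E))

A = [-1e-3 2e-3; 0.0 -3e-3]        # stable: eigenvalues -0.001 and -0.003
n = 10_000
P     = repeated_prod(exp(A), n)   # accumulates roundoff step by step
exact = exp(n * A)

abs_err = opnorm(P - exact)                   # tiny: it decays along with e^{nA} itself
rel_err = opnorm(P - exact) / opnorm(exact)   # ≫ eps: the accumulated ~√n·eps roundoff
```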
