I have the following code:

```
M = s*I-SS.A
G = SS.C*inv(M)*SS.B + SS.D
```

It calculates the transfer function of a linear system in state space form.

Can I avoid inverting the matrix M? I heard that this can result in bad accuracy and should be avoided.

I am not sure to what extent it improves the accuracy, but generally it is preferred to compute an appropriate matrix factorization of M and then use one of the division operators to apply the inverse. See the docs of `factorize` for some more info on which factorization to use. It looks like your matrix might be positive definite, in which case you could use `cholesky!` to factorize:

```
M = s*I - SS.A
Mfact = cholesky!(M) # or lu!, or another suitable factorization
G = SS.C*(Mfact\SS.B) + SS.D
```
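As a concrete sanity check, here is a self-contained sketch with a made-up two-state system standing in for `SS` (the real `SS` object isn’t shown in the question). The toy `M` is complex and non-Hermitian, so it uses `lu` in place of `cholesky!`; the factorization-based solve matches the explicit inverse:

```julia
using LinearAlgebra

# Toy state-space matrices standing in for SS.A, SS.B, SS.C, SS.D
A = [0.0 1.0; -2.0 -3.0]
B = reshape([0.0, 1.0], 2, 1)
C = [1.0 0.0]
D = zeros(1, 1)
s = 1.0 + 2.0im

M = s*I - A
Mfact = lu(M)               # M is complex and non-Hermitian here, so lu rather than cholesky!
G = C * (Mfact \ B) + D     # apply the inverse via a solve
Gref = C * inv(M) * B + D   # explicit inverse, for comparison only
```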

It’s less a question of accuracy (the loss of accuracy is usually minor), and more a question of efficiency. Inverting a matrix to compute `A^-1 * b` is simply a lot of pointless extra effort compared to directly solving the system via `A \ b` (which forms a factorization of `A`, but does not form the inverse).

For example, with `A = randn(1000,1000)` and `b = randn(1000)`, inverting the matrix is slower by a factor of nearly 3:

```
julia> @btime inv($A) * $b;
14.184 ms (6 allocations: 8.13 MiB)
julia> @btime $A \ $b;
5.108 ms (4 allocations: 7.64 MiB)
```
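One payoff of `\` forming a factorization is worth noting (a sketch in the same spirit, not part of the benchmark above): if you must solve against the same `A` for many right-hand sides, you can factor once and reuse the factors, paying the O(n^3) cost only once:

```julia
using LinearAlgebra

A = randn(100, 100)
F = lu(A)                        # factor once: O(n^3)
bs = [randn(100) for _ in 1:10]
xs = [F \ b for b in bs]         # each solve reuses F: only O(n^2) per right-hand side
```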

Whereas if you are computing `C * A^-1 * B` where `C` and `B` are also 1000×1000 matrices, the slowdown is proportionately less:

```
julia> @btime $C * inv($A) * $B;
31.570 ms (9 allocations: 23.38 MiB)
julia> @btime $C * ($A \ $B);
22.695 ms (7 allocations: 22.90 MiB)
```

(It is much worse when `A` is sparse, because inverting the matrix throws away all the sparsity, whereas factorizing it does not.)
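To illustrate the sparse case with a hypothetical tridiagonal system (not from the thread): the LU factors of a banded sparse matrix stay sparse, whereas `inv(A)` would materialize a fully dense n×n matrix:

```julia
using LinearAlgebra, SparseArrays

n = 1000
# Sparse tridiagonal matrix: only 3n - 2 nonzeros out of n^2 entries
A = spdiagm(-1 => fill(-1.0, n-1), 0 => fill(2.0, n), 1 => fill(-1.0, n-1))
b = randn(n)

F = lu(A)   # sparse factorization: the factors remain sparse
x = F \ b   # solve without ever forming the dense inverse
# inv(A) would materialize all n^2 = 1_000_000 entries
```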


`ControlSystemsBase.freqresp` uses the Hessenberg factorization to efficiently compute C(i\omega I - A)^{-1}B + D for multiple different \omega. Is there a reason you can’t just call `freqresp(SS, w)`? Or, if you have a complex `s` not on the imaginary axis, use `evalfr(SS, s)`.
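For reference, the shifted-solve trick that `freqresp` exploits is available directly from the `LinearAlgebra` stdlib: `hessenberg` factors `A` once, and solves with `F + μ*I` are then cheap for each new shift. A single-input sketch with made-up matrices (not `ControlSystemsBase` internals):

```julia
using LinearAlgebra

A = randn(4, 4); B = randn(4); C = randn(2, 4); D = zeros(2)

F = hessenberg(A)   # O(n^3) once; each shifted solve (F + μ*I) \ b is only O(n^2)

ws = [0.1, 1.0, 10.0]
# (iωI - A)^{-1}B == -((A - iωI)^{-1}B), so shift F by -iω and negate
G = [C * (-((F + (-im*w)*I) \ B)) + D for w in ws]
```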
