Square-Root Kalman Filter: Calculate only the upper triangular factor of the QR decomposition

I’m implementing the Square-Root Kalman Filter, similar to this Python implementation:
Python - Square-Root Kalman-Filter
The implementation calculates a QR decomposition, but it only needs the upper triangular matrix R (see here); the matrix Q is not needed. I searched the internet to see if there is an efficient way to calculate only R and not Q, and found this MATLAB page: Matlab qr
It states: To calculate the upper-triangular factor R and permutation matrix P, but avoid computing the orthogonal matrix Q (which is often the most computationally expensive part of a call to qr), you can specify B as an empty matrix:

emptyB = zeros(size(S,1),0);
[~,R,P] = qr(S,emptyB);

I got curious and tried to find something similar in Julia. The closest I could get was the following:

using LinearAlgebra

A = randn(10, 5)
B = zeros(10, 1)
LAPACK.gels!('N', copy(A), B)

This results in the same R as qr(A), and indeed it is faster:

@btime LAPACK.gels!($'N', W, $B) setup=(W=copy(A)) evals=1 # 2.238 μs (7 allocations: 2.23 KiB)
@btime qr!(W) setup=(W=copy(A)) evals=1 # 3.772 μs (3 allocations: 608 bytes)
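For reference, R can be read off the matrix that gels! overwrites in place; a sketch (the absolute-value comparison hedges against the per-row sign ambiguity of QR factors):

```julia
using LinearAlgebra

A = randn(10, 5)
B = zeros(10, 1)
F = LAPACK.gels!('N', copy(A), B)[1]   # first return value is the overwritten A

# The compact QR factorization lives in F: R in the upper triangle,
# Householder reflectors (the implicit Q) below it.
R = triu(F[1:5, 1:5])
@assert abs.(R) ≈ abs.(qr(A).R)        # identical up to row signs
```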

I have two questions:

  1. Why are there so many allocations even though LAPACK.gels! is an in-place function?
  2. Is there a better way to calculate the matrix R? I tried a zero-sized B as suggested by the MATLAB page, LAPACK.gels!('N', copy(A), zeros(10,0)), but this does not give the correct result.

I’d like to make the package publicly available as soon as it is finished.

Are you sure you actually need the square root form? I find that by simply symmetrizing the covariance matrix in the right places, you get almost the same numerical performance with a textbook implementation.
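A minimal sketch of what such symmetrization could look like in a textbook update step (kf_update and its plain covariance form are illustrative, not code from this thread):

```julia
using LinearAlgebra

# Textbook Kalman measurement update, with the covariance re-symmetrized
# where roundoff can break symmetry.
function kf_update(x, P, H, R, y)
    S = Symmetric(H * P * H' + R)   # innovation covariance
    K = (P * H') / S                # Kalman gain
    x = x + K * (y - H * x)         # state update
    P = P - K * S * K'              # covariance update
    P = (P + P') / 2                # enforce symmetry against roundoff
    return x, P
end

x, P = kf_update(zeros(2), Matrix(1.0I, 2, 2), [1.0 0.0], fill(0.1, 1, 1), [0.5])
```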

I discussed the need for the square-root KF with some industry people some 10 years ago (defence industry). Their point of view was that the square-root KF is useful if one does the calculations in Float32, but is not needed with Float64.


According to the paper The square-root unscented Kalman filter for state and parameter-estimation, the square-root form of the Unscented Kalman Filter has the following advantages:

  • better numerical properties (compared to the standard UKF)
  • it guarantees positive semi-definiteness of the underlying state covariance

In my post above I asked about the standard Kalman filter, because I’d like to be able to switch from the standard Kalman filter to the UKF or vice versa, depending on whether the process model and the measurement model are linear or non-linear.
Nonetheless, the SR-UKF also needs only the matrix R from the QR decomposition.
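For context, the SR-UKF step that needs only R can be sketched roughly like this (the dimensions, the single positive weight, and the dropped centre sigma point are simplifications for illustration; the real filter handles the centre point with a rank-one Cholesky update):

```julia
using LinearAlgebra

n = 3                                # state dimension (illustrative)
X = randn(n, 2n + 1)                 # propagated sigma points (illustrative values)
xbar = sum(X, dims=2) / (2n + 1)     # predicted mean (unweighted, for simplicity)
w = 1 / (2n)                         # one positive weight, for simplicity
sqrtQ = 0.1 * Matrix(1.0I, n, n)     # square root of the process noise covariance

# Stack the weighted sigma-point deviations with sqrt(Q); only the R factor
# of the QR decomposition of the transpose is needed -- Q is never used.
M = [sqrt(w) * (X[:, 2:end] .- xbar)  sqrtQ]
S = qr(M').R                         # upper triangular covariance square root
```

Because M' = QR implies M * M' = R' * R, the triangular factor S plays the role of the covariance square root, which is why Q itself is never needed.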


A classic book on factorization methods and the Kalman filter is Gerald J. Bierman’s Factorization Methods for Discrete Sequential Estimation (Academic Press, 1977). This was, of course, before the UKF, so there are probably newer books, but perhaps Bierman’s book contains some motivation. He was at the Jet Propulsion Laboratory in Pasadena at the time.

The UD algorithms from Bierman are the most robust; however, a Cholesky formulation of the unscented filter is the easiest to implement. Has anyone noticed the bug in the unscented filter equations? As documented, it will estimate observable parameters.

Have you found answers to your questions?

Yes, I implemented a non-allocating gels! function here: KalmanFilters.jl/src/gels.jl at 1353334a50d9efc6b67c060557a0c8742a2da5b0 · JuliaGNSS/KalmanFilters.jl · GitHub

However, I haven’t found a way to use a zero-sized B, but maybe that’s fine and it wouldn’t have made a difference anyway.
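For what it’s worth, when there is no right-hand side at all, LAPACK.geqrf! computes the factorization without ever forming Q explicitly (Q stays encoded as Householder reflectors below the diagonal); a sketch, not necessarily faster than gels! but perhaps worth benchmarking:

```julia
using LinearAlgebra

A = randn(10, 5)
F, tau = LAPACK.geqrf!(copy(A))  # F holds R in its upper triangle; the
                                 # Householder reflectors (implicit Q) sit below it
R = triu(F[1:5, 1:5])
@assert abs.(R) ≈ abs.(qr(A).R)  # same factor up to row signs
```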

Maybe it should be part of FastLapackInterface.jl.

@MichelJuillard, What do you think?

Absolutely. Thanks for suggesting it. See Add xGELS function · Issue #42 · DynareJulia/FastLapackInterface.jl · GitHub

