QR decomposition matrix type

I just want to compute a QR decomposition and pass the Q matrix to the log function:

```julia
using LinearAlgebra
D = rand(Float64, 3, 3)
Q, R = qr(D)
W = log(Q)  # Error
```

Why is Q not a simple regular matrix type? How can I convert it to a regular matrix?

You can convert Q to a Matrix{Float64} using

```julia
Q = convert(Matrix{Float64}, Q)
```

As to why Q is not of type Matrix{Float64}, I don’t know; that’s something I don’t understand yet either.

Note that in your decomposition, Q will most likely have negative entries. If you want to apply log to every entry of Q, you can use dot notation (broadcasting):

```julia
W = log.(Q)
```

However, this will likely throw an error since, as I said before, Q will likely have negative entries.
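To illustrate the point about negative entries: a sketch of both the failing elementwise call and one possible workaround, promoting to complex numbers first (the complex.(...) step is my addition, not from the thread):

```julia
using LinearAlgebra

D = rand(3, 3)
Q, R = qr(D)
Qm = Matrix(Q)           # dense form of Q

# log.(Qm) throws a DomainError for any negative entry.
# Promoting the entries to complex numbers makes elementwise log well defined:
W = log.(complex.(Qm))
```

Whether you actually want the elementwise or the matrix logarithm depends on your application; for an orthogonal Q, the matrix logarithm (discussed below in the thread) is usually the meaningful one.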


First of all, on a general note: if you wrap your code in a triple-backtick block (``` <code goes here> ```), it will be displayed much more nicely.

To your actual questions:

The reason why many things in Julia are stored in special types (not just the Q of a QR decomposition) is generally twofold:

  • Data structuring: we can store the object much more efficiently while still making it behave as if it were, say, an AbstractMatrix. This is known as duck typing: as long as it quacks and walks like a duck, from an outsider’s point of view it is a duck. Example: Diagonal(D). Clearly, it would be a waste of memory to store all the zero elements of a diagonal matrix. Hence, internally, Julia only stores the diagonal entries as a vector. However, the Diagonal matrix will still behave like a regular matrix in many cases.

  • Method dispatch: special function implementations. Julia always selects a specific method of a function based on the types of all arguments (multiple dispatch). We can utilize this to implement special, fast algorithms for certain matrix types. Example: say I want to calculate the determinant of a Diagonal matrix. Naively, I could treat it like any other Matrix and use something like the rule of Sarrus (assuming 3x3 for the sake of the argument). Obviously, it is much more efficient to just multiply the diagonal elements, and this is precisely what Julia does.
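To make both points concrete, here is a small sketch (the concrete numbers are illustrative, not from the thread):

```julia
using LinearAlgebra

D = Diagonal([2.0, 3.0, 4.0])

# Data structuring: only the diagonal is stored, as a plain vector.
D.diag                 # 3-element Vector{Float64}

# ...yet D behaves like a regular matrix (duck typing):
D * [1.0, 1.0, 1.0]    # same result as the dense matrix-vector product

# Method dispatch: det has a specialized method for Diagonal that
# simply multiplies the diagonal entries.
det(D)                 # 2.0 * 3.0 * 4.0 = 24.0
```

You can inspect which method gets called with `@which det(D)` in the REPL.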

Turning to your specific case, we can check what that QRCompactWY object really is by asking for help in the REPL, ? QRCompactWY, or consulting the docs. Personally, I don’t know anything about this format (frankly, I also don’t care), but there I’d find all the implementation details. In many cases, a QRCompactWY will just behave like a matrix. For example, I can multiply it with a vector, which will call a non-generic implementation of * in qr.jl (check @which Q * rand(3)).
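A quick sketch of this duck-typed behavior (assuming a recent Julia with the LinearAlgebra stdlib):

```julia
using LinearAlgebra

D = rand(3, 3)
Q, R = qr(D)

typeof(Q)          # a compact QR representation, not a Matrix{Float64}

# Q still acts like an orthogonal matrix: multiplication works
# via a specialized method, without ever forming the dense Q.
v = rand(3)
Q * v              # a length-3 vector
Q' * (Q * v) ≈ v   # Q is orthogonal, so this round-trip recovers v
```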

If you feel that log(Q::QRCompactWY) should work as well, then perhaps a special implementation or a fallback method is missing in stdlib/LinearAlgebra/qr.jl. You could consider opening an issue, or even better a pull request, in that case.

Note that you can always access the fallback by changing Q’s type (and hence its data structure) first, by calling log(Matrix(Q)).
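As a sketch of that workaround: since Q is orthogonal, its matrix logarithm is real whenever Q has no eigenvalue at -1; the column-sign flip below, which forces det = +1, is my own addition to keep the example robust, not something from the thread:

```julia
using LinearAlgebra

D = rand(3, 3)
Q, R = qr(D)

Qm = Matrix(Q)                  # convert to a plain dense matrix
det(Qm) < 0 && (Qm[:, 1] .*= -1)  # flip one column's sign so det(Qm) = +1
                                   # (Qm stays orthogonal)

W = exp(log(Qm))                # matrix log, then matrix exp ...
W ≈ Qm                          # ... should recover Qm up to rounding
```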


Note that, in addition to the explanations above, log should work for all square AbstractMatrix values, in the worst case via the fallback

```julia
log(A::AbstractMatrix) = log(Matrix(A))
```

(this assumes you want the matrix, and not the elementwise, logarithm).

Consider opening an issue, or making a small pull request to fix it.


Thanks crstnbr, your explanation covered most of my frustrations with getting started with Julia. Doesn’t duck typing cause a performance hit, then?

Why would you think that? For most cases, it’s quite the opposite.


The standard algorithm for QR decomposition uses Householder reflections. For an n \times k matrix A, k \leq n, the main algorithm outputs a k \times k upper triangular matrix R and a set of k Householder vectors of dimension n in O(nk^2) FLOPS. Converting the k Householder vectors to the n \times n matrix Q is an auxiliary algorithm requiring O(n^2k) FLOPS.

For example, if k=1, these are O(n) and O(n^2) FLOPS respectively; thus, actually computing the Q matrix is by far the most expensive step in this case. As a result, one avoids computing the actual matrix Q when possible. For example, it is possible to solve least-squares problems without actually computing Q, using only O(nk^2) FLOPS and not the full O(n^2k) FLOPS.
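Julia exposes exactly this: the backslash method for a QR factorization object applies the stored Householder reflectors to the right-hand side instead of forming the dense Q. A minimal sketch (dimensions are illustrative):

```julia
using LinearAlgebra

n, k = 100, 3
A = rand(n, k)
b = rand(n)

# Least-squares solve using the factorization object directly:
# the full n×n Q is never materialized.
F = qr(A)
x = F \ b

# Same solution as the generic least-squares solve:
x ≈ A \ b
```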