In the LinearAlgebra package, some provision is made for in-place updates to triangular matrices via the functions lmul!, rmul!, ldiv! and rdiv!.
To be clear, this post concerns only those cases in which both arguments to these functions are triangular. When the to-be-updated destination argument is a plain rectangular matrix and the other argument is triangular, lmul!, rmul!, ldiv! and rdiv! are well behaved as far as I know. This post is only about the case where the to-be-updated destination is itself triangular.
The issues that I'd like to point out include:
- Aliasing, i.e. modification of the parent of the triangular destination outside of the designated triangle.
- Missing methods for some combinations of triangular types.
- Some slow methods implemented in bidiag.jl.
- Some avoidable cases where expressions of the form applicable(ldiv!,L,R) && ldiv!(L,R) crash.
After discussing these problems, I propose a solution in the conclusion.
Aliasing
Here is an example where aliasing occurs:
julia> VERSION
v"1.1.1"

julia> using LinearAlgebra

julia> A = [3.0 3 3; 2 3 3; 2 2 3]
3×3 Array{Float64,2}:
 3.0  3.0  3.0
 2.0  3.0  3.0
 2.0  2.0  3.0

julia> L = UnitLowerTriangular(A)
3×3 UnitLowerTriangular{Float64,Array{Float64,2}}:
 1.0   ⋅    ⋅
 2.0  1.0   ⋅
 2.0  2.0  1.0

julia> U = UpperTriangular(A)
3×3 UpperTriangular{Float64,Array{Float64,2}}:
 3.0  3.0  3.0
  ⋅   3.0  3.0
  ⋅    ⋅   3.0
Now L and U share the same parent, A, with the diagonal belonging to U.
This example may be somewhat contrived, but it does sometimes occur in highly optimized code, for example in lu(A).factors, where both triangular factors are returned into the same rectangular matrix by BLAS. (Note, however, that lu(A).L and lu(A).U both return independent copies of the respective triangles.)
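As a small sketch of that situation (it builds the aliased views by hand from F.factors; the higher-level F.L and F.U properties avoid this by copying):

F = lu(A)                            # P*A = L*U, with both factors packed into F.factors
Lf = UnitLowerTriangular(F.factors)  # unit lower triangle of the packed matrix
Uf = UpperTriangular(F.factors)      # upper triangle, including the diagonal
parent(Lf) === parent(Uf)            # true: both views share one parent, just like L and U above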
We can update L and U independently, each without clobbering the other; for example:
- U .+= 1 updates U (and therefore the upper part of A), without affecting L.
- L[3,1] = 0 updates L (and the lower part of A), without affecting U.
- But L .+= 1 crashes, because it would violate the unit diagonal constraint on L. Disallowing this update also avoids clobbering the diagonal that belongs to U.
Now let's show how an in-place lmul! update of U can end up clobbering L.
julia> L2 = LowerTriangular(randn(3,3))

julia> lmul!(L2', U)  # U ← L2' * U, in place
3×3 UpperTriangular{Float64,Array{Float64,2}}:
 3.5508    4.63504   3.54384
  ⋅       -1.23252   1.75547
  ⋅         ⋅       -2.53018
This in-place operation is possible because L2' * U is still upper triangular. However, the fast implementation of this update, as coded in triangular.jl, ends up clobbering L, because it calls triu!(parent(U)), which zeros the lower triangle, before invoking BLAS.trmm! to do the multiplication:
julia> L
3×3 UnitLowerTriangular{Float64,Array{Float64,2}}:
 1.0   ⋅    ⋅
 0.0  1.0   ⋅
 0.0  0.0  1.0
In summary of this part, I don't think this behaviour is necessarily bad. But users should be aware that in certain cases, lmul!, rmul!, ldiv! and rdiv! could modify the whole parent, not just the intended triangle.
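When both triangles of the parent are still needed after such an update, one defensive pattern is to break the aliasing with a copy before updating; a sketch along the lines of the example above:

Usafe = UpperTriangular(copy(parent(U)))  # fresh storage, no longer aliased with L
lmul!(L2', Usafe)                         # triu! and BLAS.trmm! only touch the copy; L is untouched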
Missing methods
There are many triangular matrix types. Even if we confine ourselves just to triangular Float64 matrices, there are 18 combinations:
{adjoint,transpose,plain} x {upper,lower} x {unit diagonal,full diagonal}
Since the functions we are discussing take 2 arguments, there would seem to be 18² candidates for methods to implement for each of the 4 updating functions. However, if we want to limit ourselves to cases where parent clobbering can be avoided, then the following restrictions apply (sketched in code after the list):
- Multiplications and divisions of triangular arguments give triangular results only if the uplo of the arguments agree.
- The result is unit diagonal only if both arguments have unit diagonals.
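Written out as code, the two rules are easy to encode. The helper names below (islower, hasunitdiag, triangular_result, unit_diagonal_result) are hypothetical, not part of LinearAlgebra; note that in Julia, adjoint/transpose of a triangular wrapper re-wraps as the opposite triangular type, so these checks also cover the adjoint/transpose combinations.

using LinearAlgebra

islower(A)     = A isa Union{LowerTriangular, UnitLowerTriangular}
hasunitdiag(A) = A isa Union{UnitLowerTriangular, UnitUpperTriangular}

# The result stays within one triangle only when the uplos agree,
# and it has a unit diagonal only when both arguments do.
triangular_result(A, B)    = islower(A) == islower(B)
unit_diagonal_result(A, B) = hasunitdiag(A) && hasunitdiag(B)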
Conversely, if we decide to allow parent modification, then all 18² combinations should be supported. Note that this free-for-all would include cases like lmul!(L2,U), which would (subject to size restrictions) do the update parent(U) ← L2 * U and return the rectangular result in the parent matrix.
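Spelled out out of place, that semantics would amount to something like the following (this is not what lmul! currently does for this combination; it is just the meaning the free-for-all would assign to it):

tmp = L2 * U              # full, generally non-triangular product
copyto!(parent(U), tmp)   # overwrite the whole parent of U, both triangles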
As it stands now, although parent modification seems to be allowed, only a fraction of the combinations are currently implemented. For example, neither lmul! nor rmul! is implemented for (UpperTriangular, UpperTriangular), but rdiv! and ldiv! do have such methods.
There are more details in issue 32212.
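One way to probe which combinations actually have methods on a given Julia version is applicable (or hasmethod). A small sketch; the expected outcomes on v1.1.1 are taken from the observations above, not from a fresh check:

U1 = UpperTriangular(rand(3,3))
U2 = UpperTriangular(rand(3,3))

applicable(lmul!, U1, U2)   # expected false: no method for this combination
applicable(rmul!, U1, U2)   # expected false: no method for this combination
applicable(ldiv!, U1, U2)   # expected true
applicable(rdiv!, U1, U2)   # expected true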
Slow methods
Methods that implement the functions of interest for triangular arguments are currently to be found in two different source files:
- In triangular.jl the methods are fast, but modify the whole parent if the destination matrix is triangular.
- In bidiag.jl the methods are at least an order of magnitude slower for large matrices. For an example, see issue 32212. Although I have not checked thoroughly, the bidiag.jl methods seem to avoid parent modification.
For example, for (UpperTriangular, UpperTriangular), ldiv! is implemented in bidiag.jl, while rdiv! is implemented in triangular.jl.
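A rough way to see the gap for this pair is sketched below. The absolute numbers depend on the machine and BLAS, the first run of each call includes compilation, and BenchmarkTools would give cleaner measurements than @time:

using LinearAlgebra
n = 2000
A = UpperTriangular(rand(n, n) + I)   # reasonably well-scaled triangular system
B = UpperTriangular(rand(n, n))

@time rdiv!(copy(B), A);   # triangular.jl path: BLAS-backed and fast, but clobbers the copy's parent
@time ldiv!(A, copy(B));   # bidiag.jl path: generic loops, reported much slower for large n (see issue 32212)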
Unexpected failure
Try this:
julia> L = LowerTriangular(rand(3,3)); U = UpperTriangular(randn(3,3))

julia> U = applicable(ldiv!,L,U) ? ldiv!(L,U) : L \ U
ERROR: ArgumentError: cannot set index in the lower triangular part of an UpperTriangular matrix to a nonzero value
This crashes in bidiag.jl. The problem is that the result of L\U is not triangular and cannot be stored in the triangular destination U.
In the library, this can be fixed in one of two ways:
- If Julia decides to consistently disallow parent aliasing in triangular updates, then ideally methods should not claim type combinations where aliasing is unavoidable.
- If Julia decides to consistently allow parent modification for triangular updates, then methods for all such updates should be implemented.
Conclusion
In summary of this post, I would argue that the most important question is whether parent modification outside of the designated triangle should or should not be allowed for updates to triangular destinations. This should then be implemented consistently everywhere.
One could also decide to have one's cake and eat it:
- One could restrict the existing functions, lmul!, rmul!, ldiv! and rdiv!, to disallow parent modification outside of the designated triangle. In fact, it would be by far the easiest to just not let these methods do any updates at all to triangular destinations. This would then also exclude the problematic (slow and crashing) implementations in bidiag.jl.
- Then one could additionally implement the functions, say lmul!!, rmul!!, ldiv!! and rdiv!!, which do allow such modifications. This would make it much easier to use fast, BLAS-based updates (trmm! and trsm!) for perhaps all combinations of BLAS-friendly triangular arguments. (A rough sketch of what such a method could look like follows this list.)
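To make the second point concrete, here is a rough, hypothetical sketch of such a method for the plain (UpperTriangular, UpperTriangular) case, built directly on BLAS.trsm!. The name ldiv!! and everything about this method are a proposal, not existing API:

using LinearAlgebra

# Hypothetical ldiv!!: B ← A \ B for Float64 upper triangular arguments,
# explicitly allowed to clobber the lower triangle of B's parent.
function ldiv!!(A::UpperTriangular{Float64,Matrix{Float64}},
                B::UpperTriangular{Float64,Matrix{Float64}})
    Bp = triu!(parent(B))   # zero whatever lives below B's diagonal (the clobbering step)
    # Solve A * X = Bp in place; BLAS reads only the upper triangle of parent(A).
    BLAS.trsm!('L', 'U', 'N', 'N', 1.0, parent(A), Bp)
    return UpperTriangular(Bp)
end

Covering the remaining BLAS-friendly combinations would then mostly be a matter of mapping the wrapper types onto the side, uplo, trans and diag arguments of trmm!/trsm!.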
I will try my hand at implementing the proposed !! functions and perhaps issue a PR if it works out.