ERROR: MethodError: no method matching lu!

Could you please help me solve the error in my MWE (A, b, p, q are snipped from real code)? The error says the problem is with typeof(A[p,q][1:6,1:6]), however its type is SparseMatrixCSC{Float64, Int64}.

using SparseArrays, LinearAlgebra
A = sparse([0.0137346 0 0 0 0 0 0 0 0 -0.00123457 0 0 0 0 0;
            0 0.0137346 0 0 0 0 0 0 0 0 -0.00123457 0 0 0 0;
            0 0 0.0137346 0 0 0 0 0 0 0 0 -0.00123457 0 0 0;
            0 0  0 1.0 0 0 -1.0 0 0 0 0 0 1.0 0 0;
            0 0 0 0 1.0 0 0 -1.0 0 0 0 0 0 1.0 0;
            0 0 0 0 0 1.0 0 0 -1.0 0 0 0 0 0 1.0;
            0 0 0 -1.0 0 0 1.00302 -0.000705766 -0.000705766 0 0 0 0 0 0;
            0 0 0 0 -1.0 0 -0.000705766 1.00302 -0.000705766 0 0 0 0 0 0;
            0 0 0 0 0 -1.0 -0.000705766 -0.000705766 1.00302 0 0 0 0 0 0;
            -0.00123457 0 0 0 0 0 0 0 0 0.00425693 -0.000705766 -0.000705766 0 0 0;
            0 -0.00123457 0 0 0 0 0 0 0 -0.000705766 0.00425693 -0.000705766 0 0 0;
            0 0 -0.00123457 0 0 0 0 0 0 -0.000705766 -0.000705766 0.00425693 0 0 0;
            0 0 0 1.0 0 0 0 0 0 0 0 0 0 0 0;
            0 0 0 0 1.0 0 0 0 0 0 0 0 0 0 0;
            0 0 0 0 0 1.0 0 0 0 0 0 0 0 0 0]);
b = [-1.7632749965554524,4.5258858371825, -2.7626108406269907, 0.0, 0.0, 0.0, -84.69238104647695, 334.2439006136896, -249.55151956721267, 14.008262358676006, -19.104993786251043, 5.096731427574866,
    16329.20635908069, -8031.318664614193, -8297.88769446649];
# p,q are permutation vectors
p =[1,2,3,11,12,10,4,5,6,7,8,9,13,14,15];
q =[1,2,3,11,12,10,13,14,15,7,8,9,4,5,6];

x = A[p,q][1:6,1:6] \ @view( b[p[1:6]] ); # nor does x = @view( A[p,q][1:6,1:6] ) \ @view( b[p[1:6]] ) work
ERROR: MethodError: no method matching lu!(::SparseMatrixCSC{Float64, Int64}, ::RowMaximum; check=true)
Closest candidates are:
  lu!(::StridedMatrix{T}, ::RowMaximum; check) where T<:Union{Float32, Float64, ComplexF32, ComplexF64} at julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:80
  lu!(::Union{Hermitian{T, S}, Symmetric{T, S}} where {T, S}, ::Union{NoPivot, RowMaximum}; check) at Julia-1.8.0-beta1\share\julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:89
  lu!(::StridedMatrix{T} where T, ::Union{NoPivot, RowMaximum}; check) at Julia-1.8.0-beta1\share\julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:135

Any help here please?

Your MWE is missing the W (= working):
b is not defined.


Sorry, it seems it was deleted while I was reorganizing the code. I have re-added it.

I am not an expert on using LinearAlgebra together with sparse matrices, but the problem is the combination of the type SparseMatrixCSC with @view. The following two versions work and produce the same result:

julia> x = A[p,q][1:6,1:6] \ b[p[1:6]]  #removing the @view
6-element Vector{Float64}:
   128.2117919309576
   -16.895499498950457
  -111.31629243200855
 -3853.923847656095
   999.3164345402253
  2854.6074131158075

julia> x = Matrix(A[p,q][1:6,1:6]) \ @view( b[p[1:6]] ) #removing the sparseness
6-element Vector{Float64}:
   128.21179193095756
   -16.895499498950315
  -111.31629243200855
 -3853.9238476560936
   999.3164345402255
  2854.6074131158075

But you probably did this on purpose, to reduce allocations and get better performance. And this is where I am out of my depth, and we need someone else to give proper advice.

Maybe it’s something suitable for an issue at LinearAlgebra.jl (julia/LinearAlgebra.jl at master · JuliaLang/julia · GitHub). A short search didn’t bring up anything obvious.


Thank you for your reply!
Exactly, my aim is to reduce allocations (they seem to be the bottleneck in the code) and get better performance by using:
x = @view( A[p,q][1:6,1:6] ) \ @view( b[p[1:6]] ).
Thanks also for your suggestion.

If this is a MWE then fine. But if you’re actually trying to do stuff with 6x6 matrices you should definitely use something other than sparse matrices (you can copy! the 6x6 submatrix of a sparse matrix into a dense matrix and use that, for example). The memory use of either is tiny but a sparse array will be slower up to some modest size for most nontrivial structures (maybe 30x30? I haven’t done benchmarks) and, depending on the number and location of nonzeros, can be slower even for moderately sized arrays (dimensions in the hundreds, perhaps). Dense matrix operations are just really really good.
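
For illustration, here is a minimal sketch of that copy-into-dense idea, reusing the names from the MWE above (the buffer M is my own, hypothetical addition; you can reuse it across solves to avoid reallocating):

using SparseArrays, LinearAlgebra       # as in the MWE above
M = Matrix{Float64}(undef, 6, 6)        # preallocated dense buffer (hypothetical name)
copyto!(M, @view A[p[1:6], q[1:6]])     # copy the sparse submatrix into dense storage
x = lu!(M) \ b[p[1:6]]                  # dense LU applies now; lu! factorizes M in place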


Thanks for your reply!
Yes, it is just an MWE. The submatrices in my real code are large, and if this @view approach can work, it will perhaps improve performance.

I’m getting into the performance weeds here, but this might be worth a read:

I don’t think this does what you intend or hope. It only uses view for the outer [] application. You can see this in

julia> @macroexpand @view( x[y[1:6]] )
:(true && (view)(x, y[1:6]))

So it appears to me you’re doing the worst of both worlds: allocating for the trivial y[1:6] and then using that vector to form a random-access view into x[...]. The only reason I would do this is if you intended to mutate the view and have those changes reflected in x.
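
To make that concrete, here is a tiny sketch (z and idx are hypothetical names, not from the MWE) showing that mutating such a view does write through to the parent array:

z = collect(1.0:10.0); idx = collect(10:-1:1)
v = @view z[idx[1:3]]   # idx[1:3] allocates [10, 9, 8], then v views into z at those positions
v .= 0.0                # writes through the view
z[8:10]                 # -> [0.0, 0.0, 0.0]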

Instead, you can use @views to apply to all [] uses in a block. See this comparison:

julia> @macroexpand @views( x[y[1:6]] )
:((Base.maybeview)(x, (Base.maybeview)(y, 1:6)))

julia> foo_view(x,y) = @view( x[y[1:6]] )
foo_view (generic function with 1 method)

julia> foo_views(x,y) = @views( x[y[1:6]] )
foo_views (generic function with 1 method)

julia> x = rand(1:10,10); y = rand(1:10,10);

julia> @btime foo_view($x,$y)
  40.565 ns (1 allocation: 112 bytes)
6-element view(::Vector{Int64}, [4, 6, 4, 3, 6, 1]) with eltype Int64:
 9
 2
 9
 6
 2
 3

julia> @btime foo_views($x,$y)
  17.535 ns (0 allocations: 0 bytes)
6-element view(::Vector{Int64}, [4, 6, 4, 3, 6, 1]) with eltype Int64:
 9
 2
 9
 6
 2
 3

The nested views constructed by @views here are allocation-free, but they require double indirection (through two layers of views) to reach the array elements, which can get somewhat expensive.

Another thing you might consider (you’ll have to benchmark to see whether this is better than the double-view) is b[@view p[1:6]]. This will use a view for the trivial operation of getting a contiguous range of p, then construct a new vector from the b[...] access. This gives you a raw Vector as output. This will use extra memory over the double-view but may make up the difference in faster linear algebra.
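
A minimal sketch of that mixed pattern with the names from the MWE (rhs is my own, hypothetical name):

rhs = b[@view p[1:6]]   # allocation-free view for the cheap range, one allocation for the copy
typeof(rhs)             # -> Vector{Float64}, the raw vector that solvers handle fastest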


Thank you very much for your detailed explanations. You drew my attention to the double-view concept and to b[@view p[1:6]] (which I will check). In the meanwhile, any idea how to make x = @view( A[p,q][1:6,1:6] ) \ @view( b[p[1:6]] ) work with both views?

I didn’t totally understand your question but I’ll elaborate and hopefully answer it anyway. Also, see the docs for @view and @views.

You can always use view(a,b...) in place of @view a[b...], @view is just a macro that makes this transformation for you (and also allows the use of begin/end for index ranges like a[begin:end], which otherwise requires firstindex(a,dim)/lastindex(a,dim)). @views does the same thing but applies to every indexing expression within its scope. In other words, the following lines are all equivalent:

@views a[b[c]]
@view a[@view b[c]]
view(a,view(b,c))

Likewise, the following are equivalent:

@views a[b][c]
@view(@view(a[b])[c])
view(view(a,b),c)

And if you wanted to make every indexing operation within an expression a view, just use @views:

x = @views A[p,q][1:6,1:6] \ b[p[1:6]]

Thank you very much for your response!
However, using view with \ gives an error, for example as below. I need to find a way to use view (for performance) while avoiding the error.

x = @views A[p,q][1:6,1:6] \ b[p[1:6]]
ERROR: MethodError: no method matching lu!(::SparseMatrixCSC{Float64, Int64}, ::RowMaximum; check=true)
Closest candidates are:
  lu!(::StridedMatrix{T}, ::RowMaximum; check) where T<:Union{Float32, Float64, ComplexF32, ComplexF64} at julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:80
  lu!(::Union{Hermitian{T, S}, Symmetric{T, S}} where {T, S}, ::Union{NoPivot, RowMaximum}; check) at Julia-1.8.0-beta1\share\julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:89
  lu!(::StridedMatrix{T} where T, ::Union{NoPivot, RowMaximum}; check) at Julia-1.8.0-beta1\share\julia\stdlib\v1.8\LinearAlgebra\src\lu.jl:135

It seems there are currently issues with lu on views of sparse matrices, which in turn breaks \. You can sidestep the current issue by using a different factorization, for example qr:

x = @views qr(A[p,q][1:6,1:6]) \ b[p[1:6]]

However, even if it were working, I'll point out that, with what you're doing, the view won't save you anything here, because \ or qr will copy the data anyway to form the decomposition. To paraphrase, it's implemented something like

\(A,b) = ldiv!(lu!(copy(A)), copy(b))

You will get an equivalent and equally efficient version of this solve if you materialize these views. Also, note that A[p,q][x,y] == A[p[x],q[y]].
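
As a quick sanity check of that identity with the MWE data (the second form avoids materializing the full permuted matrix A[p,q]):

A[p,q][1:6,1:6] == A[p[1:6], q[1:6]]   # -> true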

I’ll recommend the following implementation if you care about minimizing copies:

ldiv!(lu(A[@view(p[1:6]),@view(q[1:6])]), b[@view(p[1:6])])

With yet more effort, one could probably copy less. But it won’t impact the runtime significantly except for very small matrices.
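
In case it helps, here is one hedged sketch of that "copy less" direction, assuming the same-sized block is solved repeatedly (bbuf and F are hypothetical names, not from the thread):

bbuf = Vector{Float64}(undef, 6)           # reusable right-hand-side buffer
F = lu(A[@view(p[1:6]), @view(q[1:6])])    # still one sparse copy per factorization
bbuf .= @view b[@view p[1:6]]              # fill the buffer without a temporary index vector
x = ldiv!(F, bbuf)                         # in-place solve, as in the line above; x aliases bbuf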

Don’t get hung up trying to take lots of views (especially non-contiguous views) when doing linear algebra, especially large linear algebra. Many linear algebraic operations (like A\b) have complexity O(N^3), while materializing a subarray is only O(N^2). Meanwhile, random-access views are very detrimental to the performance of the carefully tuned linear algebra routines that perform these expensive operations.


I’ve posted an issue on GitHub for this.


Using my TraceFuns.jl, I tried to debug this a bit:

julia> @trace x = A[p,q][1:6,1:6] \ b[p[1:6]] Base.:\ lu lu!
0: \(<clipped args>) -- Method \(A::SparseArrays.AbstractSparseMatrixCSC, B::AbstractVecOrMat) in SparseArrays at /usr/share/julia/stdlib/v1.8/SparseArrays/src/linalg.jl:1548
   1: \(<clipped args>) -- Method \(A::Union{Hermitian{ComplexF64, SparseMatrixCSC{ComplexF64, Int64}}, Hermitian{Float64, SparseMatrixCSC{Float64, Int64}}, Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}}, B::Union{Adjoint{<:Any, <:StridedVecOrMat}, Transpose{<:Any, <:StridedVecOrMat}, StridedVecOrMat}) in SuiteSparse.CHOLMOD at /usr/share/julia/stdlib/v1.8/SuiteSparse/src/cholmod.jl:1573
       2: \(SuiteSparse.CHOLMOD.Factor{Float64}
      <more output clipped>

julia> @trace x = A[p,q][1:6,1:6] \ @views(b[p[1:6]]) Base.:\ lu lu!
0: \(<clipped args>) -- Method \(A::SparseArrays.AbstractSparseMatrixCSC, B::AbstractVecOrMat) in SparseArrays at /usr/share/julia/stdlib/v1.8/SparseArrays/src/linalg.jl:1548
   1: \(<clipped args>) -- Method \(A::AbstractMatrix, B::AbstractVecOrMat) in LinearAlgebra at /usr/share/julia/stdlib/v1.8/LinearAlgebra/src/generic.jl:1096
       2: lu(<clipped args>) -- Method lu(A::AbstractMatrix{T}) where T in LinearAlgebra at /usr/share/julia/stdlib/v1.8/LinearAlgebra/src/lu.jl:278
       <more output clipped>

Thus, both calls hit the same method at linalg.jl:1548:

function \(A::AbstractSparseMatrixCSC, B::AbstractVecOrMat)
    require_one_based_indexing(A, B)
    m, n = size(A)
    if m == n
        if istril(A)
            if istriu(A)
                return \(Diagonal(Vector(diag(A))), B)
            else
                return \(LowerTriangular(A), B)
            end
        elseif istriu(A)
            return \(UpperTriangular(A), B)
        end
        if ishermitian(A)
            return \(Hermitian(A), B)
        end
        return \(lu(A), B)
    else
        return \(qr(A), B)
    end
end

and take the Hermitian path there (indeed, with the MWE data, ishermitian(A[p,q][1:6,1:6]) returns true, since p[1:6] and q[1:6] select the same symmetric block). Then, the first call dispatches to

julia> @which Hermitian(A[p,q][1:6,1:6]) \ b[p[1:6]]
\(A::Union{Hermitian{ComplexF64, SparseMatrixCSC{ComplexF64, Int64}}, Hermitian{Float64, SparseMatrixCSC{Float64, Int64}}, Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}}, B::Union{Adjoint{<:Any, <:StridedVecOrMat}, Transpose{<:Any, <:StridedVecOrMat}, StridedVecOrMat}) in SuiteSparse.CHOLMOD at /usr/share/julia/stdlib/v1.8/SuiteSparse/src/cholmod.jl:1573

which avoids lu entirely. In contrast, the second call (with @view) falls back to the generic implementation and later fails on lu as described in the issue:

julia> @which Hermitian(A[p,q][1:6,1:6]) \ @view(b[p[1:6]])
\(A::AbstractMatrix, B::AbstractVecOrMat) in LinearAlgebra at /usr/share/julia/stdlib/v1.8/LinearAlgebra/src/generic.jl:1096

The reason that the method from cholmod.jl no longer matches seems to be:

julia> b[p[1:6]] isa StridedVecOrMat
true

julia> @views(b[p[1:6]]) isa StridedVecOrMat
false

Is this on purpose, i.e., are views not strided?


A strided array is one where moving one index in one dimension involves moving a constant amount in memory, where that constant depends only on the dimension you’re moving in.

For example, zeros(5,4,3) has a stride of 1 element (8 bytes) in dimension 1, 5 in dimension 2, and 20 in dimension 3. This is a result of Julia’s column-major (more generally, leading-major) array layout. In fact, there is a function strides which returns this information for any strided array (or an error for non-strided arrays).

julia> strides(zeros(5,4,3))
(1, 5, 20)

julia> strides(zeros(5,4)') # transposing reverses stride order
(5, 1)

BLAS (the set of linear algebra routines used by most languages and systems, including Julia) is designed to work on strided vectors and matrices.

Note that taking a view with evenly-spaced samples in each dimension still results in a strided array:

julia> strides( @view zeros(5,4,3)[1:2:5,2:3,:] )
(2, 5, 20) # strides[1] is now 2 because we skip every other element in dimension 1

julia> strides( @view zeros(5,4,3)[1:2:5,3:-1:2,:] )
(2, -5, 20) # reversing a dimension negates its stride

For this reason, “strided” views of arrays have virtually no performance cost. However, not all views are strided:

julia> strides( @view zeros(5,4,3)[[1,5,3],2:3,:] )
ERROR: ArgumentError: strides is invalid for SubArrays with indices of type Vector{Int64}

In this example, we have indexed the array using [1,5,3] indices in one dimension (it would also fail for [1,3,5] because, even though this is equivalent to the “strided” 1:2:5, Julia cannot know that from typeof([1,3,5])). There is no fixed set of strides that can be used for this array.

So whether a view is strided depends on what the view indices look like. If the base array is strided and all view indices look like a:c, a:b:c, or : (and maybe something I’m forgetting), then the view will be strided. Views of views (of views of views…) that follow this pattern all the way down will also be strided. However, access via generic containers like Vector{Int} is, in general, not strided, and Julia never considers it strided.
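
For instance, here is a small sketch (with a hypothetical dense array B) of nested range views composing their strides, while a Vector{Int} index breaks stridedness:

B = zeros(8, 8)                   # strides (1, 8)
inner = @view B[2:2:8, :]         # step 2 in dimension 1: strides(inner) == (2, 8)
outer = @view inner[1:2, 3:5]     # still strided: strides(outer) == (2, 8)
# strides(@view B[[1,3,5], :])    # would throw ArgumentError: Vector{Int} indices are never strided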

This is why I encourage the use of views for “regular” subarrays (with stridable indices) but am less likely to recommend them for “random access” subarrays. Random access breaks stridedness and results in a lookup at each step. Many low-level routines (like BLAS) do not support this and so you’ll end up in slow, generic fallbacks (or in some cases, like here, hit an error because we didn’t deal with this corner case properly).

Sparse arrays are not strided; they have a more intricate memory layout. There are special libraries for dealing with them, too, but those do not play well with Julia’s views.
