LinearAlgebra.BLAS does not accept non-dense Julia matrix types?

LinearAlgebra.BLAS.gbmv!('N', 100, 1, 1, 1.0, A_POC, ones(100), 0.0, ones(100))

throws an error:

julia> LinearAlgebra.BLAS.gbmv!('N', 100, 1, 1, 1.0, (A_POC), ones(100), 0.0, ones(100))
ERROR: LoadError: MethodError: no method matching strides(::Tridiagonal{Float64, Vector{Float64}})
Closest candidates are:
  strides(::LoopVectorization.LowDimArray) at C:\Users\Jianghui\.julia\packages\LoopVectorization\O8WW6\src\broadcast.jl:10
  strides(::SubArray) at subarray.jl:350
  strides(::ArrayInterface.AbstractArray2) at C:\Users\Jianghui\.julia\packages\ArrayInterface\VEFPY\src\ArrayInterface.jl:810
  ...
Stacktrace:
 [1] stride(A::Tridiagonal{Float64, Vector{Float64}}, k::Int64)
   @ Base .\abstractarray.jl:501
 [2] stride1(x::Tridiagonal{Float64, Vector{Float64}})
   @ LinearAlgebra ~\AppData\Local\Programs\Julia-1.6.0\share\julia\stdlib\v1.6\LinearAlgebra\src\LinearAlgebra.jl:197
 [3] _chkstride1
   @ ~\AppData\Local\Programs\Julia-1.6.0\share\julia\stdlib\v1.6\LinearAlgebra\src\LinearAlgebra.jl:203 [inlined]
 [4] chkstride1
   @ ~\AppData\Local\Programs\Julia-1.6.0\share\julia\stdlib\v1.6\LinearAlgebra\src\LinearAlgebra.jl:201 [inlined]
 [5] gbmv!(trans::Char, m::Int64, kl::Int64, ku::Int64, alpha::Float64, A::Tridiagonal{Float64, Vector{Float64}}, x::Vector{Float64}, beta::Float64, y::Vector{Float64})
   @ LinearAlgebra.BLAS ~\AppData\Local\Programs\Julia-1.6.0\share\julia\stdlib\v1.6\LinearAlgebra\src\blas.jl:783
 [6] top-level scope
   @ ~\Documents\Manuscripts\SBB\v12\main_fvcf.jl:29
in expression starting at c:\Users\Jianghui\Documents\Manuscripts\SBB\v12\main_fvcf.jl:29

when A_POC is a Tridiagonal matrix:

julia> A_POC
100×100 Tridiagonal{Float64, Vector{Float64}}:
 -4736.81   2509.53       ⋅    …    ⋅           ⋅           ⋅
  4286.01  -6265.57   1979.56       ⋅           ⋅           ⋅
      ⋅     3614.68  -5169.42       ⋅           ⋅           ⋅
      ⋅         ⋅     3056.74       ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅    …    ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
     ⋮                         ⋱
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅         ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅    …    ⋅           ⋅           ⋅
      ⋅         ⋅         ⋅        0.0          ⋅           ⋅
      ⋅         ⋅         ⋅       -0.0712289   0.0          ⋅
      ⋅         ⋅         ⋅        0.061917   -0.061917    0.
      ⋅         ⋅         ⋅         ⋅          0.0537804  -0.0537804

but it works after converting A_POC to a dense Matrix:

LinearAlgebra.BLAS.gbmv!('N', 100, 1, 1, 1.0, Array(A_POC), ones(100), 0.0, ones(100))

Does this mean the BLAS functions only accept dense matrices?

If you directly call the BLAS/LAPACK wrappers, the answer is likely yes: they require a strided (dense) memory layout, which Tridiagonal does not have.
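The MethodError above comes from exactly that requirement: every BLAS wrapper first runs a stride check, and `strides` is only defined for array types backed by a contiguous column-major buffer. A minimal sketch of the distinction (a small made-up matrix, not the A_POC from the thread):

```julia
using LinearAlgebra

# Tridiagonal stores just three vectors (sub-, main, and superdiagonal);
# there is no dense column-major buffer behind it.
T = Tridiagonal([1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0], [8.0, 9.0, 10.0])

# `strides(T)` throws a MethodError, which is what the `chkstride1`
# check inside the BLAS wrappers surfaces:
caught = try
    strides(T)
    false
catch e
    e isa MethodError
end
println(caught)                  # true

# A dense Matrix is strided, so the check passes for Array(A_POC):
println(strides(Matrix(T)))      # (1, 4) for this 4×4 example
```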

Is there a good reason why you're doing that, though? There is a five-argument mul!, which is probably what you want to use instead. It should automatically dispatch to the best available method for the given input types (including Tridiagonal):

mul!(C, A, B, α, β) -> C


  Combined inplace matrix-matrix or matrix-vector multiply-add A B α + C β. The result is stored
  in C by overwriting it. Note that C must not be aliased with either A or B.

Demonstration:

julia> A = rand(100,100); b = rand(100); c = rand(100); α = rand(); β = rand();

julia> T = Tridiagonal(A);

julia> @which mul!(c, T, b, α, β) # dispatches to `mul!` in bidiag.jl
mul!(C::AbstractVector{T} where T, A::Union{Bidiagonal, SymTridiagonal, Tridiagonal}, B::AbstractVector{T} where T, alpha::Number, beta::Number) in LinearAlgebra at /Applications/Julia-1.6.app/Contents/Resources/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/bidiag.jl:398

julia> @which mul!(c, A, b, α, β) # dispatches to `mul!` in generic matmul.jl
mul!(y::StridedVector{T}, A::StridedVecOrMat{T}, x::StridedVector{T}, alpha::Number, beta::Number) where T<:Union{Float32, Float64, ComplexF32, ComplexF64} in LinearAlgebra at /Applications/Julia-1.6.app/Contents/Resources/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/matmul.jl:66

julia> @btime mul!($c, $A, $b, $α, $β);
  2.322 μs (0 allocations: 0 bytes)

julia> @btime mul!($c, $T, $b, $α, $β);
  155.034 ns (1 allocation: 32 bytes)

I have to note, though, that for the dense matrix an explicit call into LinearAlgebra.BLAS.gbmv! seems faster (I don't know why that is):

julia> @btime LinearAlgebra.BLAS.gbmv!('N', 100, 1, 1, $α, $A, $b, $β, $c);
  1.044 μs (0 allocations: 0 bytes)

Inspecting the mul! method for the generic (dense) case shows that it dispatches not to gbmv but to gemv:

@inline mul!(y::StridedVector{T}, A::StridedVecOrMat{T}, x::StridedVector{T},
             alpha::Number, beta::Number) where {T<:BlasFloat} =
    gemv!(y, 'N', A, x, alpha, beta)

But some LAPACK expert can probably comment on whether this is good (for some reason) or a "bug".

UPDATE: Ah, of course, we generally cannot assume that kl == ku == 1 for an arbitrary strided matrix, so the gbmv method has more information to work with than gemv. Importantly, the native Julia implementation for the Tridiagonal case is the most efficient one for this matrix structure. So, as long as one indicates the structure through the type, one "doesn't have to care" and automatically gets the best speed (as it should be :grinning_face_with_smiling_eyes: ).
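As a quick sanity check that the specialized Tridiagonal kernel computes the same result as the dense path, here is a small comparison (the variable names are made up for illustration):

```julia
using LinearAlgebra

n = 6
T = Tridiagonal(rand(n - 1), rand(n), rand(n - 1))
x = rand(n)
α, β = 1.5, 0.5

y_tri   = rand(n)
y_dense = copy(y_tri)

# Specialized O(n) method (the bidiag.jl dispatch shown above):
mul!(y_tri, T, x, α, β)
# Generic gemv path on the densified matrix:
mul!(y_dense, Matrix(T), x, α, β)

println(y_tri ≈ y_dense)  # true
```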


I was just curious whether the BLAS version is faster than mul!. I suppose gbmv must exist because it can do banded matrix-vector multiplication more efficiently than gemv?
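For what it's worth, gbmv exists for matrices kept in LAPACK band storage: a (kl + ku + 1) × n array whose rows hold the diagonals, so a tridiagonal matvec touches O(3n) entries instead of O(n²). A sketch of packing a Tridiagonal into that layout and calling gbmv! on it (the packing code is my own, not from the thread):

```julia
using LinearAlgebra

n = 5
T = Tridiagonal(rand(n - 1), rand(n), rand(n - 1))

# LAPACK band storage for kl = ku = 1: AB[ku + 1 + i - j, j] = A[i, j],
# i.e. row 1 = superdiagonal, row 2 = main diagonal, row 3 = subdiagonal.
AB = zeros(3, n)
AB[1, 2:end]     = T.du
AB[2, :]         = T.d
AB[3, 1:end - 1] = T.dl

x = rand(n)
y = zeros(n)
LinearAlgebra.BLAS.gbmv!('N', n, 1, 1, 1.0, AB, x, 0.0, y)

println(y ≈ T * x)  # true
```

If I read the BLAS docs right, gbmv! interprets its matrix argument as this packed layout, so passing the full 100×100 dense matrix to it runs without error but reads the wrong entries; the band-packed form above is what the routine actually expects.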