LU decomposition in Julia

I am trying to rewrite the lu_nopivot function from this Stack Overflow answer, "Perform LU decomposition without pivoting in MATLAB", into Julia, using only one loop.
Here is the Julia code I wrote:

using LinearAlgebra
function lu_nopivot(A)
    n = size(A, 1)
    L = Matrix{eltype(A)}(I, n, n)
    U = copy(A)
    for k = 1:n
        L[k+1:n,k] = U[k+1:n,k] / U[k,k]              # multipliers below the pivot
        U[k+1:n,:] = U[k+1:n,:] - L[k+1:n,k]*U[k,:]   # rank-1 update (this is the line that errors)
    end
    return L, U
end

But calling the function
L, U = lu_nopivot(A) gives an error MethodError: no method matching *(::Vector{Float64}, ::Vector{Float64}) on L[k+1:n,k]*U[k,:] Tried the matlab version and it worked fine. What could be the reason that make this failed?

You want .* for broadcasted multiplication or dot for a dot product.

That said, you could just call lu(A, NoPivot())
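
For example (a quick sketch; lu returns a factorization object whose L and U properties hold the factors):

julia> using LinearAlgebra

julia> A = rand(3, 3);

julia> F = lu(A, NoPivot());

julia> F.L * F.U ≈ A   # no pivoting, so L*U reconstructs A directly
true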


I think I am multiplying two matrices here; isn’t * the way to go?

The things you are multiplying aren’t matrices (the error says you can’t use * on two Vector{Float64}s). You might be running an old version of the code; what happens if you restart your REPL?

You want a dot product here.

julia> using LinearAlgebra

julia> x = [1.0; 2.0; 3.0]; y = [0.0; 1.0; 2.0];

julia> x*y
ERROR: MethodError: no method matching *(::Vector{Float64}, ::Vector{Float64})
Closest candidates are:
  *(::Any, ::Any, ::Any, ::Any...) at ~/packages/julia-1.7.0/share/julia/base/operators.jl:655
  *(::StridedMatrix{T}, ::StridedVector{S}) where {T<:Union{Float32, Float64, ComplexF32, ComplexF64}, S<:Real} at ~/packages/julia-1.7.0/share/julia/stdlib/v1.7/LinearAlgebra/src/matmul.jl:44
  *(::StridedVecOrMat, ::Adjoint{<:Any, <:LinearAlgebra.LQPackedQ}) at ~/packages/julia-1.7.0/share/julia/stdlib/v1.7/LinearAlgebra/src/lq.jl:266
  ...
Stacktrace:
 [1] top-level scope
   @ REPL[15]:1

julia> dot(x,y)
8.0

x*y works in MATLAB because MATLAB doesn’t distinguish scalars, vectors, and matrices (e.g. a MATLAB scalar is really a 1×1 matrix). Julia is more particular about types in order to achieve efficiency and generality.
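
Once the shapes are explicit, both products work fine (continuing with the x and y above):

julia> x' * y          # row (adjoint) times column: the dot product, a scalar
8.0

julia> x * y'          # column times row: the outer product, a 3×3 matrix
3×3 Matrix{Float64}:
 0.0  1.0  2.0
 0.0  2.0  4.0
 0.0  3.0  6.0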

EDIT: I see Oscar_Smith pointed to dot above.

I think the confusion here is that, even though this is selecting the k-th row of U, the result of a 1d slice in Julia is always treated as a “column vector” (i.e. a 1d slice is a 1d array, no matter what dimension is sliced).
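
A quick illustration of the rule:

julia> B = [1 2 3; 4 5 6];

julia> B[1, :]        # a scalar index drops the dimension: a plain Vector
3-element Vector{Int64}:
 1
 2
 3

julia> B[1:1, :]      # a range index keeps it: a 1×3 Matrix (a true row)
1×3 Matrix{Int64}:
 1  2  3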

So, I think what you want is:

U[k+1:n,:] = U[k+1:n,:] - L[k+1:n,k]*transpose(U[k,:])

to make sure that U[k,:] is treated as a row vector for your rank-1 update here.
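
For clarity, here is the whole function with that one-line change (a sketch of the corrected version, assuming a square A with nonzero pivots):

using LinearAlgebra

function lu_nopivot(A)
    n = size(A, 1)
    L = Matrix{eltype(A)}(I, n, n)   # start L as the identity
    U = copy(A)
    for k = 1:n
        # multipliers below the pivot
        L[k+1:n, k] = U[k+1:n, k] / U[k, k]
        # rank-1 update: transpose makes U[k,:] a row vector
        U[k+1:n, :] = U[k+1:n, :] - L[k+1:n, k] * transpose(U[k, :])
    end
    return L, U
end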

See also Unintuitive Julia result: selecting a row from a matrix and Problem: extracting a row from an array, returns a column and Trouble Understanding Slicing and RFC: Drop dimensions indexed by scalars by mbauman · Pull Request #13612 · JuliaLang/julia · GitHub (which first implemented this) and Taking vector transposes seriously · Issue #4774 · JuliaLang/julia · GitHub (where it was proposed as “APL-style indexing”).


I would again like to point out that lu(A, NoPivot()) will be much, much faster.
For a 1000×1000 matrix, lu_nopivot takes 2.4 seconds, while lu(A, NoPivot()) takes 185 ms (about 13× faster).

Also, lu allocates about 7 MB, versus the 11 GB of RAM that lu_nopivot allocates (over 1000× less).
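
If you want to reproduce the comparison yourself, something like this (a sketch using BenchmarkTools; exact numbers will vary by machine):

using BenchmarkTools, LinearAlgebra

A = rand(1000, 1000);
@btime lu_nopivot($A);       # seconds and gigabytes of allocations
@btime lu($A, NoPivot());    # milliseconds and a few megabytes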


With this fix, your code works:

julia> A = rand(3,3)
3×3 Matrix{Float64}:
 0.244342   0.6094    0.281167
 0.0340127  0.130047  0.941087
 0.7212     0.69946   0.656365

julia> L, U = lu_nopivot(A)
([1.0 0.0 0.0; 0.1392011090670886 1.0 0.0; 2.9515958101582 -24.309759573913595 1.0], [0.24434249240692318 0.609400324922777 0.2811674033872621; 0.0 0.04521819259713014 0.9019484309838302; 0.0 0.0 21.75262222865455])

julia> L
3×3 Matrix{Float64}:
 1.0         0.0     0.0
 0.139201    1.0     0.0
 2.9516    -24.3098  1.0

julia> U
3×3 Matrix{Float64}:
 0.244342  0.6094      0.281167
 0.0       0.0452182   0.901948
 0.0       0.0        21.7526

julia> F = lu(A, NoPivot())
LU{Float64, Matrix{Float64}}
L factor:
3×3 Matrix{Float64}:
 1.0         0.0     0.0
 0.139201    1.0     0.0
 2.9516    -24.3098  1.0
U factor:
3×3 Matrix{Float64}:
 0.244342  0.6094      0.281167
 0.0       0.0452182   0.901948
 0.0       0.0        21.7526
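
And the hand-rolled factors match the library’s:

julia> L ≈ F.L && U ≈ F.U
true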

Definitely, but anyone doing LU without pivoting (which is numerically unstable) doesn’t care about performance anyway; it’s only used for learning exercises.
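
A classic 2×2 illustration of that instability (a sketch): with a tiny pivot, the unpivoted factorization loses an entry of A to rounding, while the default pivoted lu does not.

julia> using LinearAlgebra

julia> A = [1e-20 1.0; 1.0 1.0];

julia> F = lu(A, NoPivot());

julia> F.L * F.U             # the (2,2) entry of A has been lost to rounding
2×2 Matrix{Float64}:
 1.0e-20  1.0
 1.0      0.0

julia> G = lu(A);            # default partial pivoting

julia> G.L * G.U ≈ A[G.p, :]
true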