I’ve been searching for a while now and I cannot find any information on an efficient way to solve sparse linear systems using Julia.

It seems like no in-place lufact! method is implemented for sparse matrices, as the call fails with a MethodError:
no method matching lufact!(::SparseMatrixCSC{Float64,Int64}, ::Type{Val{true}})

So I found that I can use IterativeSolvers to solve the systems, but the computation speed is really slow (compared to MATLAB's mldivide).
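For what it's worth, sparse `A \ b` in Julia already performs a direct factorization through SuiteSparse, so an iterative solver isn't needed for a problem like this. A minimal sketch, assuming the SparseArrays/LinearAlgebra stdlibs of Julia ≥ 0.7 (on 0.6 the factorization call is `lufact(A)` instead of `lu(A)`):

```julia
using SparseArrays, LinearAlgebra

# Tridiagonal test system: sparse, symmetric, well-conditioned.
n = 1_000
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)

# Direct sparse solve; dispatches to SuiteSparse under the hood.
x = A \ b

# Factor once and reuse the factorization for many right-hand sides.
F = lu(A)      # lufact(A) on Julia 0.6
x2 = F \ b
```

Reusing `F` for repeated solves is usually where the big wins are, since the factorization dominates the cost.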

So I downloaded the non-pro version of Julia and it is now working!
Sadly, the A\b operator is slightly less performant in Julia than in MATLAB (in my case), but I guess this is why MATLAB is an expensive product.

No, it’s because you’re using OpenBLAS instead of MKL. MATLAB didn’t even write the core of \, it’s just calling the same code that Julia can call if you build it with MKL.

Are you sure about that?
Have a look at the documentation for MATLAB's mldivide() (specifically the Algorithms section).
It runs many tests before choosing the right decomposition to solve the system.
Are you saying all those tests and decisions are built into BLAS (be it OpenBLAS or Intel MKL)?

If it doesn’t (namely, if the BLAS library doesn’t have that built in), does Julia?

The OP’s example is a sparse matrix, so IIUC both Julia and MATLAB use SuiteSparse. BLAS is only used for part of the work here; sometimes other tasks dominate. SuiteSparse includes some special interfaces for MATLAB which could, e.g., select preferred algorithms or parameters to get extra performance.
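You can see the SuiteSparse routes directly by inspecting the factorization objects: sparse LU lands in UMFPACK and sparse Cholesky in CHOLMOD. A quick sketch on Julia ≥ 0.7 (the calls are `lufact`/`cholfact` on 0.6):

```julia
using SparseArrays, LinearAlgebra

# A sparse, symmetric, strongly diagonally dominant (hence SPD) matrix.
B = sprand(200, 200, 0.02)
A = B + B' + 200I
b = rand(200)

Flu   = lu(A)        # sparse LU via UMFPACK (SuiteSparse)
Fchol = cholesky(A)  # sparse Cholesky via CHOLMOD (SuiteSparse)
```

Both factorization types carry the SuiteSparse module names, which makes it easy to confirm BLAS is not doing this part of the work.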

First of all, the matrix type checks are obviously not "the core of \". The core of a linear solver is not a loop that checks whether the matrix is diagonal; it’s the implementation of things like multithreaded pivoted (sparse) QR, i.e. the algorithms which actually do the decomposition and where >99% of the time in the call is spent.

As for how/that this check is done, have you checked the code? It’s all online and easy to find:

julia> @which rand(4,4) \ rand(4)
\(A::AbstractArray{T,2} where T, B::Union{AbstractArray{T,1}, AbstractArray{T,2}} where T) in Base.LinAlg at linalg\generic.jl:820
# this sent me to the v0.6.2 version of course since I was using v0.6.2
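The same @which trick works for the sparse case, and (at least on the versions I've checked; exact signatures vary by Julia version) it shows the sparse call dispatching to a different method than the dense one:

```julia
using SparseArrays

# Ask which `\` method handles each kind of left-hand side.
m_sparse = @which sprand(4, 4, 0.5) \ rand(4)
m_dense  = @which rand(4, 4) \ rand(4)

println(m_sparse)  # sparse path, defined alongside the SuiteSparse wrappers
println(m_dense)   # dense path, in the generic linear algebra code
```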

You’re talking about just a few simple lines of code. Here’s the whole code for doing the pre-checks in the dense case, including its documentation and subroutines:

and the sparse case:

Therefore the matrix type check, the part of the code that is being ascribed to MATLAB as some unique core piece of the algorithm, is actually cheap, tiny, and trivial; so obviously that’s not "the core of \" (in fact, I would say that reconstructing this meta-algorithm would be a good homework problem to give to an undergraduate… definitely adding it to my workshop exercises…). From the code you can see exactly the similarities and differences to MATLAB’s published chart, and you could put in a PR if you wanted to add more branches. More importantly, it shows Julia is doing exactly the same kind of meta-algorithm it claims to be doing in its help and documentation, in order to "select preferred algorithms or parameters, to get extra performance."
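To make the "homework problem" concrete, here is a toy version of that meta-algorithm: cheap O(n²) structure probes first, then dispatch to the factorization that exploits the structure. This is a sketch, not Base's actual implementation, which handles many more cases:

```julia
using LinearAlgebra

# Toy `\` meta-algorithm: check cheap structural properties,
# then hand off to the matching decomposition.
function toy_backslash(A::AbstractMatrix, b::AbstractVector)
    if istriu(A)
        return UpperTriangular(A) \ b        # back substitution only
    elseif istril(A)
        return LowerTriangular(A) \ b        # forward substitution only
    elseif ishermitian(A)
        return factorize(Hermitian(A)) \ b   # Cholesky or Bunch-Kaufman
    else
        return lu(A) \ b                     # general square: pivoted LU
    end
end

A = rand(50, 50) + 50I        # well-conditioned, unsymmetric
b = rand(50)
U = Matrix(UpperTriangular(A))

x_general    = toy_backslash(A, b)
x_triangular = toy_backslash(U, b)
```

The checks cost O(n²) against the O(n³) factorization they select, which is exactly why this layer is cheap relative to the real work.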