I recently updated the MKLSparse.jl package for Julia 0.5 and 0.6.
The most useful feature of MKLSparse is likely the ability to seamlessly accelerate sparse matrix-vector multiplications (the main workhorse in iterative solvers). Benchmarking with a representative matrix, I get the following timings:

```julia
julia> @time for i in 1:1000 A_mul_B!(c, K, b) end;
  2.901099 seconds (18.45 k allocations: 994.534 KiB)

julia> using MKLSparse

julia> @time for i in 1:1000 A_mul_B!(c, K, b) end;
  0.877888 seconds (31.31 k allocations: 1.641 MiB)
```
where we can see that performance is greatly increased simply by loading MKLSparse (results will vary depending on the system this is run on).
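For reference, a benchmark like the one above can be set up as follows. This is a minimal sketch: the actual matrix `K` used in the timings is not shown in the post, so a random sparse matrix stands in for it here.

```julia
# Julia 0.6-era syntax, matching the post.
n = 100_000
K = sprand(n, n, 1e-4)  # stand-in for a "representative" sparse matrix
b = rand(n)
c = zeros(n)

# In-place sparse matrix-vector product c = K * b; loading MKLSparse
# swaps in an MKL-accelerated method for this call.
A_mul_B!(c, K, b)
```

After `using MKLSparse`, the same loop exercises the MKL-backed method without any changes to the calling code.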
A bonus with the new version of MKLSparse is that Julia no longer needs to be built with MKL in order to use the package. Instead, it is enough to have MKL installed and the paths correctly set for the package to work.
While the DSS (Direct Sparse Solver) interface is not yet wrapped, the Pardiso.jl package can be used instead to solve general sparse systems using MKL.
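To illustrate, solving a sparse system through MKL's Pardiso solver looks roughly like the sketch below (assuming Pardiso.jl is installed and can locate MKL; the example system here is made up):

```julia
using Pardiso

ps = MKLPardisoSolver()          # backed by the Pardiso solver shipped with MKL

A = sparse(rand(10, 10) + 10I)   # small, diagonally dominant example system
b = rand(10)

x = solve(ps, A, b)              # solves A * x = b
```

This covers the same use case the DSS interface would, so nothing is lost in practice by DSS not yet being wrapped.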