Computational Speed

Julia uses OpenBLAS by default, and MATLAB uses MKL. The easiest way to use Julia with MKL is probably MKL.jl.

EDIT:
BLAS is multithreaded.
If BLAS is using 8 threads, CPU time will accumulate at about 8 CPU seconds per second of wall time.
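To see that distinction directly, CPUTime.jl (the package used further down in this thread) can report CPU seconds next to `@time`'s wall seconds; a minimal sketch:

```julia
using CPUTime        # third-party package; provides CPUtic()/CPUtoc()
using LinearAlgebra

A = rand(2000, 2000)
CPUtic()
@time inv(A)         # wall-clock seconds
CPUtoc()             # CPU seconds; with 8 BLAS threads this can be roughly 8x the wall time
```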

Therefore, be sure to actually time both in the same way.
For example, simply:

@time begin
    MM = rand(Float64, 10000, 10000)
    mmm = inv(MM)
end
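As a side note, a single `@time` also includes compilation on the first call; BenchmarkTools.jl runs the expression repeatedly and reports the minimum, which gives more stable numbers. A sketch, with a smaller matrix so it finishes quickly:

```julia
using LinearAlgebra
using BenchmarkTools   # third-party package

A = rand(Float64, 1000, 1000)   # smaller than 10_000 so the benchmark runs fast
@btime inv($A);                 # $ interpolates A, avoiding global-variable overhead
```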

I used your method in Julia:


43.088418 seconds (13 allocations: 1.495 GiB, 0.42% gc time)

MATLAB


20.122736 seconds.

If I use MKL, then


 43.684950 seconds (13 allocations: 1.495 GiB, 1.06% gc time)


using CPUTime
using LinearAlgebra

julia> BLAS.vendor()
:mkl

# CPUtic()
@time begin
    MM = rand(Float64, 10000, 10000)
    mmm = inv(MM)
end
# CPUtoc()

How many physical cores does your CPU have?
Often, that will be half of Sys.CPU_THREADS, but sometimes the two will be equal.

Try BLAS.set_num_threads(NUMBER_PHYSICAL_CORES). When the numbers are not equal, this will be faster than using Sys.CPU_THREADS threads.
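Concretely, that looks like this (the thread count here is an assumption; substitute your machine's physical-core count):

```julia
using LinearAlgebra

@show Sys.CPU_THREADS        # logical threads, often 2x the physical core count
BLAS.set_num_threads(4)      # assumed 4 physical cores; adjust for your CPU
```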

Almost 100% of the time is being spent in BLAS/LAPACK, so if both are using MKL, I don’t know why there ought to be a difference, unless they’re defaulting to different BLAS calls under the hood.

Also, I’d just like to confirm that you restarted your Julia session after building MKL.jl?


If you use MKL in both cases, you can get a more realistic result. At least I find this:

With MATLAB:

>> tic()
MM = rand(10000,10000);
mmm = inv(MM);
toc()
Elapsed time is 15.251977 seconds.

With JuliaPro 0.6.0 (MKL):

function testInv(n)
    MM = rand(Float64, n, n)
    mmm = inv(MM)
end

julia> @time testInv(10000);
 13.208193 seconds (21 allocations: 2.285 GiB, 1.47% gc time)

If I use Juno:

# CPUtic()
function testInv(n)
    MM = rand(Float64, n, n)
    mmm = inv(MM)
end

@time testInv(10000);

47.049461 seconds (2.82 k allocations: 1.495 GiB, 0.36% gc time)

MATLAB


tic()

MM=rand(10000,10000);
mmm=inv(MM);

toc()
Elapsed time is 23.070654 seconds.

Jupyter


 43.998483 seconds (1.19 M allocations: 1.552 GiB, 0.19% gc time)

Directly in the REPL, with OpenBLAS (Julia 1.1.0) I get:

 30.744067 seconds (13 allocations: 1.495 GiB, 0.19% gc time)

vs. MKL (JuliaPro 0.6.0):

 13.208193 seconds (21 allocations: 2.285 GiB, 1.47% gc time)

I am using Julia 1.1.0 in Juno.

So, if I want to improve the computational speed, do I have to use JuliaPro?

FYI, benchmarking the inversion of a large matrix only depends on the BLAS library that is linked; it basically has nothing to do with the language. See also BLAS: Julia slower than R


Sorry about that. May I ask why we do not use MKL as the default library?

It is not open source.


Free Intel Performance Libraries for Everyone?

Free but not open source.

Warning: both Julia and NumPy are linked with MKL, which may cause conflicts and crashes (#433).

I think it is time for a FAQ entry, so I made one:


Hi @Tamas_Papp, thank you!


Gratis, but not free. In RMS’ words, free as in free beer, not free as in freedom.

Apart from politics, it is apparently nontrivial to sort out licensing in a way that permits distributing compiled Julia binaries with both MKL and GMP. Microsoft seems to believe that it is legally possible under some circumstances (cf. the existence of MRAN), and they presumably have competent lawyers who figured this out.

It looks like they are re-licensing MKL as “Microsoft R Services MKL”. I assume this involves a deal between MS and Intel, as legally you enter into a license agreement with MS if using this product (even the MKL part).

I understand the considerations for speed, but I think that using a FOSS library by default for Julia is the right choice. I would be uncomfortable with using a black box (a very nice, well-tested black box, but a black box nevertheless) for research. Especially since if someone really wants to do it, installing & using MKL is always an option.

IMO the discrepancy in naive benchmarks between various languages is an orthogonal issue and should be addressed by user education.

AFAIK, the main problem is the possibility of running afoul of the GPL when distributing binaries that form a derived work (MRAN, Julia) of both GPL code (R, GMP) and unfree code (MKL). I don’t exactly see how a deal between MS and Intel helps with that.

IANAL, but it is definitely plausible that this is OK, in the same way that some linux distros dare to ship binaries for zfs; and it is at least not totally implausible that this is problematic, and some distros don’t ship zfs binaries (i.e. require zfs users to compile at home, as currently necessary when using MKL with julia). I’ll assume that whatever MS is doing is legally sound, but can’t guess at what special circumstances are relevant to replicating that feat.

Agree!


Another problem is that if Julia links to MKL with the 64-bit BLAS API then any packages that link to libraries using the ordinary MKL API (32-bit BLAS) will crash. For example, any packages calling Python numpy routines, such as PyPlot, will crash with an MKL numpy (see e.g. https://github.com/JuliaPy/PyCall.jl/issues/443).
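One way to check which integer convention your own Julia build uses is `LinearAlgebra.BlasInt`, the integer type Julia passes to BLAS (`Int64` for the 64-bit ILP64 API, `Int32` for the 32-bit LP64 one):

```julia
using LinearAlgebra

println(LinearAlgebra.BlasInt)   # Int64 on a standard 64-bit Julia build
```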

Of course, this is related to MKL being non free/open-source — otherwise, we could recompile MKL with renamed symbols to avoid library conflicts, like we do with OpenBLAS. (We could try to do it at the binary level with objcopy ala https://github.com/JuliaLang/julia/pull/8734 … I’m not sure if that would be permissible according to the MKL license?)
