I am excited to announce the revived VML.jl as IntelVectorMath.jl.
This package provides easy access to the vectorized versions of common math functions from the Intel Vector Math Library, which is part of Intel MKL. As such, it currently requires either that the MKL.jl package is installed or that standalone MKL from the Intel website is available.
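As a rough illustration, here is a minimal usage sketch. It assumes the package exports the `IVM` module alias and element-wise functions such as `IVM.sin`, as described in the README; check the repository for the exact API.

```julia
using IntelVectorMath  # exports the IVM module alias

a = rand(10_000)

# Vectorized sine from MKL's VML, applied to the whole array in one call
b = IVM.sin(a)

# Should agree with broadcasting Base.sin to within floating-point accuracy
isapprox(b, sin.(a))
```

The key difference from `sin.(a)` is that the loop over elements runs inside MKL's hand-tuned kernel rather than as a Julia-level broadcast.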
These functions are often much faster than broadcasting the Base functions, as benchmarked by @Amin_Yahyaabadi.
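For anyone who wants to reproduce such a comparison on their own machine, a sketch using BenchmarkTools (assuming both packages are installed) might look like:

```julia
using BenchmarkTools
using IntelVectorMath

a = rand(1_000)

# Broadcasting the Base function: the element loop runs in Julia
@btime sin.($a)

# The vectorized MKL kernel via IntelVectorMath
@btime IVM.sin($a)
```

Interpolating `a` with `$` keeps the global lookup out of the timed code, so the two timings measure only the function calls themselves.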
In the future I hope to use the new artifact system so that the package handles the binaries itself, and to learn more exciting things in the process.
We are happy to take any comments or suggestions.
I think it’s irresponsible to write the following in the README:
To use IntelVectorMath.jl, you must have the shared libraries of the Intel Vector Math Library available on your system. The easiest option is to use MKL.jl via
julia> ] add https://github.com/JuliaComputing/MKL.jl.git
Installing MKL.jl will cause the user’s sysimage to be deleted and rebuilt with MKL instead of OpenBLAS. So if a curious user just wants to give your package a spin and runs the above snippet without visiting the MKL.jl GitHub page and understanding what it does, they could break their Julia installation and run into some rather mysterious bugs.
I opened a PR that would change this, but I figured I should post here as well for the benefit of users in the meantime.
Thanks for noticing. I think we had a warning to this effect at some point, but it got removed in subsequent rewrites.
I have clarified the install option in more detail.
Unfortunately, this package is currently not ready for a quick spin due to the external dependency. Once I understand the artifact/build system I will make changes so that this package can just be added and is ready for use, but I am still new to things more exciting than Matlab.
I updated the master branch, and now it includes the benchmark result:
This package now uses BinaryProvider and as such no longer requires external dependency management. It can now be taken for a spin just like any other package.
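Concretely, installation should now be a single step (assuming the registered package name matches the README; the `IVM.exp` call is just a quick smoke test):

```julia
using Pkg
Pkg.add("IntelVectorMath")   # package name assumed from the README

using IntelVectorMath
IVM.exp([0.0, 1.0])          # vectorized exponential on a small array
```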
Off-topic: when I first saw MKL, it had a restrictive license that did not allow commercial use, so I ended up just ignoring it. Now I see that I can indeed use it as I want. To spare me the trouble, please tell me: does it make a significant difference for small matrices?
These charts, taken from the performance section of the repository, are for small vectors; refer to the repository for more charts. Even for a vector of dimension 20 you can get a 2x speedup. Of course, for larger sizes the benefit is much greater.