Hi everyone

I have a Vega RX 64 and I would like to speed up eigenvalues decomposition on it.

Does anybody know a (simple) solution with Julia ?

Short answer: AFAICT there isn't even a not-simple solution.

To use a GPU effectively for dense eigensystem computations is quite difficult. The functionality was only recently added to the CUDA libraries (useless to you) and is conspicuously absent from the prominent projects using OpenCL (e.g. ArrayFire). There was some work in the clMagma project, but that seems to have been abandoned well before recent upgrades to the AMD software stack.

You say "decomposition", so I assume you mean for dense matrices. For a few eigenvalues/vectors of sparse matrices there is more hope, but it's still not simple.

Complex numbers on GPU

(Look at the develop-upstream branch.)

They have unit tests for HIP, but I'm not sure what functionality is currently supported. The last PRs were very recent. HIP is cross-platform, compiling through hcc for AMD and through CUDA for NVIDIA. There are also pure CUDA unit tests in Eigen.

So it may be a matter of creating a few shared libraries and `ccall`ing them.
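For anyone unfamiliar with the mechanism, `ccall` lets Julia invoke a C-ABI function from any shared library with no glue code. A minimal sketch, using the system math library `libm` as a stand-in for a hypothetical HIP/Eigen shared library you would build yourself:

```julia
# Call `cos` from the C math library. The tuple names the symbol and the
# library; then come the return type and a tuple of argument types.
x = ccall((:cos, "libm"), Cdouble, (Cdouble,), 0.0)
# x == 1.0
```

The same pattern would apply to an exported eigensolver entry point, with the library path pointing at your compiled `.so`.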

Gregory Stoner of the ROCm project said months ago that there was some work underway on a ROCm-native for Julia, like CUDAnative. Not sure if there's any progress on that front.

EDIT: HIP/ROCm only work on Linux. I think any distribution is supposed to work starting with Kernel 4.18, but at the moment only Ubuntu 16.04, and RHEL/CentOS 7.4 & 7.5 are officially supported.

This probably doesn't qualify as "simple".

Also, want to point out that HIP/ROCm is pretty good. Their BLAS libraries were almost twice as fast on Vega as CLBLAS when I benchmarked them about five months ago.

I'm trying to use ArrayFire.jl, but when I try some linear algebra operations I get an MKL error:

```
using ArrayFire;
a = rand(10, 10);
ad = AFArray(a);
det(ad)
```

`Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.`
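As a sanity check (assuming the failure is confined to ArrayFire's bundled MKL), the same determinant on a plain host array should still work through the BLAS/LAPACK that Julia itself ships with:

```julia
using LinearAlgebra  # provides `det` on Julia >= 0.7

a = rand(10, 10)
det(a)  # plain CPU determinant, bypassing ArrayFire's MKL path entirely
```

If you already have an `AFArray`, I believe `Array(ad)` should copy it back to a host array first, so you can at least fall back to the CPU for the operations that crash.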

Simple operations with the variable `ad`, like `sin(ad)` or `maximum(ad)`, do work. However, operations like `det`, `qr`, or `svd` don't.

I've run the MKL environment script before starting Julia:

`source /opt/intel/mkl/bin/mklvars.sh intel64`

I also followed the suggestions in [1]:

```
conda install nomkl numpy scipy scikit-learn numexpr
conda remove mkl mkl-service
```

But none of these commands solved my problem.

I'm open to suggestions.

For reference:

[1] https://stackoverflow.com/questions/36659453/intel-mkl-fatal-error-cannot-load-libmkl-avx2-so-or-libmkl-def-so

Conda is a Python package manager.

The following shared libraries (and versions) come with (a recent) ArrayFire:

```
libafcpu.so libaf.so libforge.so.1.1.0 libmkl_mc3.so
libafcpu.so.3 libaf.so.3 libglbinding.so.2 libmkl_mc.so
libafcpu.so.3.6.1 libaf.so.3.6.1 libglbinding.so.2.1.4 libmkl_tbb_thread.so
libafcuda.so libcublas.so.9.2 libmkl_avx2.so libnvrtc-builtins.so
libafcuda.so.3 libcufft.so.9.2 libmkl_avx512.so libnvrtc.so.9.2
libafcuda.so.3.6.1 libcusolver.so.9.2 libmkl_avx.so libOpenCL.so.1
libafopencl.so libcusparse.so.9.2 libmkl_core.so libtbb.so
libafopencl.so.3 libforge.so libmkl_def.so libtbb.so.2
libafopencl.so.3.6.1 libforge.so.1 libmkl_intel_lp64.so
```

You can see that `libmkl_avx2.so` and `libmkl_def.so` are in that list. Make sure they're on your library path, so Julia can find them.
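One way to check whether the loader can actually resolve those libraries is Julia's `Libdl` stdlib. A quick diagnostic sketch, using the `/opt/arrayfire/lib` directory mentioned in this thread:

```julia
using Libdl  # stdlib for runtime loading of shared libraries

libdir = "/opt/arrayfire/lib"          # adjust to your ArrayFire install
push!(Libdl.DL_LOAD_PATH, libdir)      # make Julia search this directory too

# dlopen_e returns C_NULL instead of throwing when the load fails:
handle = Libdl.dlopen_e(joinpath(libdir, "libmkl_def.so"))
handle == C_NULL && println("libmkl_def.so could not be loaded")
```

If that prints the failure message, the problem is below Julia (missing transitive dependencies, wrong architecture, etc.) and `ldd` on the library should show which dependency is unresolved.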

However, the fact that it was trying to use MKL at all suggests it was doing the calculations on your CPU, not your GPU?

I'm not familiar with ArrayFire.

Yep, all these files are there, and on my path:

```
> echo $LD_LIBRARY_PATH
:/opt/arrayfire/lib
```

Should I change other path variables too?

Hmm, just installed ArrayFire and…

```
julia> a = rand(Float32, 10^3, 10^3);
julia> ad = AFArray(rand(Float32, 10^3, 10^3));
julia> @time det(ad)
Intel MKL FATAL ERROR: Cannot load libmkl_def.so.
```

EDIT:

Also, when I tested last November, ArrayFire did not support Vega yet:

https://groups.google.com/forum/#!searchin/arrayfire-users/Vega|sort:date/arrayfire-users/SupCI8sTcdM/oiDH6bXtCAAJ

Not sure if that's changed. Matmul works, but for large matrices it is about 3x slower than HIP BLAS (still faster than the CPU).

I am not sure what the problem is.

There are a lot of calls here, and most work.

I suspect it is the linear algebra ones that do not. They're the ones that had problems then, too.

```
julia> qr(afc)
Intel MKL FATAL ERROR: Cannot load libmkl_def.so.
```

Just for the record, I've just tried a fresh new install of ArrayFire and I'm getting the exact same problem:

Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so.

…which makes it a pity, as I was hoping to use the library for something more interesting than simply adding or multiplying matrices.

Does anybody know the status of that?

BTW, I've installed it on Ubuntu MATE 18.04.

Best,

Ferran.

Hi, it works on OS X for me.

```
julia> using ArrayFire
ArrayFire v3.6.0 (OpenCL, 64-bit Mac OSX, build 49a5436)
[0] APPLE: AMD Radeon Pro 560 Compute Engine, 4096 MB
-1- APPLE: Intel(R) HD Graphics 630, 1536 MB
julia> a = rand(Float32, 10^3, 10^3);
julia> ad = AFArray(rand(Float32, 10^3, 10^3));
julia> @time det(ad)
1.808117 seconds (1.56 k allocations: 84.891 KiB)
(Inf, 0.0)
julia> @time det(ad)
0.061030 seconds (7 allocations: 224 bytes)
(Inf, 0.0)
julia> @which det(ad)
det(_in::ArrayFire.AFArray) in ArrayFire at .julia/v0.6/ArrayFire/src/wrap.jl:1585
```