I installed CUDA 10.2 on Ubuntu 18.10 and was able to build and run some of the examples from the CUDA samples folder, such as deviceQuery.
But when I use Julia 1.0.5 LTS, CuArrays can’t find libcuda,
and running with JULIA_DEBUG=CUDAapi does not show me the directory probe information… Not sure how to solve this issue at this point. Would someone be able to point me in the right direction? Thanks in advance.
$ JULIA_DEBUG=CUDAapi julia-1.0.5
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.0.5 (2019-09-09)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |
julia> using CuArrays
[ Info: CUDAdrv.jl failed to initialize, GPU functionality unavailable (set JULIA_CUDA_SILENT or JULIA_CUDA_VERBOSE to silence or expand this message)
[ Info: Recompiling stale cache file /home/ctrotter/.julia/compiled/v1.0/CuArrays/7YFE0.ji for CuArrays [3a865a2d-5b23-5a0f-bc46-62713ec82fae]
Nowhere; it should be readily available since it’s an OS/kernel-dependent library, so it should be discoverable using dlopen without additional arguments. If it isn’t, you might have to add it to ld.so.conf or use LD_LIBRARY_PATH as a workaround.
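For reference, a quick way to check this from the REPL (a minimal sketch using only the standard Libdl module, not the actual probing code of the CUDA packages):

using Libdl

# Probe for the driver library without giving a path, which is roughly what the
# package has to do; dlopen_e returns C_NULL instead of throwing when it fails.
handle = Libdl.dlopen_e("libcuda")
if handle == C_NULL
    handle = Libdl.dlopen_e("libcuda.so.1")
end
println(handle == C_NULL ? "libcuda is not discoverable" : "libcuda loaded fine")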
I added /usr/lib/x86_64-linux-gnu/ to LD_LIBRARY_PATH in an effort to solve this issue. Originally it was just /usr/local/cuda-10.2/lib64 and I see that libcuda.so is not there.
Looking at where libcuda.so is located, I notice it is one more directory level down (…/lib/stubs/libcuda.so). But adding the /stubs directory to the conf file didn’t solve the issue. Edit: I put /usr/local/cuda-10.2 in the cuda-10-2.conf file and rebooted, and that worked too.
Eventually, I solved this by doing one of the following.
Add the location of libcuda.so to LD_LIBRARY_PATH. In my case, it is the last line of the output of locate libcuda.so:
$ export LD_LIBRARY_PATH=/usr/local/cuda-10.2/targets/x86_64-linux/lib/stubs/:$LD_LIBRARY_PATH
Or make a link in /usr/lib/x86_64-linux-gnu/:
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so
I found this solution because Libdl.dlopen("libcuda.so.1") works.
Given that not everyone has sudo privileges, the first option is probably the better one.
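To double-check which file actually gets picked up after either fix, Libdl can resolve the path directly (just a sanity check, not something CuArrays requires):

using Libdl
Libdl.dlpath("libcuda")   # full path of the library dlopen resolves, e.g. the symlink or the stubs copy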
I probably should open another thread for this question, but maybe you @maleadt have a quick answer.
Can the CUDA.jl calls be compiled into an executable?
In other words, I am compiling everything in my application into an executable for deployment, and some of the functions require the GPU. If this is not a short answer, I will be happy to create another question and include a minimal working example.
PackageCompiler-style? No, that’s not supported. It used to be possible to write the compiled PTX kernels to disk and load them instead of recompiling, but that didn’t help much (compilation is fairly quick, tens of ms for reasonable kernels; it’s the initial compilation of CUDAnative that takes long).
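To give a feel for the numbers, here is a minimal sketch (assuming the CuArrays.jl/CUDAnative.jl packages used above; the kernel name and sizes are just placeholders) showing that the first launch pays the per-kernel compilation cost and later launches are cheap:

using CuArrays, CUDAnative

function add_one!(a)                       # trivial kernel, one element per thread
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(a)
        @inbounds a[i] += 1f0
    end
    return
end

d_a = CuArray(zeros(Float32, 1024))

@time @cuda threads=256 blocks=4 add_one!(d_a)   # first launch: includes PTX compilation (tens of ms)
@time @cuda threads=256 blocks=4 add_one!(d_a)   # later launches: just the (asynchronous) launch
Array(d_a)                                       # copying back forces synchronization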