I have installed a small GeForce GT 610 in my box, and in the BIOS directed video output to the integrated graphics (IGFX), which should leave the CUDA card free for crunching numbers.
I had a problem getting CUDAnative.jl installed and would appreciate advice. I know the card works on the video side, since openSUSE and the motherboard BIOS defaulted to it on install and I corrected this. However, I have yet to get any acknowledgement that the CUDA side is working and available. I have the CUDA Toolkit v8 installed from NVIDIA, and I'm using Julia v0.6-dev 2444.
Following the installation guide for CUDAnative.jl, I ran Pkg.add("CUDAnative"), which ended with:
INFO: Building CUDAnative
WARNING: deprecated syntax "local t0=CuEvent(), t1=CuEvent()" at /home/colin/.julia/v0.6/CUDAdrv/src/events.jl:65.
Use "local t0, t1 = CuEvent(), CuEvent()" instead.
=============================[ ERROR: CUDAnative ]==============================
LoadError: InitError: CUDAdrv.CuError(100,Nullable{String}())
during initialization of module CUDAdrv
while loading /home/colin/.julia/v0.6/CUDAnative/deps/build.jl, in expression starting on line 3
================================================================================
ERROR: UndefVarError: CUDAdrv not defined
in deserialize(::SerializationState{IOStream}, ::Type{Module}) at ./serialize.jl:611
in handle_deserialize(::SerializationState{IOStream}, ::Int32) at ./serialize.jl:590
in deserialize(::SerializationState{IOStream}) at ./serialize.jl:550
…
Thank you for reporting this. Can you show us the output of Pkg.status()?
I just tagged a release that removes the deprecation warnings on v0.6. You can test that release right now by doing Pkg.checkout("CUDAdrv"). Then do a Pkg.test("CUDAdrv") to verify that your GPU and CUDAdrv get along.
Here is my Pkg.status() after the recommended checkout, with Julia updated to dev 2464 and a Pkg.update(). Pkg.test("CUDAdrv") now produces the same error, without the warning from the previous report:
Hmmm, here is something else, I will try to sort this out:
> using CUDAdrv
ERROR: InitError: No CUDA-capable device (CUDA error #100, ERROR_NO_DEVICE)
Stacktrace:
[1] init() at /home/colin/.julia/v0.6/CUDAdrv/src/base.jl:61
[2] __init_library__() at /home/colin/.julia/v0.6/CUDAdrv/src/base.jl:78
[3] __init__() at /home/colin/.julia/v0.6/CUDAdrv/src/CUDAdrv.jl:29
[4] _include_from_serialized(::String) at ./loading.jl:157
[5] _require_from_serialized(::Int64, ::Symbol, ::String, ::Bool) at ./loading.jl:194
[6] _require_search_from_serialized(::Int64, ::Symbol, ::String, ::Bool) at ./loading.jl:224
[7] require(::Symbol) at ./loading.jl:409
during initialization of module CUDAdrv
It's a permissions issue. I can get CUDAdrv and CUDAnative installed as root, but then, as @vchuravy warns, I run into the "no kernel image available" error. I will dig for more info on the permissions.
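For anyone wanting to check the same thing, a rough sketch of what to look at (the `/dev/nvidia*` paths are the standard NVIDIA device nodes; exact permissions vary by distribution):

```shell
# List the NVIDIA device nodes; when CUDA works as root but not as a
# regular user, these often exist but are not accessible to the user.
ls -l /dev/nvidia* 2>/dev/null || echo "no /dev/nvidia* nodes found"

# Show which groups the current user belongs to, for comparison
# against the group owning the device nodes.
id -nG
```

If the nodes are owned by a group the user is not in, that would explain root working while a regular user gets ERROR_NO_DEVICE.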
I think the problem is that you set the integrated GPU (the Intel one) to be active.
I suspect that the driver is not even loaded by the kernel.
It used to be that only Tesla GPUs could run headless; I don't know what the situation is today.
You should research that and get to the point where the GPU is working (try running nvidia-smi on Linux), and only then try the Julia wrapper CUDAnative.
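To sketch those checks, assuming a standard driver install (`nvidia-smi` ships with the NVIDIA driver):

```shell
# 1. Is the NVIDIA kernel module loaded?
if lsmod | grep -q '^nvidia'; then
    echo "nvidia kernel module loaded"
else
    echo "nvidia kernel module NOT loaded"
fi

# 2. Does the driver see the GPU? nvidia-smi should list the card;
#    an error here means the driver/GPU is not usable yet, and the
#    Julia wrappers have no chance of working.
nvidia-smi
```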
(Embarrassed) I'm afraid it is a case of RTFM. The permissions issue was a result of my failure to add the user to the video group, per NVIDIA's explicit instructions for installing CUDA. Now I can Pkg.add both CUDAdrv and CUDAnative as a regular user. Julia tests pass on CUDAdrv but fail on CUDAnative, so I will close this query and start a new one. Thank you very much for your assistance and concern.
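For anyone who lands here with the same symptom, a minimal sketch of the fix, assuming the driver uses the `video` group as in NVIDIA's Linux installation guide (the group name can differ per distribution):

```shell
# Add the current user to the "video" group if not already a member;
# a re-login is needed for the new group membership to take effect.
if id -nG | tr ' ' '\n' | grep -qx video; then
    echo "already in the video group"
else
    sudo usermod -aG video "$(id -un)"
    echo "added; log out and back in, then retry Pkg.test(\"CUDAdrv\")"
fi
```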