I am trying to use the Mocha package, and I've downloaded the cuDNN (CUDA Deep Neural Network) library from NVIDIA. I put a copy of it in /libs and another in ~/.julia/v0.5/Mocha/Deps. No luck running Pkg.test("Mocha"). If I turn off the GPU backend it works fine, so I know all other parts of my setup are correct. The Mocha documentation skips the part about what to do once you have downloaded the shared library.
I know this is off topic, but I couldn't find an appropriate Discourse-like site for Mocha. If there is one, please redirect me there. Thanks!
Mocha seems to use regular `Base.find_library`, so depending on your setup either add something to e.g. `LD_LIBRARY_PATH` (Linux), or edit the relevant `find_library` calls (i.e. the ones discovering CUDA and cuDNN in `cuda.jl` and `cudnn.jl`, respectively) to include your CUDA toolkit library directory in the (currently empty) array of directories passed as the second argument.
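If you go the source-editing route, the change is small. Here is a minimal sketch of what such a call could look like; the exact call site in Mocha's `cuda.jl`/`cudnn.jl` and the directory paths below are assumptions, so adjust them to your install:

```julia
# Hedged sketch: pass your CUDA/cuDNN library directory as the second argument
# to find_library instead of an empty array. The paths here are assumptions.
const cuda_dirs = ["/usr/local/cuda/lib64", expanduser("~/.julia/v0.5/Mocha/Deps")]

# find_library returns the name/path it managed to dlopen, or "" if nothing was found.
const libcudnn = Base.Libdl.find_library(["libcudnn"], cuda_dirs)
isempty(libcudnn) && error("could not locate libcudnn in $(cuda_dirs)")
```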
Their build system should probably discover your library, or provide a way to point towards it (e.g. a `CUDA_HOME`/`CUDA_ROOT` env var).
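For illustration, a rough sketch of how such an environment variable could be used to build the search path yourself before loading Mocha; the `CUDA_HOME`/`CUDA_ROOT` names and the lib64 layout are assumptions, not something Mocha's build is confirmed to honour:

```julia
# Assumption: CUDA_HOME (or CUDA_ROOT) points at your toolkit root, e.g. /usr/local/cuda,
# set before starting Julia; derive candidate library directories from it.
cuda_home = get(ENV, "CUDA_HOME", get(ENV, "CUDA_ROOT", "/usr/local/cuda"))
search_dirs = [joinpath(cuda_home, "lib64"), joinpath(cuda_home, "lib")]

libcuda  = Base.Libdl.find_library(["libcuda"],  search_dirs)
libcudnn = Base.Libdl.find_library(["libcudnn"], search_dirs)
println("found: libcuda=$(libcuda), libcudnn=$(libcudnn)")  # empty string means not found
```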
OK, that is sound advice! I will give it a try later today. The machine I am using here at work has two Titan cards in it, and the Black-Scholes option-pricing demo calculated 25 billion prices a second. So I am keenly interested to see whether I can get similar computing power with my large data sets.