How to get started with GPU programming? OpenCL or CUDA?

I flagged this conversation while I was travelling, so I’m coming late to the game. I agree with the answers above and, of course, @sdanisch is one of the Julia experts on GPU programming.

I’d just add that, given your above answers to the comments,

  1. Stick with OpenCL for now. CUDA will potentially give marginally better performance (it will on an NVIDIA card), but the faster card you have right now does not support CUDA.
  2. Julia removes most of the harder parts of doing OpenCL. The OpenCL package handles most of the verbose C boilerplate for you; you just need to write a kernel.
  3. Don’t be afraid of kernel writing: a kernel is basically the body of your for loop. The difficult part is moving data (memory) to and from the device, and Julia’s OpenCL package does that for you.
  4. The Julia abstractions over core GPU functionality (GPUArrays, etc.) are useful, but should largely be treated as a separate approach from writing a kernel yourself. [I’m giving starting-out advice here; mixing the paradigms is going to get confusing.]
  5. Your comments about LLVM code generation are correct, but that will appear with Vulkan, the update to OpenCL, which we’re all hoping for. So I wouldn’t waste too much time trying to get it to work with the current Julia/LLVM system; Vulkan will be a big change and a much more worthwhile target.
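To make points 2 and 3 concrete, here is a minimal vector-add sketch in the style of the OpenCL.jl README example (the kernel name `vadd` and the exact buffer calls are illustrative and may differ between package versions): you write the small kernel in OpenCL C, and the package takes care of context creation and the host-to-device transfers.

```julia
using OpenCL

# The kernel: essentially just the body of your for loop, in OpenCL C.
const vadd_source = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c)
{
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

device, ctx, queue = cl.create_compute_context()

a = rand(Float32, 50_000)
b = rand(Float32, 50_000)

# OpenCL.jl does the host-to-device copies for you.
a_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=a)
b_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=b)
c_buff = cl.Buffer(Float32, ctx, :w, length(a))

prog = cl.Program(ctx, source=vadd_source) |> cl.build!
k    = cl.Kernel(prog, "vadd")

# Launch one work item per element, then read the result back.
queue(k, size(a), nothing, a_buff, b_buff, c_buff)
r = cl.read(queue, c_buff)

@assert r ≈ a .+ b
```

Note that the only GPU-specific code you write is the kernel string; everything else is ordinary Julia. (This needs an OpenCL device to actually run.)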

If it can be of any help, please help yourself liberally to code from two workshops that I ran this year:
