Thank you for taking the time to read my post; I really appreciate it! I am currently a C# developer on Windows, but I have been learning Julia for data research and personal projects. I want to learn how to modify my applications to use a GPU, but I am stuck right at the beginning.
My current setup is Windows 10 with an AMD R9 290, which makes me lean towards OpenCL.jl. But it seems there is a lot more support for CUDA in general, and I really like how the CUDAnative.jl package sounds: since I program in C#, not having to write my kernels in C/C++ would be a bonus for me. I’m hesitant to switch to an NVIDIA GPU because my end application is financial modeling, so I am assuming I will want double precision (is that right?), and AMD consumer cards seem to excel there.
So I guess my main questions are:
- If you do a lot of GPU programming, is there no way around writing kernels in C/C++, so that CUDA vs. OpenCL is really just a choice of API?
- If OpenCL uses LLVM, can’t Julia compile to it directly? Could an OpenCLnative.jl be made? (Sorry if this is a novice/stupid question.)
- AMD ROCm (http://developer.amd.com/tools-and-sdks/radeon-open-compute-platform/): what is this? It seems to be a subset of C++ that compiles (via LLVM) differently depending on your hardware? Is this useful, or a compilation path a Julia package could target? It is open source, so could Julia wrap it, maybe?
- Even if it isn’t Julia-based, is there a good tutorial or course you would recommend for getting started? I don’t mind if it is on Linux.
- Half, single, and double precision: how important are the speed differences, and are they application/field dependent? Does double precision use more GPU memory, and could that end up being the bottleneck?
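For concreteness on the kernel question, here is roughly what I understand a CUDAnative.jl kernel looks like (adapted from its README; I can’t run this on my AMD card, so treat it as a sketch rather than something I’ve verified):

```julia
using CUDAdrv, CUDAnative

# Each GPU thread adds one pair of elements: the "hello world" of kernels.
function kernel_vadd(a, b, c)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    c[i] = a[i] + b[i]
    return nothing
end

n = 1024
a = rand(Float32, n)
b = rand(Float32, n)

d_a = CuArray(a)      # copy inputs to GPU memory
d_b = CuArray(b)
d_c = similar(d_a)    # allocate output on the GPU

@cuda (1, n) kernel_vadd(d_a, d_b, d_c)  # launch 1 block of n threads
c = Array(d_c)        # copy the result back to the host

@assert c ≈ a + b
```

If this really is plain Julia all the way down, that alone would make a NVIDIA card tempting despite the double-precision question.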
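On the memory side of the precision question, at least the sizing part is simple arithmetic: a Float64 is 8 bytes, twice a Float32 and four times a Float16, so the same-length array costs proportionally more GPU memory. A quick back-of-the-envelope check (plain Julia, nothing GPU-specific assumed):

```julia
# Memory footprint of a 10^8-element vector at each precision
n = 10^8
for T in (Float16, Float32, Float64)
    println(T, ": ", sizeof(T), " bytes/element, ",
            n * sizeof(T) ÷ 2^20, " MiB total")
end
```

So on a 4 GB card like the R9 290, doubling precision halves how much data fits, before even considering throughput differences.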
Thank you for reading,