Almost certainly this has been thought of, but surprisingly I haven’t been able to find any mention of it on Discourse, nor really any repos… (although I noticed VulkanCore is perhaps waking up).
Has there been interest in working on compiling Julia to SPIR-V? It seems like a natural extension of the work done by @maleadt with NVPTX via CUDAnative and @sdanisch on OpenGL rendering in Makie/AbstractPlotting.
I see recent discussions like these and recall this blog post.
Would it be foolish/naive to say that Julia (or a subset of it) is extremely well-poised to work as a shader language, and/or to abuse the compute shader functionality in Vulkan?
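For concreteness, here’s roughly what kernel programming looks like with CUDAnative today — a minimal sketch, and presumably the sort of interface a SPIR-V backend could mirror:

```julia
using CUDAnative, CuArrays

# A simple SAXPY-style kernel: each thread updates one element.
function axpy!(y, a, x)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

x = CuArray(ones(Float32, 1024))
y = CuArray(zeros(Float32, 1024))
@cuda threads=256 blocks=4 axpy!(y, 2f0, x)
```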
We were originally hoping that we could target SPIR-V compute as a cross-platform intermediate representation for NVIDIA, AMD, and Intel accelerators. This didn’t materialize: NVIDIA doesn’t (to my possibly outdated knowledge) support SPIR-V compute or OpenCL 2.x, AMD focused on HSA, and LLVM still doesn’t support SPIR-V as a virtual target, although there are converter libraries from LLVM IR to SPIR-V.
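To illustrate the converter route: the Khronos SPIRV-LLVM-Translator ships an `llvm-spirv` tool that round-trips between LLVM bitcode and SPIR-V. A minimal sketch of driving it from Julia, assuming the tool is on your PATH and the bitcode was produced elsewhere (e.g. dumped from the Julia compiler):

```julia
# Sketch: round-tripping between LLVM bitcode and SPIR-V with the
# Khronos SPIRV-LLVM-Translator's `llvm-spirv` tool (assumed on PATH).
run(`llvm-spirv kernel.bc -o kernel.spv`)     # LLVM bitcode -> SPIR-V
run(`llvm-spirv -r kernel.spv -o kernel.bc`)  # SPIR-V -> LLVM bitcode
```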
It will be interesting to see where Intel’s efforts end up, since their SYCL work uses SPIR-V as an intermediate representation: https://github.com/intel/llvm/blob/sycl/sycl/ReleaseNotes.md and https://github.com/intel/llvm/tree/sycl/llvm-spirv and the OpenCL 2.1 Reference Pages.
OpenCL 2.1 is only really supported by Intel, in their newish GPU driver and their beta CPU driver, so there hasn’t been a lot of time or motivation to do something similar to CUDAnative.jl.
@jpsamaroo has been working hard on https://github.com/JuliaGPU/AMDGPUnative.jl, but that is targeting HSA right now. In theory we could target OpenCL as well (and load binaries, not IL), which may be attractive for Mac users; alas, neither Julian nor I are Mac users, so we are not really able to work on that.
I saw the work on AMDGPUnative.jl, but I am confused; it seemed like AMD was moving away from HSA when it announced that HCC is being deprecated? The focus is now just on HIP… but then I could be mistaken. Their APIs are well-intentioned but seem like a mess to me.
I thought SPIR-V was supported by all vendors by way of its association with Vulkan? All shaders pre-compile to SPIR-V before runtime, whether they’re HLSL or GLSL, and Vulkan has Nvidia driver support…?
I believe there’s some speculation that Vulkan may eventually absorb OpenCL too?
My thought wasn’t really about OpenCL so much as using Julia for graphics shaders, and perhaps doing unique things with the compute layer in lieu of OpenCL, or perhaps in combination (there are examples out there of people abusing compute shaders for GPGPU).
I think Khronos is putting a lot of work into LLVM with regard to Vulkan/SPIR-V. Your dream may actually not be dead…
HCC is really just an extra layer of C++ syntactic sugar on top of regular C++ that’s fed into Clang (and thus LLVM); I think this was just a stop-gap measure that they set up before HIP was ready to roll. Regardless, both HCC and HIP target HSA, as HSA is just the runtime and kernel-launch functionality that gets compiled code running on a physical device. AMDGPUnative.jl serves a similar purpose to HCC or HIP in this regard, targeting HSA as its runtime (via HSARuntime.jl), just with Julia as the high-level language instead of C++.
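As a rough sketch of what that looks like in practice (API names as of the current README; treat as illustrative rather than stable):

```julia
using AMDGPUnative, HSARuntime

# Kernel indexing mirrors CUDAnative: workitemIdx() is the HSA
# analogue of threadIdx().
function vadd!(c, a, b)
    i = workitemIdx().x
    @inbounds c[i] = a[i] + b[i]
    return nothing
end

a = HSAArray(rand(Float32, 64))
b = HSAArray(rand(Float32, 64))
c = similar(a)
@roc groupsize=64 vadd!(c, a, b)  # launched through the HSA runtime
```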
Ah, I get it now. How does that intersect with the ROCm supplementary libraries (e.g. rocFFT)? Is the idea to have them be “mixable”, as it is now with the Julia CUDA libraries?
In any case, it sounds like having NVPTX + HSA + SPIR-V Julia backend implementations would be quite complementary. That could go a long way toward supporting not only core numerics (native Nvidia/AMD GPGPU targeting, plus potentially OpenCL via SPIR-V), but also the visualization and simulation communities (a Julian shader language compiling to SPIR-V, injectable into packages like AbstractPlotting.jl)…
As someone who wanted to try out the GPUArrays ecosystem on an older (just pre-ROCm) GPU, this would be great to see!
To my knowledge, Codeplay’s ComputeCpp, POCL, and the Mesa driver stack on Linux all do/will accept OpenCL-flavoured SPIR-V and compile it down to PTX/AMDGCN/HSA/what have you (I believe Mesa even uses the LLVM-SPIR-V translator for part of this). https://github.com/kpet/clvk and clspv are also probably worth a look.
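If you want to poke at clspv, the basic workflow is roughly this (a sketch, assuming clspv and SPIRV-Tools are installed and on PATH):

```julia
# Sketch: compile an OpenCL C kernel to Vulkan-flavoured SPIR-V with
# clspv, then disassemble it with spirv-dis to inspect the module.
write("saxpy.cl", """
    __kernel void saxpy(__global float* y, __global const float* x, float a) {
        int i = get_global_id(0);
        y[i] += a * x[i];
    }
    """)
run(`clspv saxpy.cl -o saxpy.spv`)
run(`spirv-dis saxpy.spv`)  # print the SPIR-V disassembly
```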
As for Vulkan Compute, I noticed (thanks to @jekbradbury’s talk) that the MLIR folks have been working on a SPIR-V Dialect and Vulkan test bench. Not sure if that’s indicative of any longer term goals, but it’s certainly exciting to see.
Everything is modeled very similarly to how the CUDA-Julia ecosystem works; we’ll have ROCArrays which have dispatches for the various optimized BLAS, Sparse, FFT, etc. libraries.
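So, mirroring the CuArrays workflow, the hope is that something like this will just work (a hypothetical sketch; the package and function names are assumptions based on the CuArrays analogy, not a shipped API):

```julia
# Hypothetical sketch; `ROCArrays` and the dispatch plumbing are
# assumed by analogy with CuArrays, not a finished interface.
using ROCArrays

A = ROCArray(rand(Float32, 1024, 1024))
B = A * A        # intended to dispatch to rocBLAS
# fft(A)         # likewise intended to dispatch to rocFFT
```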