[ANN] Raycore.jl: High-Performance Ray Tracing for CPU and GPU

I’m excited to announce Raycore.jl, a high-performance ray-triangle intersection engine with BVH acceleration for both CPU and GPU execution in Julia.

Raycore will power a new ray tracing backend for Makie, bringing photorealistic rendering to the ecosystem.
We factored out the ray intersection engine since it can be used in many different areas, so we’re very curious to see what the community will create with it and how far we can push the performance over the years.
The package includes interactive tutorials covering everything from basics to advanced GPU optimization.

Key features:

  • Fast BVH construction and traversal
  • CPU and GPU support via KernelAbstractions.jl
  • Analysis tools for radiosity and thermal applications
  • Written in simple, pure Julia, so contributing should be much easier than with most other ray intersection libraries
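To give a flavor of the core operation such an engine accelerates, here is a minimal pure-Julia sketch of a single ray-triangle test using the well-known Möller–Trumbore algorithm. This is purely illustrative and is not Raycore.jl's actual API; a BVH exists to avoid running this test against every triangle in a scene.

```julia
# Möller–Trumbore ray/triangle intersection (illustrative, not Raycore's API).
using LinearAlgebra

# Returns the ray parameter t of the hit, or `nothing` on a miss.
function intersect_triangle(orig, dir, v0, v1, v2; eps = 1e-8)
    e1 = v1 .- v0
    e2 = v2 .- v0
    p  = cross(dir, e2)
    det = dot(e1, p)
    abs(det) < eps && return nothing        # ray parallel to triangle plane
    invdet = inv(det)
    tvec = orig .- v0
    u = dot(tvec, p) * invdet               # first barycentric coordinate
    (u < 0 || u > 1) && return nothing
    q = cross(tvec, e1)
    v = dot(dir, q) * invdet                # second barycentric coordinate
    (v < 0 || u + v > 1) && return nothing
    t = dot(e2, q) * invdet
    return t > eps ? t : nothing
end

# Ray from the origin along +z hits a triangle in the z = 1 plane at t = 1.
t = intersect_triangle([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                       [-1.0, -1.0, 1.0], [2.0, -1.0, 1.0], [-1.0, 2.0, 1.0])
```

A BVH-accelerated engine runs this inner loop only for the handful of triangles whose bounding boxes the ray actually enters.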

Read the full announcement: https://makie.org/website/blogposts/raycore/
GitHub: https://github.com/JuliaGeometry/Raycore.jl
Docs: Raycore.jl documentation


64 Likes

Can Raycore use RTX-cores and similar (e.g. via Vulkan.jl)?

I guess it’s possible, but the question is how we’d integrate it.
As long as we don't have proper Julia → SPIR-V compilation, we wouldn't be able to hand it Julia callbacks for the shading. Once we do have that (and I've heard it may be coming), it would certainly be a killer feature.
Meanwhile, we could create an API compatible pure Vulkan + GLSL backend, which then has to implement shading in another language, but should be super fast.

I’m asking because I recently came into contact with some people who use Vulkan (in C++ with a Python frontend) for vendor-independent (but RTX-capable) optics simulations for physics. I think the Vulkan.jl wrapper exposes the relevant API calls for BVH and batched ray-triangle intersection (I’m really not an expert though), but I don’t know what the cost/latency of copying data back and forth between the backends (with shading and so on done via KernelAbstractions) would be. If we had a direct SPIR-V KA backend on top of Vulkan.jl …

1 Like

I definitely think we could have something like that in the future, completely in Julia :slight_smile:
Not sure when we’ll manage to get proper SPIRV support running, but the puzzle pieces seem to be all there.

2 Likes

We do have this for OpenCL-flavored SPIR-V already, but unfortunately that is very different from GLSL-flavored SPIR-V. There is https://github.com/serenity4/SPIRV.jl (read, process and generate SPIR-V code from Julia), however; @serenity4 might be able to comment on how feasible it is to use that for implementing a ray tracer.

This looks cool. Does it support 64 bit floats, which is necessary for wave optics simulations?

My understanding is that 64-bit float performance on lower-cost GPUs (under $2000) is much lower than 32-bit performance.

Does Raycore.jl running 64-bit code on the GPU still outperform 64-bit CPU code?

I’m doing wave optics all the time with 32 bits (32-bit real part + 32-bit imaginary part). I’m curious why we would need 64 bits?

Wow! This is so cool!
Thanks a lot for sharing :heart_eyes::sparkles:

1 Like

I didn’t do a detailed error analysis but here’s the idea. Assume you have a lens focusing on a point source 1m away and that you want to optimize the lens based on optical path difference between rays emanating from that point.

Assume 550nm wavelength and that you want no more than 1/10 wave of error: the optical path difference between all the rays should be less than 55nm.

Assume ray one has optical path length 1m. Then all the other rays will be 1m ± 55nm at most. Here’s what happens when you add 55nm to 1m in Float32:

julia> using Unitful,Unitful.DefaultSymbols

julia> (1.0f0m + 55f0nm) - 1.0m #55nm path difference completely lost because of insufficient precision
0.0 m

Float32 doesn’t have enough bits to represent such a small difference. You have zero bits of precision in your answer.
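The cutoff can be made explicit by comparing the Float32 spacing near 1.0 against the 55 nm (5.5e-8 m) path difference, and repeating the same sum in Float64:

```julia
# Float32 spacing near 1.0 vs. a 55 nm (5.5e-8 m) path difference:
println(eps(1.0f0))                 # 1.1920929f-7 -- coarser than 5.5e-8
println(eps(1.0))                   # 2.220446049250313e-16 -- far finer
println((1.0f0 + 5.5f-8) - 1.0f0)   # 0.0 -- the difference is lost in Float32
println((1.0 + 5.5e-8) - 1.0)       # ≈ 5.5e-8 -- preserved in Float64
```

Since 5.5e-8 is below half the Float32 spacing at 1.0, the sum rounds back to exactly 1.0f0, which is what the Unitful example above shows.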

This assumes no roundoff error in any of the ray/surface intersections or refraction/reflection, which would make the problem worse.

Maybe the optical systems you have been simulating have been on a different scale, or the required precision wasn’t quite as high so you never encountered this problem.

4 Likes

I wonder if this library could be efficiently used for physics computations. In GitHub - triscale-innov/Rayons.jl we basically need to compute ray/surface intersections and recursively launch 2 new rays (refracted+reflected) with the associated energy share.
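The per-intersection physics described here (spawn a reflected and a refracted ray with an energy split) can be sketched in a few lines. This is not Rayons.jl's code, just the standard Snell/Fresnel step assuming unpolarized light:

```julia
# Per-hit ray split (illustrative sketch, not Rayons.jl's API): given an
# incident direction, surface normal, and refractive indices n1 -> n2,
# return reflected/refracted directions and the Fresnel energy shares.
using LinearAlgebra

function split_ray(dir, normal, n1, n2)
    d = normalize(dir); n = normalize(normal)
    cosi = -dot(d, n)
    reflected = d .+ 2cosi .* n
    η = n1 / n2
    sin2t = η^2 * (1 - cosi^2)
    sin2t > 1 && return reflected, nothing, 1.0, 0.0   # total internal reflection
    cost = sqrt(1 - sin2t)
    refracted = η .* d .+ (η * cosi - cost) .* n
    # Fresnel reflectance, averaged over s- and p-polarization (unpolarized)
    rs = ((n1 * cosi - n2 * cost) / (n1 * cosi + n2 * cost))^2
    rp = ((n1 * cost - n2 * cosi) / (n1 * cost + n2 * cosi))^2
    R = (rs + rp) / 2
    return reflected, refracted, R, 1 - R
end

# Normal incidence on glass (n = 1.5): R = ((1 - 1.5)/(1 + 1.5))^2 = 0.04
refl, refr, R, T = split_ray([0.0, 0.0, -1.0], [0.0, 0.0, 1.0], 1.0, 1.5)
```

A recursive tracer would launch both returned rays, carrying `R` and `T` as their energy shares, until the energy drops below some threshold.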

1 Like

Ok, got it.

I think in most wave-optical simulations I encounter, people never explicitly model a lens as a phase-transforming object (exp(1im * k / (2 * f) * (x^2 + y^2))), since the sampling is basically impossible for anything larger than 1mm.
So most approaches treat lenses as Fourier transformers and then apply angular spectrum propagation in the near field, where you do not have ~meter distances but rather propagation distances of ~millimeters.
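For reference, the angular spectrum method amounts to multiplying the field's spectrum by a propagation phase and transforming back. A minimal 1D sketch (hand-rolled DFT to stay dependency-free; real codes use FFTW and 2D fields):

```julia
# Angular spectrum propagation, 1D illustrative sketch (not from any of the
# packages discussed here). `u` is the sampled field, `λ` the wavelength,
# `z` the propagation distance, `dx` the sample spacing (all in meters).
function dft(x; inverse = false)
    N = length(x); s = inverse ? 1 : -1
    [sum(x[n+1] * cis(s * 2π * k * n / N) for n in 0:N-1) / (inverse ? N : 1)
     for k in 0:N-1]
end

function angular_spectrum(u, λ, z, dx)
    N = length(u)
    # spatial frequencies in FFT order: 0, 1, ..., N/2-1, -N/2, ..., -1
    fx = [(k <= N ÷ 2 ? k - 1 : k - 1 - N) / (N * dx) for k in 1:N]
    # longitudinal wavenumber; complex sqrt handles evanescent components
    kz = [2π * sqrt(complex(1 / λ^2 - f^2)) for f in fx]
    dft(dft(u) .* exp.(1im .* kz .* z); inverse = true)
end

# Propagating by z = 0 returns the input field (up to roundoff).
xs = range(-1, 1, length = 64)
u  = ComplexF32.(exp.(-xs .^ 2))
u0 = angular_spectrum(u, 550f-9, 0f0, 1f-6)
```

At millimeter distances the phase kz·z stays small enough that Float32 fields work well, which matches the practice described above.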

Especially in large-scale (inverse) problems such as diffraction tomography or holography (see this recent example), everyone is using Float32 on GPUs.