Ray Tracing in a week-end - Julia vs SIMD-optimized C++

Point (1) can be solved with a constructor like Sphere(v,d,m) = Sphere{typeof(m)}(v,d,m)
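A minimal, self-contained sketch of that pattern (hypothetical simplified names and field types, not the notebook's actual definitions): the outer constructor infers the type parameter from the material argument, so the struct stays concretely typed without callers spelling the parameter out.

```julia
# Hypothetical simplified types for illustration only.
struct Metal2 end  # stand-in material

struct Sphere2{M}
    center::NTuple{3,Float64}
    radius::Float64
    mat::M
end

# Outer constructor: infer the type parameter from the material argument,
# so callers never need to write Sphere2{Metal2}(...) explicitly.
Sphere2(c, r, m) = Sphere2{typeof(m)}(c, r, m)

s = Sphere2((0.0, 0.0, -1.0), 0.5, Metal2())
# typeof(s) is the concrete Sphere2{Metal2}, so all fields are fully typed.
```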

Prompted by (2), I played a little with the code. I would do two things with the HitRecord type: make it immutable and parametrize it on the material as well. In the end I did this:

# ╔═╡ 3b570d37-f407-41d8-b8a0-a0af4d85b14d
"Record a hit between a ray and an object's surface"
struct HitRecord{M}
	t::Float32 # distance along the ray from its origin to the intersection with a surface.
	# If t==Inf32, there was no hit, and all following values are undefined!
	p::SVector{3,Float32} # point of intersection between an object's surface and ray
	n⃗::SVector{3,Float32} # local normal (see diagram below)
	# If true, our ray hit from outside to the front of the surface.
	# If false, the ray hit from within.
	front_face::Bool
	mat::M
end
HitRecord() = HitRecord{NoMaterial}(Inf32, zero(SVector{3,Float32}), zero(SVector{3,Float32}), false, NoMaterial()) # no hit!
HitRecord(t,p,n⃗,front_face,mat) = HitRecord{typeof(mat)}(t,p,n⃗,front_face,mat)

# ╔═╡ 138bb5b6-0f45-4f13-8339-5110eb7cd1ff
struct Sphere{M} <: Hittable
	center::SVector{3,Float32}
	radius::Float32
	mat::M
end
Sphere(c,r,m) = Sphere{typeof(m)}(c,r,m)

(note the constructor that solves problem (1) here as well)

That, unfortunately, didn’t improve performance, and the reason why is probably more complicated: the hit function likely still reaches a dynamic-dispatch point, so making these types concrete doesn’t help on its own. In general, though, I think those are good practices.

One alternative would probably be a single, more general concrete “Material” struct that carries the properties of all materials plus a flag, to avoid runtime dispatch. But that is less elegant than what you have.
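A hedged sketch of that flattened-material idea (hypothetical names, not code from the notebook): one concrete struct carries the union of all material properties plus a kind flag, so material handling becomes a branch instead of a dynamic dispatch.

```julia
# Hypothetical flattened material: all properties in one concrete struct.
struct FlatMaterial
    kind::UInt8                # 0x01=lambertian, 0x02=metal, 0x03=dielectric
    albedo::NTuple{3,Float32}  # used by lambertian and metal
    fuzz::Float32              # used by metal only
    ir::Float32                # used by dielectric only
end

lambertian(albedo) = FlatMaterial(0x01, albedo, 0f0, 0f0)
metal(albedo, fuzz=0f0) = FlatMaterial(0x02, albedo, fuzz, 0f0)
dielectric(ir) = FlatMaterial(0x03, (0f0, 0f0, 0f0), 0f0, ir)

# A scatter-like routine can branch on the flag, with no runtime dispatch:
material_kind(m::FlatMaterial) =
    m.kind == 0x01 ? :lambertian :
    m.kind == 0x02 ? :metal : :dielectric
```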


I’m almost done merging @woclass’s Vec3{T} optimization… I hope to push the change tomorrow. I confirm that Float64 runs ~7% faster. I ran the full benchmark and got this result:

Previous measurement, after lots of other optimizations including @inline-ing lots of stuff:
6018.101653 seconds (5.39 G allocations: 241.139 GiB, 0.21% gc time) (1.67 hours)

New measurement after 3 major changes:
  1. @woclass’s rand(Float32) to avoid Float64s (expected to provide a 2.2% speed-up)
  2. @woclass’s “use an alias instead of a new struct”, i.e. const HittableList = Vector{Hittable}
  3. @woclass’s Vec3{T} with T=Float64 (7.8% speed-up!)
5268.175362 seconds (4.79 G allocations: 357.005 GiB, 0.47% gc time) (1.46 hours)

That’s a ~14% overall speed-up, and I learnt a lot from your suggestions. Thanks everyone!

Next I plan to:

  1. Run the C++ benchmark on my PC and confirm exactly which algorithmic changes they have that I don’t (for a fairer comparison)
  2. Read that blog post on rand performance and try it
  3. Read up on the SIMD libraries and try to figure out the best approach (I’ll welcome any suggestions you have!)
  4. Probably pre-allocate the ray bundles/paths (I don’t know what they are called in the literature) to later simplify the GPU (and probably the SIMD) implementation.



If, instead of an abstract material type, you use a union type:

struct NoMaterial end
const _no_material = NoMaterial()
const _y_up = @SVector[0f0,1f0,0f0]

struct Metal
	albedo::SVector{3,Float32}
	fuzz::Float32 # how big the sphere used to generate fuzzy reflection rays. 0=none
	Metal(a, f=0f0) = new(a, f)
end

struct Lambertian
	albedo::SVector{3,Float32}
end

struct Dielectric
	ir::Float32 # index of refraction, i.e. η.
end

const Material = Union{NoMaterial,Metal,Lambertian,Dielectric}

and make all structs immutable, you get almost no allocations in the first render function:

julia> c = default_camera()
Camera(Float32[0.0, 0.0, 0.0], Float32[-1.7777778, -1.0, -1.0], Float32[3.5555556, 0.0, 0.0], Float32[0.0, 2.0, 0.0], Float32[1.0, 0.0, 0.0], Float32[0.0, 1.0, 0.0], Float32[0.0, 0.0, 1.0], 0.0f0)

julia> s = scene_4_spheres()
HittableList(Hittable[Sphere(Float32[0.0, 0.0, -1.0], 0.5f0, Lambertian(Float32[0.7, 0.3, 0.3])), Sphere(Float32[0.0, -100.5, -1.0], 100.0f0, Lambertian(Float32[0.8, 0.8, 0.0])), Sphere(Float32[-1.0, 0.0, -1.0], 0.5f0, Metal(Float32[0.8, 0.8, 0.8], 0.3f0)), Sphere(Float32[1.0, 0.0, -1.0], 0.5f0, Metal(Float32[0.8, 0.6, 0.2], 0.8f0))])

julia> @benchmark render($s,$c, 96, 16)
BenchmarkTools.Trial: 187 samples with 1 evaluation.
 Range (min … max):  26.035 ms …  36.614 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     26.559 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   26.749 ms ± 929.174 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

    ▃▂▁▁▄▃▅█ ▆▂▁▁  ▁                                            
  ▆▃█████████████▄▅█▄▄▄▄▁▃▁▃▁▃▃▄▃▃▁▁▁▁▁▁▁▃▁▃▁▁▁▁▁▁▁▁▃▃▃▄▁▁▁▁▁▃ ▃
  26 ms           Histogram: frequency by time         29.2 ms <

 Memory estimate: 60.80 KiB, allocs estimate: 2.

This almost certainly means that we got rid of dynamic dispatch and type instabilities. Performance in this small example does not improve (it is actually slightly worse here), but for a larger problem it may be much better. That remains to be tested.

The use of a union instead of an abstract type is another workaround that nudges the compiler into automatic union splitting, but again it is less elegant.
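A toy illustration of union splitting (illustrative types, unrelated to the ray tracer): when a container's element type is a small Union of concrete types, the compiler can replace dynamic dispatch with a branch over the known alternatives.

```julia
struct A
    x::Float64
end
struct B
    x::Float64
end
const AB = Union{A,B}

val(a::A) = a.x
val(b::B) = 2 * b.x

# Vector{AB}: the element type is a small Union, so the call to `val`
# inside `sum` can be union-split into an if/else over A and B
# instead of a dynamic method lookup per element.
xs = AB[A(1.0), B(2.0), A(3.0)]
total = sum(val, xs)  # 1.0 + 4.0 + 3.0
```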

For the record, with the abstract material type, I had:

julia> @benchmark render($s,$c, 96, 16)
BenchmarkTools.Trial: 240 samples with 1 evaluation.
 Range (min … max):  19.864 ms … 29.133 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     20.285 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   20.910 ms ±  1.521 ms  ┊ GC (mean ± σ):  1.83% ± 4.79%

  ██▄▂▂▃▆▁ ▁                                                   
  ██████████▇▆█▁▆▆▄▄▆▄▄█▆▆▁▇▁▄▆▁▁▄▁▁▁▁▄▄▆▄▄▇▆▇▄▁▁▇▁▆▄▄▁▄▁▁▁▁▄ ▆
  19.9 ms      Histogram: log(frequency) by time      25.9 ms <

 Memory estimate: 6.40 MiB, allocs estimate: 138409.

With the union type it is slower in this small test, but note the huge difference in the allocations.


FYI, I finished integrating @woclass’s changes and pushed them to GitHub, in the proto.jl file. I plan to cut-and-paste them into the Pluto.jl notebook next week-end.


Thanks DNF for bringing this up! I tried Xoroshiro, here’s my latest commit:

Use Xoroshiro128Plus RNG
- large speed-up for small random functions…
- small speed up in some small scenes
- …but no noticeable difference in large scenes with high sample-count

… so the bottleneck appears to lie somewhere else (HitRecord allocations?)

Thanks anyway!


My quick test suggested that the bottleneck is indeed there.

Profiling is probably what is needed here. It is easy to do with the VS Code extension.
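For reference, a minimal sketch of that workflow with the stdlib Profile module (the workload below is a stand-in; in the notebook you would profile the actual render call instead, and the Julia VS Code extension's @profview shows the same data as a flame graph):

```julia
using Profile

# Stand-in workload; substitute the real render call when profiling the tracer.
work() = sum(sqrt(i) for i in 1:10^7)

work()            # run once first so compilation time isn't profiled
Profile.clear()
@profile work()
Profile.print(format=:flat, sortedby=:count)  # hottest functions first
```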


My latest commit:

4.35X speed-up using 16 threads
- 20.16 minutes for the 1920x1080x1000 random_spheres!
- Easiest optimization ever… not even a whole line of code! :slight_smile:
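For context, the "not even a whole line" change is presumably a Threads.@threads annotation on the outer pixel loop. A self-contained sketch of the idea (illustrative skeleton, not the notebook's actual render function):

```julia
# Illustrative skeleton: parallelize over image rows, which are independent.
# Launch with e.g. `julia -t 16` to actually use 16 threads.
function render_rows!(img::Matrix{Float32})
    ny, nx = size(img)
    Threads.@threads for j in 1:ny      # the one-annotation optimization
        for i in 1:nx
            img[j, i] = Float32(i + j)  # stand-in for per-pixel ray tracing
        end
    end
    return img
end

img = render_rows!(zeros(Float32, 4, 8))
```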

I tracked down the SIMD-optimized C++ implementation again (GitHub - GPSnoopy/RayTracingInOneWeekend: Implementation of Peter Shirley's Ray Tracing In One Weekend book) and it turns out I was remembering incorrectly. GPSnoopy reports:

The cover picture was rendered in about 1h running 16 threads on a Core i9-9900K @ 4.7GHz at 3840x2160 with 1024 samples per pixel. Clocking the CPU at 4.8GHz when running AVX code would cause it to reach over 90C, so be careful when running this code on an overclocked processor.

I tried to update my original description accordingly, but it looks like I don’t have permission.

My tests were at 1920x1080 (i.e. 4× fewer pixels), so my current 20-minute record on a Ryzen 2 is probably roughly equivalent to his 1 hour on his overpriced i9. :wink:

I’ll try to run his code next; hopefully their C++ compiler at least attempts to be competitive on my AMD processor.

Stay tuned! :slight_smile:


Just tried your Pluto notebook; it's a really cool way to teach RT (and specifically the Ray Tracing in One Weekend series)! I do wonder how well the code edits and optimizations you’re doing above work in the Pluto notebook. Or are you doing those “offline”, running the tests, and only then checking the notebook in again?


how well the code edits and optimizations you’re doing above work in the Pluto notebook. Or are you doing those “offline”, running the tests, and only then checking the notebook in again?

I did all the initial implementation in Pluto.jl. I found it worked pretty well, although it was getting long (and Pluto doesn’t let you cut-and-paste more than one cell at a time, which made reorganizing the code painful).

Now I’m using VS Code to edit proto.jl so I can run @btime and get precise measurements. I checked long ago and this didn’t seem to work in Pluto.

Later today I plan to update the Pluto code with the massive (~12X) speed-ups from the past few days…

I think Pluto does allow you to copy/paste multiple cells. You just have to start a mouse drag outside a cell and then move the mouse over a range of cells; they should highlight to indicate that they are in the selection. Then you can use the usual keyboard clipboard commands (for me, Cmd-C and Cmd-V on a Mac).

I’ve been enjoying this thread. So many twists and turns. I feel like the journey from the starting code through all the changes that incrementally improved performance is a great narrative, and would make for a really good YouTube video.

Thinking out loud: perhaps it would even be good to have a session at JuliaCon for such narratives.


Funny, I was thinking of asking whether, at the end, a summary could be made of the changes that led to the ~12x speed-up.

I see that speed-up as both good and not-so-good news. Good because, well, a 10-fold speed-up is fantastic. Not-so-good because I feel I belong to those who think they have clever code, only to find that deep tricks can make a huge difference.


This is why all benchmarks comparing compiled languages end up being meaningless when judged only by final performance. If some code is heavily optimized in one language, a certain effort is needed elsewhere to achieve something reasonable (within a factor of ~2). But the effort to get the last bit of performance can be huge and, worse, can be platform-dependent, test-case-dependent, etc.

That other thread about ray tracing was very curious in this respect: low-level tricks were used in the Nim implementation, and with those it beat all the other languages. However, that held only for the exact problem being benchmarked; change the scene somewhat and the profiles change.

What I think is important in a study like this is to go back and see what really made a difference. For example, inlining which function? Why? I have often had the experience that much of the “code optimization” I thought was useful was actually just bad benchmarking, and that a few important tips and changes make the greatest difference.


Defining a global:

using RandomNumbers.Xorshifts
const _rng = Xoroshiro128Plus()

is almost certainly a bad idea in multithreaded code.
It is not thread-safe, and it will also hurt performance.

Also, it is disappointing that the task-local RNG in Julia 1.7 is slower than this, but Random.Xoshiro works well.

So I’d suggest allocating one Xoroshiro128Plus() or Random.Xoshiro per thread and passing it around.
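A sketch of that suggestion using the stdlib Random.Xoshiro to stay dependency-free (RandomNumbers.jl's Xoroshiro128Plus would slot in the same way). Note the assumption that each loop iteration stays on one thread, which held for @threads at the time:

```julia
using Random

# One RNG per thread, created up front; each thread only touches its own.
function mean_samples(nrows::Int, nsamples::Int)
    rngs = [Xoshiro(1234 + t) for t in 1:Threads.nthreads()]
    out = zeros(Float64, nrows)
    Threads.@threads for j in 1:nrows
        rng = rngs[Threads.threadid()]  # no sharing, so no data races
        acc = 0.0
        for _ in 1:nsamples
            acc += rand(rng)            # stand-in for per-sample ray jitter
        end
        out[j] = acc / nsamples
    end
    return out
end

means = mean_samples(8, 10_000)  # each entry should be ≈ 0.5
```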


If you are curious, I implemented ray tracing in Julia some time ago as well: GitHub - pxl-th/Trace.jl: Ray tracing,
mostly following this excellent book: Physically Based Rendering: From Theory to Implementation.

It also has multithreading support and a BVH acceleration structure. I also spent some time trying to improve performance; most of the improvement came from eliminating allocations in the ray → object intersection routines. But there is still a lot that can be improved. Ideally I thought of writing some kind of custom allocator to remove all allocations, but I got lazy shortly after :slight_smile: .

It is able to simulate caustic effects using Stochastic Progressive Photon Mapping, though, which was my main goal (see image below).


Heh, I was wondering whether porting PBRT to Julia would give cleaner (and fewer lines of) code while still providing comparable performance. Impressive what you managed to achieve!

And since we’re posting ray-tracing-in-Julia projects, here’s another one (substantially less ambitious): an experiment in porting existing Python ray-tracing code (don’t ask) to Julia. Back then I wrote down the details of the optimization journey.


I’m up for a JuliaCon talk if others are interested. :slight_smile: I’m planning to present the results internally at AMD to hopefully rally more Julia supporters.


I’d like that as well, but I didn’t have time to research how to do it… if you happen to know, can you show me?

IMHO, just proving that, with a moderate level of expertise, it’s possible to match or exceed the performance of reasonably optimized C++ code is a necessary step to convince C++-centric organizations like my employer to take a closer look and back new projects that want to use Julia.

BTW, I now have a more concrete comparison, with C++ (GCC) and ISPC implementations. The details are here:

The conclusion is:

This Julia implementation, with 14 threads, runs in 22m21s , i.e. now a bit faster than the equivalent GCC C++ version! (22m59s)

However the ISPC version is incredibly fast in comparison: 6m5s.

I also briefly reviewed the code of each version, to confirm that we’re implementing the same thing, i.e. same max number of bounces, no BVH, etc.

I also tried enabling various optimizer settings, -O3 -ffast-math -march=znver2, and peppered the code with @fastmath, @simd, and @inbounds; it made no difference. Memory allocations/traversal still seem to be the primary bottleneck.


Any chance we can get a disassembly of the ISPC version? I’d be curious to see what it does differently.
