Ignoring GPU rendering, I think the main thing you need is a function that tells you the color at a given position. For a uniform 3D cartesian grid like the one a 3D array provides, that's relatively easy: you find the 8 grid points around the sample position and nest linear interpolations. A tetrahedron is probably also some combination of linear interpolations. Beyond that, no clue. If you want to try working it out, here's roughly what `:absorptionrgba` would look like for a single Rect3:
```julia
using Colors
using GeometryBasics

function get_interpolated_color(sample_pos, rect_positions, rect_colors)
    # This relies on the order of vertices: first must be the min corner, last the max corner
    mini = first(rect_positions)
    maxi = last(rect_positions)
    if sample_pos in Rect3f(mini, maxi .- mini)
        color = mapreduce(+, rect_positions, rect_colors) do vertex_pos, color
            # trilinear weight: 1 at the vertex itself, falling off linearly towards the others
            interp = 1 .- abs.(sample_pos - vertex_pos) ./ (maxi - mini)
            return color * prod(interp)
        end
        return color
    else
        return Vec4f(0, 0, 0, 0)
    end
end

rect_positions = coordinates(Rect3f(-0.5, -0.5, -0.5, 1, 1, 1))
rect_colors = [Vec4f(0.5 + 0.5x, 0.5 + 0.5y, 0.5 + 0.5z, 0.1) for (x, y, z) in rect_positions]

width = 400
height = 300
img = [RGBA(0, 0, 0, 0) for x in 1:width, y in 1:height]
R = Mat3f(0.7886752, -0.21132484, -0.57735026, -0.21132484, 0.7886752, -0.57735026, 0.57735026, 0.57735026, 0.5773503)
steps = 100
step_weight = 1.0

for (i, x) in enumerate(range(-1, 1, width))
    @info i
    for (j, y) in enumerate(range(-1, 1, height))
        accumulated_color = Vec3f(0)
        remaining_intensity = 1.0
        # march along the view direction, compositing samples front to back
        for z in range(-1, 1, steps)
            pos = R * Point3f(x, y, z)
            sample_rgba = get_interpolated_color(pos, rect_positions, rect_colors)
            sample_weight = step_weight * sample_rgba[4]
            remaining_intensity *= (1 - sample_weight)
            sample_color = sample_weight * remaining_intensity * Vec3f(sample_rgba)
            accumulated_color += sample_color
            remaining_intensity < 0.01 && break
        end
        img[i, j] = RGBA(clamp.(accumulated_color, 0, 1)..., 1 - remaining_intensity)
    end
end

using GLMakie
image(img)
```
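For reference, the "nest linear interpolations" part mentioned at the top could look like this for a plain 3D array on a uniform grid. This is a sketch with made-up helper names (`lerp`, `trilinear`), not a Makie API, and it assumes the sample position is given as a fractional array index:

```julia
# linear interpolation between a and b with parameter t in [0, 1]
lerp(a, b, t) = a + t * (b - a)

# Trilinear interpolation in a 3D array `A` at a fractional index (x, y, z),
# with 1 <= x <= size(A, 1) etc. (hypothetical helper, not a Makie function)
function trilinear(A::AbstractArray{<:Real,3}, x, y, z)
    # integer corner below the sample, clamped so i+1 etc. stay in bounds
    i = clamp(floor(Int, x), 1, size(A, 1) - 1)
    j = clamp(floor(Int, y), 1, size(A, 2) - 1)
    k = clamp(floor(Int, z), 1, size(A, 3) - 1)
    # fractional offsets inside the cell
    tx, ty, tz = x - i, y - j, z - k
    # interpolate along x on the 4 cell edges, then along y, then along z
    c00 = lerp(A[i, j,   k  ], A[i+1, j,   k  ], tx)
    c10 = lerp(A[i, j+1, k  ], A[i+1, j+1, k  ], tx)
    c01 = lerp(A[i, j,   k+1], A[i+1, j,   k+1], tx)
    c11 = lerp(A[i, j+1, k+1], A[i+1, j+1, k+1], tx)
    c0 = lerp(c00, c10, ty)
    c1 = lerp(c01, c11, ty)
    return lerp(c0, c1, tz)
end
```

This is exactly what a 3D texture with linear filtering does for you on the GPU.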
You'd have to swap out `rect_positions` and `rect_colors` for the vertex coordinates and colors of your mesh and write a `get_interpolated_color()` function that returns the appropriate colors for your mesh. If you have that, though, you could also just resample your data into a 3D array and run the ray tracing part on the GPU with Makie volume plots.
When it comes to doing this on the GPU, you also have the problem that it doesn't fit the normal rendering pipeline. Regardless of which graphics API you use, the normal pipeline runs a vertex shader once per vertex, then the API assembles that data into faces, figures out which pixels they cover, and runs a fragment shader to draw those pixels.
Makie essentially draws the bounding box of the volume plot and does ray tracing inside it for the fragment shader. To do the ray tracing you’ll need all volume data. If it’s on a regular cartesian grid you can simply put it in a 3D texture. That’ll also handle the interpolation for you. For anything else you’ll need to do the interpolation yourself and pack and unpack it yourself. (This could be a bit simpler with compute shaders but Makie doesn’t support them and you’d still need to handle data passthrough and interpolation.)
What stops us from doing this is mainly that it’s a lot of effort for something very specific.