Composite image of multiple channels

I have 3 images, each a separate channel of the same sample.

c1 = rand(100,100)
c2 = rand(100,100)
c3 = rand(100,100)

I’d like to make a custom color gradient for each channel:

using Plots
pal = palette(:batlow, 3)
grad1 = cgrad([:black, pal[1]])
grad2 = cgrad([:black, pal[2]])
grad3 = cgrad([:black, pal[3]])

and plot them each separately and together in a composite image:

fig = plot(layout = grid(1,4),
    size=(2000,500),
    legend=false,
    colorbar=false,
    xticks=[],
    yticks=[],
    aspect_ratio=:equal,
    margin=5mm)

heatmap!(fig[1], c1, color=grad1)
heatmap!(fig[2], c2, color=grad2)
heatmap!(fig[3], c3, color=grad3)

But I’m not sure how to combine the channels to make the colored composite. Any tips?

An image of channels c1, c2, c3 can be defined as follows:

using Images
img = cat(c1, c2, c3; dims=3)
nimg = permutedims(img, (3, 1, 2))
img_rgb = colorview(RGB, nimg)

Is this the kind of image you intended to get?
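As an aside, ImageCore's colorview also accepts the channel arrays directly, which avoids the cat/permutedims step (a minimal sketch, assuming the three channels have the same size and element type):

```julia
using Images

# three single-channel matrices standing in for c1, c2, c3
c1, c2, c3 = rand(100, 100), rand(100, 100), rand(100, 100)

# colorview can take the channels directly:
img_rgb = colorview(RGB, c1, c2, c3)

size(img_rgb)  # (100, 100)
```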

Yes, I believe so, but how can I apply custom colorscales to each channel?

You have already defined the three heatmaps by mapping the values in c1, c2, and c3, respectively, to gradients.

Is there any reason not to use red, green, and blue for the individual channels? Otherwise there’s not really a sensible way to add three channels together without resorting to a luminance-based colorspace like YCbCr.

Accessibility: there are great palettes optimized for perception by people both with and without full color vision, e.g. this resource.

Accessibility is a worthy consideration, but colorblindness typically involves the absence of one of the three types of color-perceiving cells, and you can’t project a space onto a lower-dimensional subspace without losing information. Below are the averages of the pairwise combinations of your chosen color gradients ((grad1, grad2), (grad2, grad3), (grad1, grad3)), each with the remaining gradient value held constant, and with the protanopic version directly below:


I don’t think that adds much perceptual value to a colorblind user, and the trichromatic majority are better-served by red/green/blue:

They’re in the gradients, but how do I get those gradients to display with colorview? When I use the snippet you shared, it simply makes it rainbow-colored.

I suppose that you intend to get an image in which each pixel has the corresponding color from the heatmap. But the ColorGradient grad1, for example, is just a 2-element Array{RGBA{Float64},1}. heatmap associates each c1-value with the corresponding color in the ColorGradient, obtained by interpolating the two end colors. However, we cannot access those interpolated color codes to associate an image with the heatmap.

When I use the snippet you shared, it simply makes it rainbow-colored:
I defined an image having as Red, Green, and Blue components the values in c1, c2, and c3, respectively, as an answer to the vaguely formulated question: “But I’m not sure how to combine the channels to make the colored composite. Any tips?”

Yes, thanks for pointing out the ambiguity in the original question and for your time helping with this! … I’ll use an example with what I have in ImageJ to make it as concrete as possible. I think you summarized it well with “I suppose that you intend to get an image having in each pixel the corresponding color in the heatmap.”

I have two images with custom color mapping applied by making an interpolated LUT from black to the max RGB color value using ImageJ’s LUT Editor, similar to the gradient created with e.g. cgrad([:black, pal[1]]):


And a composite made using the Merge Channels… tool:

(Have not settled on these colors as final choices)

How can I achieve a similar effect in Julia? That is, apply a custom LUT/gradient from black to a custom color choice (if not with cgrad, then some other way?) to each image, then blend the images (in ImageJ, it looks like simple RGB addition).

You can associate images to heatmaps defined by a matrix and a ColorGradient,
as in your example, as follows:

If you have an arbitrary matrix A = -3 .+ 5 * rand(100, 100) for your heatmap
and

Am, AM = extrema(A)

are its extrema, then the associated normalized matrix is:

nA = (A .- Am) / (AM - Am)
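Taken together, the normalization can be checked end to end (a minimal self-contained sketch using the same A as above):

```julia
A = -3 .+ 5 * rand(100, 100)    # arbitrary matrix for the heatmap
Am, AM = extrema(A)
nA = (A .- Am) / (AM - Am)      # normalize to the unit interval

extrema(nA)  # (0.0, 1.0): the extrema map exactly to the ends of [0, 1]
```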

Hence I explain how to associate an image to a heatmap, working with normalized matrices:

nm1 = rand(100,100)
nm2 = rand(100,100)

Let us define img1, img2, the images representing the heatmaps associated with nm1, nm2 and the gradients grad2, grad3, respectively, from your initial example.
Then:

img1 = (1.0 .- nm1) .* grad2[1] .+ nm1 .* grad2[2]
img2 = (1.0 .- nm2) .* grad3[1] .+ nm2 .* grad3[2]
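For reference, here is a self-contained version of that per-pixel interpolation; the endpoint colors are written out as plain RGB values (arbitrary illustrative choices, standing in for grad2[1], grad2[2], etc.). The scalar-times-color arithmetic comes from ColorVectorSpace, which Images loads:

```julia
using Images  # brings in Colors and ColorVectorSpace for color arithmetic

nm1 = rand(100, 100)        # normalized single-channel data
c_lo = RGB(0.0, 0.0, 0.0)   # black endpoint of the gradient
c_hi = RGB(1.0, 0.6, 0.0)   # illustrative channel color

# per-pixel linear interpolation between the two gradient endpoints
img1 = (1.0 .- nm1) .* c_lo .+ nm1 .* c_hi
```

The result is a Matrix{RGB{Float64}}, i.e. a proper image that can be displayed or saved directly.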

A convex combination of the two images is a blended image, i.e.

blended_img = a * img1 .+ b * img2

where a, b ≥ 0 and a + b = 1.
The simplest example is

blended_img = 0.5 * img1 .+ 0.5 * img2

For α-blending (compositing) see https://en.wikipedia.org/wiki/Alpha_compositing; the coefficients a, b in the above convex combination of images are:

a = α₁(1 - α₂) / (α₁(1 - α₂) + α₂)
b = α₂ / (α₁(1 - α₂) + α₂)

where α₁, α₂ are the α-parameters for img1 and img2, respectively.
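Written out as code (with illustrative values α₁ = 1, α₂ = 0.3), the two coefficients sum to 1 as required:

```julia
α₁, α₂ = 1.0, 0.3                # illustrative α-parameters
denom = α₁ * (1 - α₂) + α₂
a = α₁ * (1 - α₂) / denom        # weight of img1
b = α₂ / denom                   # weight of img2

(a, b)   # (0.7, 0.3)
a + b    # 1.0
```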


Thank you! This is exactly what I was looking for.

Though regarding the image coefficients, it does not seem necessary that a + b = 1:
blend = 1 * img1 .+ 1 * img2

blend = 2 * img1 .+ 2 * img2

if I have

α₁ = 1
α₂ = 1

that formula gives

a = 0.0
b = 1.0

For the purposes of this post though, scaling a and b to give a subjectively bright enough signal in the image for each channel does the trick.
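One caveat when the coefficients sum to more than 1: channel values can leave the [0, 1] range. ImageCore (loaded by Images) provides clamp01, which can be broadcast over the blend to keep the result a valid image (a sketch with random stand-in images):

```julia
using Images

img1 = rand(RGB{Float64}, 100, 100)   # stand-ins for the two channel images
img2 = rand(RGB{Float64}, 100, 100)

blend = 2 * img1 .+ 2 * img2          # coefficients > 1 can overflow [0, 1]
blend_clamped = clamp01.(blend)       # clamp each channel back into [0, 1]
```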

I haven’t yet tried ColorBlendModes.jl; my suggestion to define a new image as a convex combination of the two images was influenced by Python, where an RGB color is an array [r, g, b] with r, g, b ∈ [0, 1]. A convex combination of two colors is also a color.
Since your images img1, img2 are of type RGBA{Float64}, you can simply add them, but I don’t know how to interpret this operation from the point of view of images (rather than arrays).
The convex combination for α-compositing is the right one, but it is non-trivial for α₁ ∈ (0, 1] and α₂ ∈ (0, 1).
Example:

using Images
img1 = load("foregr500.png")   # foreground image
img2 = load("backgr500.png")   # background image
@assert size(img1) == size(img2)
α₂ = 0.3  # the parameter for the background image
α₁ = 1    # the parameter for the foreground image
# output alpha = α₂ + α₁*(1 - α₂) = 0.3 + 0.7 = 1
cimg = (α₂ * img2 .+ α₁ * (1 - α₂) * img1) / (α₂ + α₁ * (1 - α₂))
# i.e. cimg = 0.3*img2 .+ 0.7*img1

[foregr500 image]

[backgr500 image]

Their α-composition:

[alpha_blended image]

The two images img1, img2 are derived from the netCDF files, provided here: https://zenodo.org/record/4759091/files/GF_FESOM2_testdata.tar.gz, and read with xarray.
Later edit: Note that α₁, α₂ do not represent the α-parameter of the involved images (there is no alpha channel here). Both img1 and img2 have the type:

typeof(img1), typeof(img2)
        
(Matrix{RGB{N0f8}}, Matrix{RGB{N0f8}})

Rather, these parameters define the proportion of contribution of each image to the output image.
In the above example, α₁*(1-α₂)/(α₂+α₁*(1-α₂)), α₂/(α₂+α₁*(1-α₂)) give the contribution of img1, respectively img2 to the output image, cimg.
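Plugging the example’s values α₁ = 1, α₂ = 0.3 into those expressions confirms the 0.7/0.3 split used in the code above, along with the output alpha:

```julia
α₁, α₂ = 1.0, 0.3
w1 = α₁ * (1 - α₂) / (α₂ + α₁ * (1 - α₂))   # contribution of img1 (foreground)
w2 = α₂ / (α₂ + α₁ * (1 - α₂))              # contribution of img2 (background)
out_α = α₂ + α₁ * (1 - α₂)                  # output alpha of the composite

(w1, w2, out_α)  # (0.7, 0.3, 1.0)
```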
