First off, thanks for all of the suggestions. I’ll report back later after I’ve had time to try everyone’s suggestions. In the meantime, I’ll answer some questions, which will give everyone a better background.
Big picture: I’m trying to show what hyperspectral images look like under different illuminants (e.g. daylight vs LED light bulb). Instead of 3 channels corresponding to red, green, and blue, hyperspectral images might have 31 channels, corresponding to 400 through 700nm wavelengths, in steps of 10nm.
Why hyperspectral? Metamerism: two objects whose colors look identical despite reflecting completely different wavelengths. For example, object A might reflect 610 nm (orange), whereas object B reflects a mixture of yellow (580 nm) and red (640 nm), yet both look orange. In the attached images, the objects on the left are rendered under a full-spectrum illuminant, whereas the objects on the right are rendered under a 610 nm illuminant. In this case the metameric match breaks down: the mixture of yellow and red looks dark gray instead of orange.
nsajko: That’s just a snippet that is part of larger function. Below is an outline of what the function does:
function processHyperSpectral(images, CMF, illuminant)
# images is an array of 31 hyperspectral images (e.g. 400 nm, 410 nm … 700 nm)
# CMF is a color matching function, such as CIE 1931: 3 (XYZ) x 31 (wavelengths)
# illuminant is an array of the spectrum of a light source (daylight, incandescent bulb, etc.)
1) turn the CMF into a matrix of floats (it was an array of xyz values)
2) make separate image layers representing X, Y, and Z
3) precalculate illuminant * CMF (the color matching function adapted to the light source)
4) go through each of the 31 hyperspectral images and multiply them by the precalculated CMF;
this is summed into separate X, Y, and Z images
5) Normalize the X, Y, and Z images
6) Transform the X, Y, and Z images into separate R, G, and B images (CURRENT ISSUE)
7) Perform sRGB gamma correction
8) Fuse the 3 images into a single RGB image
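For anyone following along, steps 1–5 above can be sketched roughly like this. To be clear, this is my own minimal sketch — the function name, argument shapes, and the peak-Y normalization are assumptions, not the actual processHyperSpectral code:

```julia
# Hypothetical sketch of steps 1-5; names and shapes are assumptions.
function spectral_to_xyz(images::Vector{Matrix{Float64}},  # 31 bands, 400:10:700 nm
                         cmf::Matrix{Float64},             # 3 (XYZ) x 31 (wavelengths)
                         illuminant::Vector{Float64})      # 31 spectral power values
    h, w = size(images[1])
    xyz = [zeros(h, w) for _ in 1:3]         # step 2: separate X, Y, Z layers
    weighted = cmf .* illuminant'            # step 3: illuminant-adapted CMF (3 x 31)
    for band in eachindex(images), c in 1:3  # step 4: accumulate each band's contribution
        xyz[c] .+= weighted[c, band] .* images[band]
    end
    k = 1 / maximum(xyz[2])                  # step 5: normalize so peak Y == 1 (my assumption)
    return [layer .* k for layer in xyz]
end
```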
Dan: 1) No, I did not try each index, because I already knew the horizontal and vertical dimensions. Earlier iterations used:
for x in 1:horiz
for y in 1:vert
… which was slower than the Cartesian method.
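That slowdown is probably loop order: Julia arrays are column-major, so with `y` in the inner loop the accesses to `layer[x, y]` stride through memory. `eachindex`/`CartesianIndices` visit elements in memory order without hard-coding the dimensions — a minimal sketch:

```julia
A = rand(480, 640)
B = similar(A)

# CartesianIndices iterates in column-major (memory) order, so this is
# cache-friendly and works for any number of dimensions.
for I in CartesianIndices(A)
    B[I] = 2 * A[I]
end
```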
- I’m not sure what you mean by h1, h2, and h3. Please elaborate.
bertschi: Yes, there is a historical reason, but it might work better to take your suggestion, go to the next step, and set it up as a single matrix rather than an array of smaller matrices.
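For reference, here's a sketch of what that "one big matrix" version of the XYZ → linear RGB step might look like. The coefficients are the same ones from my loop further down; the helper name and the flatten-multiply-reshape approach are just one way to do it:

```julia
# XYZ -> linear sRGB matrix (same coefficients as the per-pixel loop version).
const M_XYZ2RGB = [ 3.2404542 -1.5371385 -0.4985314;
                   -0.9692660  1.8760108  0.0415560;
                    0.0556434 -0.2040259  1.0572252]

# Hypothetical helper: stack the layers as rows of a 3 x (h*w) matrix,
# do a single matrix multiply, then split back into R, G, B layers.
function xyz_to_rgb(X::Matrix{Float64}, Y::Matrix{Float64}, Z::Matrix{Float64})
    h, w = size(X)
    xyz = vcat(vec(X)', vec(Y)', vec(Z)')   # 3 x (h*w)
    rgb = M_XYZ2RGB * xyz                   # one multiply replaces the nested loops
    return [reshape(rgb[c, :], h, w) for c in 1:3]
end
```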
The original code leaned heavily on Colors.jl. The constant battle between types (Gray, RGB, and XYZ) was not only a pain in the ass, but made efficient coding with arrays/matrices and loops all but impossible. For example, an earlier version looked like this:
for x in 1:horiz
for y in 1:vert
redLayer[x,y] = (3.2404542 * Xlayer[x,y]) + (-1.5371385 * Ylayer[x,y]) + (-0.4985314 * Zlayer[x,y])
greenLayer[x,y] = (-0.969266 * Xlayer[x,y]) + (1.8760108 * Ylayer[x,y]) + (0.041556 * Zlayer[x,y])
blueLayer[x,y] = (0.0556434 * Xlayer[x,y]) + (-0.2040259 * Ylayer[x,y]) + (1.0572252 * Zlayer[x,y])
end
end
…because the style of the codebase encouraged (forced?) treating red, green, and blue (or X, Y, and Z) as separate entities, rather than looping through colors 1, 2, and 3. So my first course of action was to rip that out and have a separate array entry for each color (later, during gamma correction, I do loop through the colors). The next step ended up looking like this:
for x in 1:horiz
for y in 1:vert
colorLayers[1][x,y] = (3.2404542 * Xlayer[x,y]) + (-1.5371385 * Ylayer[x,y]) + (-0.4985314 * Zlayer[x,y])
colorLayers[2][x,y] = (-0.969266 * Xlayer[x,y]) + (1.8760108 * Ylayer[x,y]) + (0.041556 * Zlayer[x,y])
colorLayers[3][x,y] = (0.0556434 * Xlayer[x,y]) + (-0.2040259 * Ylayer[x,y]) + (1.0572252 * Zlayer[x,y])
end
end
In addition, all of the typecasting and such really slowed down my code when I just needed to manipulate arrays/matrices of floats. So first translating everything to floats, doing my several steps of manipulation, and then converting back to RGB types at the last minute ended up yielding much faster code.
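Since step 7 works on those same float layers, here's what the standard sRGB transfer function looks like as plain float math (the in-place helper and the clamp-to-[0, 1] handling of out-of-gamut values are my assumptions):

```julia
# Standard sRGB transfer function: linear value -> gamma-encoded value, per channel.
srgb_gamma(c) = c <= 0.0031308 ? 12.92c : 1.055 * c^(1 / 2.4) - 0.055

# Hypothetical helper: apply the curve in place to one float layer.
# Clamping to [0, 1] first is an assumption about out-of-gamut handling.
function gamma_correct!(layer::AbstractMatrix{Float64})
    @. layer = srgb_gamma(clamp(layer, 0.0, 1.0))
    return layer
end
```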