I have a color image represented by an array of UInt8 of size (58638, 22376, 3) that I want to display using imshow from the ImageView package. The RGB channels are contained in the third dimension of the array. I understand that for representing color images in Julia there is an RGB type and also an N0f8 type for values between 0 and 255, which seems to be what I’m looking for.
I tried using colorview to change the type of my array to N0f8 but that didn’t work.
What is the easiest way to display my array as a color image? (Why can’t the imshow function just take my array and understand that, if it has three dimensions, it is a color image, and display it directly?)
ImageView.jl was largely written by a neuroscientist, so it’s designed to work well with 3D magnetic resonance images. Julia’s image convention packs each pixel’s color channels into a single element (such as RGB{N0f8}), so a pixel’s values are adjacent in memory rather than spaced out along the third dimension as in your array. To convert between these conventions, you can use permutedims to re-order the image’s axes, or PermutedDimsArray if you want to re-order ‘lazily’ (recalculating indices on the fly rather than copying). There’s a good example in the docs, but briefly:
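Roughly like this (a sketch: the small `A` stands in for your big array, and the `colorview`/`normedview` lines at the end are my reconstruction from the ImageCore docs, so double-check them there):

```julia
# `A` stands in for the original (58638, 22376, 3) UInt8 array
# (a small random array here so the sketch is self-contained).
A = rand(UInt8, 4, 5, 3)

# Move the color channel to the first dimension, as the image packages expect.
B = permutedims(A, (3, 1, 2))            # eager copy, size (3, 4, 5)
Blazy = PermutedDimsArray(A, (3, 1, 2))  # lazy view: indices recomputed, no copy
@assert size(B) == size(Blazy) == (3, 4, 5)

# With ImageCore/ImageView loaded, the UInt8 data can then be viewed as colors
# (not run here, since it needs the packages and a display):
#   using ImageCore, ImageView
#   img = colorview(RGB, normedview(B))  # RGB{N0f8} view of the same memory
#   imshow(img)
```

For an array as large as yours, the lazy `PermutedDimsArray` route avoids duplicating ~4 GB of data, at the cost of slower element access.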
Why can’t the imshow function just take my array and understand that, if it has three dimensions, it is a color image, and display it directly?
Try answering these questions:
What is the value range of an image represented by Array{Float64, 3}? (I’ve seen images with values from 0:255, 0:1, -1:1, or 0:65535, all of which would fit in a Float64)
What is the color space of Array{Float64, 3}? (RGB is not the only 3-channel color space)
Does an array of shape (H, W, 3) represent one 3-channel image of size H×W or three 1-channel images of size H×W?
Assuming an array of shape (3, 3, 3), which dimension corresponds to the image channels?
I believe you can spot ambiguities. A way to reduce them is to introduce types, which is what the Julia image ecosystem did.
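As a toy illustration of how a type removes those ambiguities (`MyRGB` here is hypothetical, standing in for the real `ColorTypes.RGB`):

```julia
# Hypothetical element type: once each pixel is a single RGB value,
# the array shape and the meaning of each number are no longer ambiguous.
struct MyRGB{T}
    r::T
    g::T
    b::T
end

# A 4×5 Matrix{MyRGB{Float64}} can only mean one 4×5 color image;
# there is no separate channel dimension left to misinterpret.
img = [MyRGB(rand(), rand(), rand()) for _ in 1:4, _ in 1:5]
@assert size(img) == (4, 5)
@assert eltype(img) == MyRGB{Float64}
```

The ecosystem’s actual types go further: the element type also fixes the value range (N0f8 means 0–1 in 8 bits) and the color space (RGB vs. HSV vs. Lab), answering all four questions at once.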
Ok, thank you, it works!
Indeed, my image is big and the performance is not good. Is there a package to display very big images with good performance when zooming and panning? (Should I create another post with this question?)
Ok, thank you, I understand the issue here. I guess it is a balance between user-friendliness and robustness of the program. And maybe I’m just too used to using Matlab which is very user-friendly.
It’s not unreasonable to expect that a function made to deal with images interprets an M×N×3 UInt8 array as an image. For example, in GMT this will plot an image with random colors:
imshow(rand(UInt8,64,64,3))
But this of course makes many assumptions, namely that the image layout is band-interleaved.
The problem of the image size is a very different thing. No software will show you all of those pixels, even if it, and the machine where it runs, could hold them all in memory. The solution here is to load only the chunk that is needed when zooming, and only a decimated version of the full image when displaying it all. The GeoTIFF format lets us have an image with pyramid layers, and it’s up to the software/users to decide which layer to load. I think ArchGDAL can deal with situations like this (GMT can too, but it isn’t yet polished enough to handle this smoothly).
You’re pushing up against the limits of a lot of hardware here - your image is ~4 GB, and if those values are converted internally to Float32 (as is the case for many GPU-based plotting libraries), that’ll take you to ~16 GB, which is larger than the memory of almost all consumer GPUs. GLMakie may be capable of doing this in principle (see here), but I don’t know about in practice on your hardware - mine was unable to handle it. @sdanisch ?
I was thinking of a solution like what Matlab does to plot images. Matlab only takes the pixels needed for the plot at the current zoom level, by decimating the image. So there is no interpolation, but it’s very fast and nothing extra is loaded in RAM or GPU memory.
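That decimation idea can be sketched in a few lines of Julia (`decimate` is a hypothetical helper, not part of any plotting package): choose a stride so that at most roughly `viewport_px` pixels per side reach the plot.

```julia
# Hypothetical helper: subsample an H×W×3 array with a stride chosen
# from the viewport size, returning a view (no copy, nothing extra in RAM).
function decimate(img::AbstractArray{<:Any,3}, viewport_px::Integer)
    h, w = size(img, 1), size(img, 2)
    step = max(1, cld(max(h, w), viewport_px))  # cld = ceiling division
    return @view img[1:step:h, 1:step:w, :]
end

A = rand(UInt8, 1000, 800, 3)
small = decimate(A, 100)   # stride 10 → size (100, 80, 3)
@assert size(small) == (100, 80, 3)
```

When zoomed in, a real implementation would first slice the viewport region and then decimate only that slice, so deep zooms stay cheap too.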
Using OpenGL-based plotting systems, I was limited by the maximum texture size, ~(32000, 32000) on my GPU, so the solution (in your link) of tiling textures seems to be a nice workaround. (The computer I use to process those big rasters has a GPU with 24 GB of memory.)
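The tiling workaround can be sketched like this (`tiles` is a hypothetical helper, and 32 stands in for the real texture-size limit of the GPU):

```julia
# Hypothetical helper: split an H×W×3 image into a grid of views, each at
# most `tile` pixels per side, so every piece fits under the texture limit.
function tiles(img::AbstractArray{<:Any,3}, tile::Integer)
    h, w = size(img, 1), size(img, 2)
    return [(@view img[i:min(i + tile - 1, h), j:min(j + tile - 1, w), :])
            for i in 1:tile:h, j in 1:tile:w]
end

T = tiles(rand(UInt8, 70, 50, 3), 32)
@assert size(T) == (3, 2)               # 3×2 grid of tiles
@assert size(T[1, 1]) == (32, 32, 3)    # interior tile
@assert size(T[3, 2]) == (6, 18, 3)     # edge tiles are smaller
```

Each tile would then be uploaded as its own texture and positioned at the right offset, so the full image never has to exist as one oversized texture.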