We use N0f8 to store image data by default, but I wonder whether it quantizes gray levels correctly.
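(For reference, N0f8 is just Normed{UInt8, 8}: a UInt8 code k interpreted as the value k/255, so there are 256 evenly spaced levels and converting a float picks the nearest one. A quick sanity check, separate from the experiment itself:)

using FixedPointNumbers   # re-exported by Images

@assert N0f8 === Normed{UInt8, 8}         # an 8-bit code k, read as k/255
@assert reinterpret(N0f8(1.0)) == 0xff    # so 1.0 is stored as the code 255
x = N0f8(0.4)                             # conversion rounds to the nearest multiple of 1/255
@assert abs(float(x) - 0.4) <= 0.5/255    # i.e. the stored value is within half a step

The experiment below compares this built-in conversion with a quantization function I wrote myself.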
using Images

# my custom function: quantize a grayscale image to 2^bit gray levels
function down_intensity!(img::Array{Gray{T}}, bit) where T
    gray_level = range(0, stop = 1, length = Int(2^bit + 1))
    inds = [gray_level[i] .< img .<= gray_level[i+1]
            for i in 1:length(gray_level)-1]
    # This for-loop is slow
    for i in 1:length(gray_level)-1
        img[inds[i]] = fill(gray_level[i], length(findall(inds[i])))   # floor each bin to its lower edge
    end
    # rescale so the brightest quantized level maps back to 1
    # (note: this rebinds img, so the result is the return value, not the argument)
    img = Gray{T}.(img / maximum(img))
    return img
end

function down_intensity(img::Array{Gray{T}}, bit) where T
    dest_img = copy(img)
    down_intensity!(dest_img, bit)
end

# a 50×512 horizontal gray ramp running from 0 to 1
graybar = Gray{Float32}.(transpose(repeat(range(0, stop = 1, length = 512), 1, 50)))
bits = 4:-1:1

# built-in way: round-trip through Normed{UInt8, bit}, then store as N0f8
imgs_builtin = [Gray{N0f8}.(Gray{Normed{UInt8, bit}}.(graybar)) for bit in bits]
imgs_custom  = [down_intensity(graybar, bit) for bit in bits]

# stack all bit depths vertically: built-in on the left, my version on the right
img_diffs = colorview(Gray, hcat(vcat(imgs_builtin...), vcat(imgs_custom...)))
The left column shows the results of the built-in way and the right column the results of my way. As you can see, the gray levels in the left column are not spaced uniformly: the bands are wider in the center and narrower at both ends.
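As far as I can tell, the difference comes from how the level is picked: converting a float to Normed{UInt8, bit} rounds to the nearest of the 2^bit levels k/(2^bit - 1), so the lowest and highest levels only collect values within half a step of 0 or 1, whereas my function uses 2^bit equal-width bins and takes the lower edge. A quick check of the bin sizes (my own sketch, not anything built into Images):

using Images

bit = 2
xs  = Gray{Float32}.(range(0, stop = 1, length = 101))

# built-in path: each value is rounded to the nearest of the 2^bit levels k/(2^bit - 1);
# extract the raw integer codes to count how many samples land on each level
q      = Gray{Normed{UInt8, bit}}.(xs)
codes  = reinterpret.(gray.(q))
counts = [count(==(k), codes) for k in 0:2^bit - 1]
println(counts)   # something like [17, 33, 34, 17]: the end levels collect about
                  # half as many samples as the middle ones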
Update:
I found a note saying that the pixel at (i, j) is a sensor that integrates the intensity over an area spanning i ± 0.5, j ± 0.5 (this is a good model of how a camera pixel actually works). From that point of view, the N0f8 behavior seems reasonable.
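If I apply the same picture along the intensity axis, each stored code k/255 stands for every intensity within half a step of it, which is exactly what the round-to-nearest conversion implements. A tiny illustration (again just my own sketch):

using FixedPointNumbers   # N0f8 lives here; re-exported by Images

# every float within half a step (1/510) of k/255 maps to the same N0f8 code,
# mirroring the "pixel integrates over i ± 0.5" idea, but for intensity
k, step = 100, 1/255
lo = N0f8(k*step - 0.49*step)
hi = N0f8(k*step + 0.49*step)
@assert reinterpret(lo) == reinterpret(hi) == UInt8(k)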