Actually I do work with 3D data; I think it is quite easy to distinguish an MxNx3 array where M,N >> 3 from
true 3D data. Anyway, I am not religious about it: OpenCV also went the way of "array element = pixel", and I have written
the transformations to and from both representations many times.
That is how we used to do it, but we got bug reports because of it. What happens when you’ve snipped out a thin slab of an image that happens to have 3 slices in it?
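To make that failure mode concrete, here is a minimal sketch (the array sizes are made up for illustration): a thin slab cut from a genuine 3D volume has exactly the MxNx3 shape that a shape-based heuristic would misclassify as a color image.

```julia
vol = rand(256, 256, 50)   # a genuine grayscale 3D volume (e.g. a z-stack)
slab = vol[:, :, 5:7]      # snip out a thin slab of 3 slices

# By shape alone this is indistinguishable from an RGB image stored as MxNx3,
# even though every slice is ordinary 3D intensity data:
size(slab)                 # (256, 256, 3)
```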
I still think arrays of float/int/etc. need to be supported naturally … when I research new ideas I almost always
work on single-plane images, and only as a last step check whether color information adds any value.
Oh, I agree. The color interpretation should only be for visualization; all of the algorithms in Images.jl should work for arrays of numbers. If not, please do file an issue.
EDIT: but of course, it’s still “one array element = one pixel”. That rule won’t change.
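For going back and forth between the two representations without writing the conversion by hand, ImageCore (re-exported by Images.jl) provides `channelview` and `colorview`, which switch between the pixel-element view and the plain numeric-array view without copying. A minimal sketch:

```julia
using ImageCore  # re-exported by Images.jl; provides RGB, N0f8, channelview, colorview

img = rand(RGB{N0f8}, 4, 5)  # "one array element = one pixel": a 4×5 color image
A = channelview(img)         # 3×4×5 numeric view into the same memory (no copy)
img2 = colorview(RGB, A)     # and back to pixels, also without copying
```

Because both are views, algorithms written for arrays of numbers can operate on `A` while visualization code keeps seeing `img` as an array of pixels.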
Performance…
Interesting. Performance does seem to depend on the contents of the image, but I’m still getting better performance in Julia than Matlab. Here’s how I generated some fake data:
julia> img = falses(1200,1600);
julia> function makebox!(img, cntr, w=5)
           inds = (cntr[1]-w:cntr[1]+w, cntr[2]-w:cntr[2]+w)
           inds = map(intersect, axes(img), inds)  # clip the box to the image
           img[inds...] .= true
       end
makebox! (generic function with 2 methods)
julia> for i = 1:20; makebox!(img, (rand(1:size(img,1)), rand(1:size(img,2)))); end
Even if I first do `imgu = 0xff*img` and then run `@benchmark label_components($imgu .> 250)`, for me we're still faster than Matlab (though not by as large a factor). Out of curiosity, how many cores do you have? If Matlab is multithreaded (we are not) and if you have many more cores than I do (I have only 2), then maybe that could explain it?
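For completeness, a small self-contained sketch of the `label_components` call being benchmarked (box positions here are fixed rather than random, so the result is deterministic):

```julia
using Images  # label_components comes via ImageMorphology, re-exported by Images.jl

img = falses(20, 30)
img[2:5, 2:5] .= true       # one box
img[10:14, 20:24] .= true   # a second, disjoint box

labels = label_components(img)  # Int array: 0 = background, 1, 2, … = components
maximum(labels)                 # number of connected components (here: 2)
```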