Makie based data labeling setup


Hi all, I’m trying to create a Julia-based image labeling solution for downstream ML (ideally Flux) transfer learning. I didn’t see anything that obviously fit, so I decided to try building my own. I am very impressed with Makie, and the interactive mouse examples are almost perfect, but there are a few glitches in my implementation: when I create the scene from the tif of interest (a large 2160 x 2560 RGB image of cell nuclei), I can label and record points behind the image, but not on top of the image. I’m guessing I need to pull the scatter! to the foreground somehow, but going through the Makie documentation I’m struggling a little to understand how overlaying works (if that is the right way to describe it). Any help would be appreciated; example code is below, with an example Makie output window below that. Thank you! -Andrew

```julia
# load test image
using Makie, FileIO

img = load("data/test.tif")

scene = hbox(image(img))

clicks = Node(Point2f0[(0, 0)])

on(scene.events.mousebuttons) do buttons
    if ispressed(scene, Mouse.left)
        pos = to_world(scene, Point2f0(scene.events.mouseposition[]))
        push!(clicks, push!(clicks[], pos))
    end
    return
end

scatter!(scene, clicks, color = :red, marker = '+', markersize = 0.1)

RecordEvents(scene, "nuclei_picking")
```


I’m not sure how you got the idea of using hbox that way, which seems to be the main problem here :wink:
I’d recommend writing the code like this:

```julia
# load test image
using Makie, Colors, FixedPointNumbers

img = rand(RGB{N0f8}, 1000, 1000)

scene = Scene(resolution = (1000, 1000))
image!(scene, img)

clicks = Node(Point2f0[(0, 0)])

on(scene.events.mousebuttons) do buttons
    if ispressed(scene, Mouse.left)
        pos = AbstractPlotting.mouseposition(scene)
        push!(clicks, push!(clicks[], pos))
    end
    return
end

scatter!(scene, clicks, color = :red, marker = '+', markersize = 20, show_axis = false)

RecordEvents(scene, "nuclei_picking")
```


Haha, desperation breeds creative hbox usage? Not sure how I strayed the wrong way on the example, but this definitely solved my problem, thank you so much Simon! -Andrew


Happens, I guess :wink: If you can pinpoint a particular example that confused you, let me know so that we can update it!
hbox is there to compose multiple plots, so passing it a single plot doesn’t work very well.
I should just make that an error!
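For example (just a sketch with placeholder data), composing two independent plots side by side is what hbox is meant for:

```julia
# Sketch: hbox lays out several separate plots next to each other.
using Makie

scene = hbox(
    scatter(rand(10), color = :red),
    lines(rand(10), color = :blue),
)
```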


This is really cool and I can see myself using something like this for my research. Great idea.


Glad you like the idea Alejandro! My thoughts exactly :slight_smile:

I’ve been playing around with this in a private repo, but I’ll try to put something public together ASAP in case it helps others (forks/pull requests/feature brainstorming welcome). The motivation was actually using DeepLabCut’s Python-based training UI, which was surprisingly limited, and then discovering that Makie had advanced a lot since I last checked it out (thanks again Simon!).


I’ll be glad to help if you make this public. I’m also interested in knowing a little more about your downstream analysis. In my case, I take pictures of tiny egg masses that have been stained, and we count the eggs on the screen. I load the image in GIMP and add a different point to each egg, depending on whether it eclosed or not. After that, I load the layer with just the points into Julia and count how many of each color there are. It works OK, but it isn’t very flexible. I can see something like what you are doing being more flexible, and it would allow me to do everything in one environment. Anyway, hopefully we can discuss it more.
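Roughly, the counting step looks like this (a sketch with synthetic data standing in for the exported GIMP layer; the actual marker colors and file names differ):

```julia
# Sketch: count egg markers by color in an exported points layer.
using Images, Colors

# In practice this would be: points = load("points_layer.png")
points = fill(RGBA(0, 0, 0, 0), 100, 100)   # transparent background
points[10, 10] = RGBA(1, 0, 0, 1)           # red marker = eclosed
points[30, 30] = RGBA(1, 0, 0, 1)
points[20, 20] = RGBA(0, 0, 1, 1)           # blue marker = not eclosed

eclosed     = count(==(RGBA(1, 0, 0, 1)), points)
not_eclosed = count(==(RGBA(0, 0, 1, 1)), points)
```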


Very cool, C. elegans eggs? I have a similar workflow I’m planning on, with a (hopefully) simple dapi-based nuclei counting approach and a (likely much more complicated) cell-type determining neural net with transfer learning approach that takes in different channels from confocal images.

In previous efforts I used Images.jl to perform blob detection using dapi alone, then measured an arbitrary area surrounding the center of the blob/nucleus in the other channels to determine cell type based upon markers. This was surprisingly robust, but I was intrigued by U-Net, and I know Flux.jl will eventually allow us to replicate some of that functionality in Julia (there is already a pull request for a SkipConnection layer). Ultimately, I think light training using Makie + Flux.jl with a U-Net architecture is probably the way forward.
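The blob-detection step was along these lines (a sketch using Images.jl’s blob_LoG on a synthetic stand-in for a dapi channel; the scale range is illustrative):

```julia
# Sketch: Laplacian-of-Gaussian blob detection with Images.jl.
using Images

# Synthetic stand-in for a dapi channel: dark field, one bright nucleus.
img = zeros(Float64, 100, 100)
for I in CartesianIndices(img)
    img[I] = exp(-((I[1] - 50)^2 + (I[2] - 50)^2) / (2 * 4.0^2))
end

# Detect blobs over a range of scales; each result has a location and σ.
blobs = blob_LoG(img, 2.0 .^ (1:4))

# Centers can then be used to sample the other channels.
centers = [b.location for b in blobs]
```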

What do you think? Definitely let me know if I’m overlooking any Julia packages already in existence that we can use to build upon, and like I said, will try to get a github package going in the near future!


No, I work with a non-model organism for pest management. These are eggs that are laid almost like shingles on a roof, so there isn’t an easy blob to detect.

This is a very easy example. I haven’t spent much time on it lately, but I did try a lot of image manipulation strategies to see if I could get the contours of the eggs, and I wasn’t successful. Since I have technicians who can help, I settled for a fairly manual and involved protocol, but I would love something that makes things easier, especially if I can do everything in Julia. Anyway, I’m far from an expert in this field, but I’m ready and willing to learn more.


Even cooler! I appreciate the challenges of working with anything other than yeast/worms/zebrafish/mice, and it does look like you have quite the challenge there. It’s a good thing you have technicians to help, but this is exactly why we need a flexible approach that can be tailored to our needs. Somewhat naively, I think proper transfer learning plus a CNN/ResNet/U-Net architecture could work on this? Will be fun to figure it out!


Hey Alejandro,

Just wanted to let you know I put together a basic Makie.jl-based image annotation package!

It is very much a work in progress, but I wanted to get something public sooner rather than later. It also has some significant shortcomings, I think all stemming from my side and not Makie.jl, so certainly treat it as a prototype. Please fork/comment/open issues!

(also, thank you again Simon! Of course this heavily borrows from the interaction examples in the Makie.jl package, I’m very indebted)