Introduction
I’m currently developing a suite of deep learning computer vision tutorials, with a special emphasis on medical imaging. The tutorials are designed as interactive Pluto notebooks that you can use directly for training and model inference. The project is still in its early stages, and no single tutorial is complete yet, so I’m seeking feedback from the Julia community as early as possible to help steer this “package” in the right direction. That said, I already have a clear idea of what I want it to look like (see the shadcn/ui aside below).
Workflow
The intended workflow is straightforward: you begin with a pre-made tutorial for your task, such as image segmentation, then customize your training pipeline by swapping in different components (for example, using Augmentor.jl versus DataAugmentation.jl for the data augmentation step of the pipeline). These nuances are demonstrated in the component-specific notebooks. The end result is a custom Pluto notebook: a personalized training and inference tool that offers full transparency into the underlying processes.
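To make the component-swapping idea concrete, here is a minimal sketch of what exchanging the augmentation step might look like. The Augmentor.jl pipeline composition (`|>`, `augment`) and the commented DataAugmentation.jl calls (`apply`, `Image`, `itemdata`) follow those packages' documented interfaces, but the `preprocess` function name and the specific transform choices are hypothetical placeholders standing in for whatever a given tutorial defines:

```julia
using Augmentor

# Option A: compose an Augmentor.jl pipeline and apply it per image.
pl = FlipX(0.5) |> Rotate(-10:10) |> CropSize(128, 128)
preprocess(img) = augment(img, pl)

# Option B: swap in DataAugmentation.jl instead. Its interface differs:
# transforms are applied to wrapped items via `apply`, and the array is
# recovered with `itemdata`.
# using DataAugmentation
# tfm = Maybe(FlipX()) |> Rotate(10) |> CenterCrop((128, 128))
# preprocess(img) = itemdata(apply(tfm, Image(img)))
```

The rest of the notebook's training loop would call `preprocess` without caring which package backs it, which is what makes the components interchangeable.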
This workflow is built on our new tool, Glass Notebook, which integrates with GitHub to publish Pluto notebooks as interactive websites. Glass also supports features like integrating package documentation directly into the tutorials, as seen in ComputerVisionTutorials.jl.
Feedback
Before I proceed further, I would greatly appreciate feedback on this tutorial repository:
(1) Dive into the core packages and ComputerVisionTutorials.jl itself, and share your ideas for tutorials and components.
(2) Try out Glass Notebook if you’re interested, and consider using Glass for your package documentation, especially if it makes sense to integrate your package into ComputerVisionTutorials.jl.
(3) I’m very open to collaboration via suggestions, PRs, issues, etc.
Aside
For anyone interested, the inspiration for a “package” like this comes from shadcn/ui, a highly useful collection of copy-and-paste UI components in web development. I believe a similar copy-and-paste workflow could be surprisingly useful for deep learning.