"Kernel Queue" for faster Julia (Jupyter) Workflows

Many Julia users are a bit annoyed by the slow warm-up time. Of course, adapting workflows and getting used to it is not hard, but to me, after a few (not-so-intense) Julia years, it’s still somewhat annoying. Compared to Python, for example, it feels awkwardly slow when you just want to quickly do some plots and calculations, and I’d really like to stick to Julia.

I think I have an idea; sorry that I don’t directly post a prototype implementation, but I currently have a hard time writing up my PhD thesis :see_no_evil:

I think we can get around this problem easily by having a – let’s name it – “kernel queue”. What I mean by that is that there is always a spare Julia process ready to be attached to in your working environment, and as soon as you connect to one, a new one with a default configuration is immediately prepared and fired up to replace it. This creates a small memory overhead, which I’d consider negligible.
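
To make the mechanics concrete, here is a minimal sketch using Distributed workers as stand-ins for Jupyter kernels; a real implementation would have to talk to the Jupyter kernel manager instead, and the pool size and warm-up code here are made up for illustration:

using Distributed

const POOL_SIZE = 2                        # how many pre-warmed processes to keep ready
const kernel_queue = Channel{Int}(POOL_SIZE)

# Spawn a fresh worker, warm it up, and put it into the queue.
function spawn_warm_worker!()
    pid = addprocs(1)[1]
    remotecall_wait(pid) do
        @eval using LinearAlgebra          # stand-in for Plots, StatsBase, ...
    end
    put!(kernel_queue, pid)
end

# Hand out a ready worker and immediately start preparing its replacement.
function attach!()
    pid = take!(kernel_queue)
    @async spawn_warm_worker!()            # refill the queue in the background
    return pid
end

foreach(_ -> spawn_warm_worker!(), 1:POOL_SIZE)  # fill the queue once at startup
pid = attach!()                                  # this process is already warm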

This idea might sound a bit dirty, but I think it could even be used to solve yet another small issue: default templates for Jupyter notebooks.
When I work with Jupyter notebooks (no matter if Python, Julia, Haskell or whatever), I mostly do the same thing, something like:

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import numba as nb
import numexpr as ne

or

using Plots
using StatsBase
using LinearAlgebra
using BenchmarkTools

etc.

I’d like to have a button in my Jupyter GUI to load such a default notebook, where this is already in the first cell and has been executed in an already-available kernel, so that I can immediately start to work.
I know that at least the import part is doable with startup files, but those are not visible in a notebook, and such implicit imports/mechanisms will cause problems when sharing notebooks with others, so I prefer to have them explicitly in a cell… and of course, the kernel still needs to be launched first.
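
(For reference, the startup-file mechanism I mean on the Julia side is ~/.julia/config/startup.jl; a minimal, hypothetical example of the implicit approach I’d rather avoid:)

# ~/.julia/config/startup.jl -- runs at the start of every Julia session.
# Convenient, but invisible in a shared notebook, which is why I'd rather
# have the imports in an explicit first cell.
if isinteractive()
    using Statistics   # example package; pick whatever you always need
end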

For Julia, I would even go further and define an environment with a set of additional “warm-up commands”, like quickly creating and discarding a histogram(), a plot(), or whatever, so that those functions are already compiled. Whatever the user often does (a tiny minimisation problem to get Ipopt with JuMP ready, etc.)…
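
Such a warm-up script could be as simple as the following sketch; the concrete calls are just placeholders for whatever your workflow needs:

# warmup.jl -- hypothetical script run once in a freshly queued kernel,
# so the first-use compilation cost is paid before anyone attaches.
using Plots, StatsBase

plot(rand(10))                 # compile the basic plotting pipeline
histogram(randn(1_000))        # ... and histograms
fit(Histogram, randn(1_000))   # StatsBase's histogram fitting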

I think that a first quick-and-dirty implementation could be realised with some kind of kernel hack for Jupyter, but maybe you have a better idea.

What do you think? Do you think that precompilation will soon be ready and satisfyingly fast, or is such an experiment worth implementing?

I’ll try to spend a bit of my currently very limited time on Jupyter mechanics and I hope to present a small prototype.


Juno has had that for a couple of years, and it’s been working (mostly) fine. You’ll have to take some care to not precompile packages from multiple processes at the same time.


Ah ok, I’ll have a look, thanks.