Is there a good way to run an interpreted IJulia session?

Hello all,

I know about JuliaInterpreter and things like Revise to avoid recompilation for package development, and DaemonMode to achieve a similar end for script development, but the place where time-to-first-plot has been especially frustrating for me is in a Jupyter or Pluto notebook. There, I’m trying things out in an exploratory/learning setting, running little chunks of code (not a script), and I don’t want or need the overhead of setting up a package.

It’s these times when I feel the most frustration compared to Python. I figure an easy solution would be to just interpret rather than compile in these settings. I see that JuliaInterpreter has an @interpret macro, but it obviously gets old typing that on every line. Is there a way for an IJulia kernel to interpret everything by default? I imagine this wouldn’t be too hard to implement, but I’m wondering whether it already exists, or whether it would be unusably slow or something.
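For context, here’s a minimal runnable sketch of the macro I mean:

```julia
using JuliaInterpreter

# Run a single call through the interpreter instead of the JIT compiler;
# JuliaInterpreter exports @interpret for exactly this.
@interpret sum(rand(100))
```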


I’m not sure whether the following things are new to you or whether they’ll work for your case:

  • My option of choice would be to use PackageCompiler.jl; see for example this talk, which also explains how to speed up package loading in general: PackageCompiler and Static Compilation - YouTube
  • Another option, closer to the idea of just @interpret-ing everything, would be to install a new Julia kernel via installkernel("Julia nodeps", "--compile=min", "--optimize=0") (see the snippet after this list). This will reduce the amount of compilation.
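Putting the second option together (if I’m reading IJulia’s installkernel signature right, each command-line flag is passed as its own argument):

```julia
using IJulia

# Install a second Jupyter kernel that starts Julia with minimal compilation.
# The kernel name "Julia nodeps" is arbitrary; pick whatever you like.
installkernel("Julia nodeps", "--compile=min", "--optimize=0")
```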

My experience with PackageCompiler

Yes, I’m aware of PackageCompiler and sysimages, and I’ve given it a good try, but it’s still a pain. I thought the VS Code extension would save the day by making it easy to generate a sysimage for a given environment, but it breaks on incremental builds, meaning I have to redo the entire build every time I want to add a package. That’s a real downer, given that the compilation takes so long (~10-15 minutes for me). Even if all of this worked seamlessly, it’s still a couple more steps than in Python, where I just add a package and start using it interactively.
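For completeness, the manual workflow I keep redoing looks roughly like this (the package list and output path here are just examples):

```julia
using PackageCompiler

# Build a custom sysimage containing the plotting stack; this is the
# ~10-15 minute step that must be repeated whenever the package set changes.
create_sysimage(["GLMakie"]; sysimage_path="sys_plotting.so")
```

followed by starting Julia with julia --sysimage sys_plotting.so.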

Trying interpreted Julia for interactive use

TLDR: doesn’t look good :cry:

I just tried the --compile=min --optimize=0 suggestion, entered using GLMakie, and right off the bat I had to wait for precompilation. So I tried a fresh environment instead and ran ] add GLMakie. One hour later, precompilation still wasn’t done.

I tried --compile=no, and that seemed to be incompatible with precompiled packages, spitting out a bunch of code missing for ... errors. So I tried a fresh environment and ] add Plots; precompilation took nearly 15 minutes. Finally, I tried plot([1, 2, 3]), and after another batch of code missing complaints, the plot window showed up about three seconds later. Not horrible, but still a bit slow for interactive use, and about the same speed as running @interpret plot([1, 2, 3]). So I guess this works, but with code missing complaints and a bit of lag.
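For the curious, the comparison was roughly this (timings are from my machine and will certainly vary):

```julia
using JuliaInterpreter, Plots

@time plot([1, 2, 3])             # normal JIT path: fast once compiled
@time @interpret plot([1, 2, 3])  # interpreted path: ~3 seconds in my test
```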

Actually, it looks like running Julia normally (with JIT) and pulling up a plot window (post-compilation) takes about 2 seconds, so much of the slowness is due to Plots rather than the interpreter. I tried using GLMakie instead, which does pull up the window instantly once JIT-compiled. So how fast does it do that when interpreted? It’s been a minute now and the window is not responding, with no plot showing up. Same thing for --compile=no, --compile=min, and @interpret. Basically, it looks like my idea won’t do much good if basic packages like Makie are unusably slow or broken when interpreted :cry:


You could avoid this with Pkg.pin: precompile the packages that take a long time (usually the plotting package) and then pin their versions. That way you don’t need to recompile them each time…
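Something along these lines, I mean (GLMakie is just an example):

```julia
using Pkg

# Precompile the expensive package once, then pin it so later `add`s
# can’t upgrade it and invalidate the precompiled version.
Pkg.add("GLMakie")
Pkg.pin("GLMakie")
```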

I may be misremembering, but I don’t think the problem was that adding a package changed the versions of previously compiled packages. Maybe that is what happened, though.