Julia development workflow with vim and terminal

Dear All,

I edit files using vim and run Julia in a terminal split next to my editor. I am surely not the first one to adopt this setup.

My workflow is currently the following:

  • I edit a driver.jl file that starts with module driver
  • I move to Julia in the other terminal split, and run include("driver.jl")

I am interested in what people with a similar setup do when developing. I am really new to Julia, and this is what I find awkward:

  • When prototyping, I continually have to issue the display() command to see the output of my computations; this is presumably because I am using a module. The same applies to plots and figures.
  • If I don’t wrap my code in a module, old variables remain in the workspace between runs (I don’t know how to clear them).
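For concreteness, a minimal sketch of the kind of driver.jl I mean (the contents are just an illustration):

# driver.jl
module driver

x = 0
y = exp(x)
display(y)   # needed to see y at the REPL; include() returns only the module

end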

I would be interested in what people do to address the points above, but more generally, in what their workflow looks like.

Thank you!

Have you checked out Modern Julia Workflows?

Thanks, that’s useful @Yuan-Ru-Lin. What workflow do you use? Note that I don’t use VSCode, but another editor.

I’ve used both vim and Emacs along with a terminal window next to the editor. My workflow looks like:

  1. Use Revise.jl, which keeps track of changes in your code and reloads it for you.
  2. Always develop code as part of a package. I usually use PkgTemplates.jl to initialize a package from a template I have saved. This helps you with setting up documentation, git, CI, etc. You can also go into the Pkg mode and type generate Foo, which quickly generates a barebones package if you don’t need the bells and whistles of PkgTemplates.jl.
  3. For complex functions, make a function within your package to test it and initialize all parameters within the function. This avoids cluttering the global environment with variables.

This allows you to immediately run functions from the code you’re developing without using include or display.
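A minimal sketch of what steps 1 and 2 might look like in practice (Foo and run_experiment are hypothetical names):

# In the REPL's Pkg mode (press `]`):
#   pkg> generate Foo     # create a barebones package
#   pkg> dev ./Foo        # make it available in the current environment

using Revise   # load Revise first so the package gets tracked
using Foo      # edits to Foo/src/Foo.jl are now picked up automatically

Foo.run_experiment()   # hypothetical function defined inside the package

From then on you just edit the package source, save, and call the functions again.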


I use NeoVim and a plain REPL. When I prototype, I start by opening the REPL and working on a proof of concept (PoC). Once I get it working, I copy the code I typed in the REPL, create a script by calling edit("script.jl") in the REPL, and then paste the code into the script file (rudimentary, I know!). [1]

What happens next depends on how winding the snippet is. If it’s really twisted, I stay in the editor and try to simplify it. If it’s simple, I just close the file, go back to the REPL, and continue getting more PoCs working.

At some point, I would start having functions and a set of calls to those functions; that’s when I split the file into something like lib.jl and script.jl. With these two files, I could start another session of work by

  1. loading the functions with includet("lib.jl") (see the sketch after this list),
  2. loading the calls with include("script.jl"),
  3. continuing to work in the REPL.
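A minimal sketch of such a session, assuming lib.jl holds the function definitions and script.jl the calls to them:

using Revise

includet("lib.jl")    # definitions are tracked, so edits to lib.jl reload automatically
include("script.jl")  # run the recorded calls once
# ...then keep experimenting in the REPL, calling the functions from lib.jl directly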

Sometimes, when I want to refactor code, I would run

entr(["lib.jl"]) do  # `entr` is exported by Revise.jl
    # some calls yanked from script.jl [1]
    # possibly prefixed by some `@time` to catch performance degradation
end

and then proceed to modify lib.jl and see if anything breaks.

I rarely wrap my code in a module unless I want to make it a package. Admittedly, variables that a module would have kept local do bite me sometimes, but with the code factored out as above it is common (and easy) to reload everything, so such bugs are caught early.


[1] For the record, I have found lemonade incredibly useful. I can yank lines from a file on a remote machine with "+y and paste them on my local machine with "+p (and vice versa), as if they were the same machine. This is especially important since I work mostly on remote machines in an HPC system.




Thank you both @Yuan-Ru-Lin and @Eliassj, this is very helpful!


I would just add that I also use Neovim + tmux, and slime is quite useful for highlighting a snippet and executing it directly in the REPL. It works with a bunch of other multiplexers (screen is the default). super fingers, which is tmux-specific AFAIK, is also very useful for jumping directly to an error in a stacktrace.


I would add, as a vim user, that VSCode has a vim emulation plugin which, although not perfect, is reasonable enough, and for me it justified the use of VSCode with all its other features.

I still use vim in various situations, but a good IDE can really make our life easier most of the time.


Check out the iron.nvim plugin for an interactive REPL inside Neovim.

I primarily write code in VSCode with the aforementioned vim emulation plugin, and then iteratively test the functions as I build them out in the REPL.

As for the terminal side of things, I couldn’t find anything to give me a vim-like experience while in the REPL, which led me to write VimBindings.jl, which gives you the vim basics in the REPL. It’s not a complete vim implementation, but it covers enough for me to use day to day.

Thank you Pedro, this seems very useful once one knows what to do with the REPL. I guess my main point here is that using the REPL with my current workflow forces me to litter the file containing the main portion of the code with display calls. So my question to you is: how do you normally visualise things in the REPL? Do you give up using modules?

Thank you @caleb-allen. It would help a lot if you could elaborate on your statement ‘and then iteratively test the functions as I build them out in the REPL’. Say you have just added the line y = exp(x) to the following file, named simple.jl

module simple
x = 0
y = exp(x)
end

You have the REPL open. What do you do to check that y has been correctly assigned the value 1.0? One way to do that is to add the extra line display(y) and then invoke include("simple.jl"). This solution forces me to litter the code with all the ‘display’ statements.

Another one is to yank the two inner lines and send them to the REPL. This also feels a bit strange to me, because I have little control over what is stored in the REPL (maybe I am overwriting a variable x that will be important in one of my later evaluations).

I know these are very elementary questions, and they may be dictated by my being a Matlab user transitioning to Julia, but both of these workflows seem slightly unsatisfactory. Do you go for the second option? Is that what you do, or am I missing something?

No worries, these are all great questions and certainly ones that are best answered at the beginning of your Julia journey; it’s a wonderful paradigm that I think you will enjoy. And regarding your experience so far, I certainly agree that littering the code with display[1] is overly tedious; hopefully the information below can help you get to a smoother workflow.

It would help a lot if you could elaborate on your statement ‘and then iteratively test the functions as I build them out in the REPL’.

Of course. And before I get going, I’ll say that this discussion is as much about how your code is organized (as files and modules) as it is about the REPL itself; the REPL is the place where evaluation actually happens, and the variety of workflows people have usually falls out of that fact. In other words, despite the many different approaches you’ve already seen (and will inevitably see), you will find that they almost always boil down to some variation of “edit there, evaluate here”, where “there” is a text editor of some sort and “here” is the REPL.

Say you have just added the line y = exp(x) to the following file, named simple.jl

module simple
x = 0
y = exp(x)
end

You have the REPL open. What do you do to check that y has been correctly assigned the value 1.0? One way to do that is to add the extra line display(y) and then invoke include("simple.jl"). This solution forces me to litter the code with all the ‘display’ statements.

Modules are a useful abstraction to aggregate related features under one umbrella, but what you likely want here is a function:

# simple.jl
function simple()
    x = 0
    y = exp(x)
end

Then, from the REPL, you can call the function simple directly; the function is evaluated and the value of its last expression, here y = exp(x), is returned.

julia> include("simple.jl")
simple (generic function with 1 method)

julia> simple()
1.0

Let’s say you want to edit the function in the file, for example changing x to 5:

# simple.jl
function simple()
    x = 5
    y = exp(x)
end

Once that is saved, include the file again and the changes are applied:

julia> include("simple.jl")
simple (generic function with 1 method)

julia> simple()
148.4131591025766

From this you can build significant complexity while interactively checking the functions one by one, writing new functions which use your previously defined functions, etc. This is the essence of the REPL workflow.
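For instance, a hypothetical extension of simple.jl that builds on the existing function:

# simple.jl
function simple()
    x = 5
    y = exp(x)
end

# a new function that uses the previously defined one
function scaled(a)
    a * simple()
end

After another include("simple.jl"), calling scaled(2) returns 296.8263182051532.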

Lastly, it can become tedious to call include over and over again, and this is where the package Revise comes in. When you use Revise’s function includet (“include and track”) instead of the built-in include, Revise keeps track of which files have changed and automatically reloads the updated definitions. Then your workflow is simply to edit and save the functions, and the next time you run them the REPL uses your newest version, with no manual reloading necessary.
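A minimal sketch with the same simple.jl as above:

julia> using Revise

julia> includet("simple.jl")

julia> simple()
148.4131591025766

Now edit simple.jl in your editor (say, change x back to 0) and save; no further include call is needed before the next run:

julia> simple()
1.0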


  1. If you find yourself needing to do this in large functions, check out Julia’s @show x; it’s very convenient to place wherever you want without altering an expression’s behavior.
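For instance, a minimal sketch inside the function from above:

function simple()
    x = 5
    @show x   # prints "x = 5" and returns the value, so behavior is unchanged
    y = exp(x)
end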

This is wonderful, I get it, thank you so much @caleb-allen!


Exactly, I concur with @caleb-allen that includet("simple.jl") is a very useful approach, even more so when you have a module that includes many files which you modify.

I would go as far as saying that developing a large codebase without Revise is quite cumbersome, but that’s my point of view.


Thank you @rveltz this is much appreciated.