If you're very used to working within IDEs my approach might not win you over, but I'll offer it anyway: I'm a lot more comfortable working in a "CLI-ey" way. This explains my motivation well (admittedly, if you do any I/O at all then there's external state that might change your program, but as far as your imports go it's deterministic).
The first thing I do is put this as my shebang:

```
#!/usr/bin/env -S julia --project=.
```
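For concreteness, the top of such a script might look like this; the file name, the Plots dependency, and the chmod step are just an illustrative setup, not part of the recipe:

```julia
#!/usr/bin/env -S julia --project=.
# script.jl: `chmod +x script.jl` once, then ./script.jl is the whole workflow.
# --project=. activates the environment (Project.toml) of the directory it's launched from.

using Plots

x = range(0, 2π; length = 100)
savefig(plot(x, sin.(x)), "sine.png")
```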
And I keep in my head that ./script.jl is always my entry point. Working this way pushes against using debuggers and towards using logging.
I am much more experienced debugging with prints than with debuggers, so there's some bias here, but I think it's better! You can save the output and share it or compare it. I've isolated some deep, deep numerical bugs with little knowledge of my libraries simply by adding prints, turning log levels way up, and then diffing versions of my output until I pinned down where the versions diverged. I don't think I could have done that with a debugger; it would have, at least, taken much, much longer.
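To make "turning log levels way up" concrete (the function here is made up, not from any real code of mine): @debug messages are hidden by default, so they can live in the code permanently and only show up when asked for.

```julia
using Logging   # @debug itself lives in Base; Logging has the extra knobs if you need them

function step(i, M)
    A = 3 * M[i]^2 + 4 * M[i]
    @debug "step" i A    # silent by default; enable with JULIA_DEBUG=all (or =Main)
    return A
end
```

Then a run like JULIA_DEBUG=all ./script.jl 2> run_v1.log captures the messages (they go to stderr by default), and diffing that against another version's log is exactly the workflow I described above.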
You must have the habit from MATLAB where every line that doesn't end in ; prints its value. In Julia, the REPL does that by wrapping an implicit display() around every top-level value, and I imagine that habit is what's keeping you working in the REPL in VSCode. But you can just re-add that feature in the non-interactive case!
So I would write e.g.

```julia
for i = 1:10
    A = 3*M[i]^2 + 4*M[i]
    display(A)  # DEBUG
end
```
I know this "pollutes" your code too, but I think it's worth the trade-off, and that's why I tag them "DEBUG": so I can either search-and-replace them out later or know to filter them out mentally while I'm skimming code.
@infiltrate (or breakpoint() in Python) is useful for me when I need to figure out how to write an expression in the middle of some deeply nested code that I've already written. But even there, as soon as I figure it out I display() it (or print(repr(...)) in Python) so that as I test my code I can see if my new expression is working.
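For reference, a minimal sketch of that pattern, assuming Infiltrator.jl is in the project (the function itself is made up):

```julia
using Infiltrator

function innermost_step(x)
    y = 3 .* x .^ 2 .+ 4 .* x
    @infiltrate        # pauses here with an infil> prompt; x and y are in scope
    return sum(y)
end

innermost_step(rand(5))
```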
The main draw of Jupyter/Pluto is that they also add implicit display()s, in a file you can share. I kind of wish there were a way to turn that on in the Julia CLI so that julia script.jl behaved like MATLAB. Of course the other advantage is that Jupyter's display() is augmented to show interactive graphs and tables, which is something the CLI will never be able to do.
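(For what it's worth, here's a rough sketch of the kind of wrapper I mean; it's entirely hypothetical, not an existing tool, and unlike the REPL it won't suppress values on lines ending in ; :)

```julia
# displayall.jl: hypothetical runner, `julia --project=. displayall.jl script.jl`,
# which evaluates each top-level expression of script.jl and display()s its value.

function run_displaying(path::AbstractString)
    toplevel = Meta.parseall(read(path, String))   # Expr(:toplevel, ...)
    for ex in toplevel.args
        ex isa LineNumberNode && continue          # skip source-location markers
        val = Core.eval(Main, ex)
        val === nothing || display(val)            # mimic the REPL's implicit display
    end
end

run_displaying(only(ARGS))
```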
Debugging with prints only works if your startup time isn't that long, which is what motivated me to post this thread. Looking at the Related threads, I'm not the only one who comes in with the CLI instinct, runs into things like the relatively long time-to-first-plot in Plots.jl, and wonders what we've been doing wrong.
What I'm hearing is that people mostly lean on not restarting Julia, by using Pluto, the Julia process VSCode spawns, or Revise.
I think adapting my $ python -i script.py habit to julia> include("script.jl") splits the difference well enough. As Nathaniel mentioned in the thread I linked, it's not 100% clean (you could e.g. mess up some internal setting inside Plots that can only be cleared by a full restart), but at least this way a full restart is reliable. Since I'm always aiming to get my final results from ./script.jl I make sure to test that periodically, and during development include("script.jl") is 99% the same. There are no extra commands that need running, environment variables that need setting, virtualenvs that need loading, or code servers that need launching.
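Concretely, nothing new here, just the two invocations side by side:

```
# the "real" entry point, which I re-test periodically:
$ ./script.jl

# during development, my stand-in for `python -i script.py`:
$ julia --project=.
julia> include("script.jl")   # edit the file, include again, no fresh startup cost
```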