Interactive prototyping workflows

Hi all,

I’m writing data analysis scripts. They usually look something like:


using Plots, XLSX, DataFrames

# load some data

# drop mangled parts

# calculate some statistics

# plot some stuff
plot(x, y,
  title="....",
  xlabel="...",
  ylabel="...")
png("savefig.png")

I’m wondering what people do as they’re developing analysis scripts. What are your workflows that work for you?

Workflow 1

I currently intentionally do everything in Main. My scripts and their outputs are meant to be read by the humans on my teams, they’re not components in larger apps, and the extra nesting would be distracting. And doing everything in Main plays nice with Literate.jl (I don’t think Literate even works without working in Main, since it needs to intercept all the implicit display() calls, right?) when I’m ready to start publishing results.

I’m coming from python and there I make heavy use of python -i. There, I also write everything in the global scope, run it with python -i, and then I have my datasets in the REPL to explore. When I write a line I like, I copy it into my script and re-run it.

A big advantage of this workflow in my experience is that the code is precisely what it says. I can hand my script to someone else and they will be able to run it. There’s no tricky IDE-specific dependencies that can creep in because there’s only one entry point.

Julia also has -i which is great. My workflow translates directly.
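For concreteness, the invocation (using the projet.jl script from later in this post):

```shell
# run the script top to bottom, then stay in an interactive REPL
# with all of its globals still defined (mirrors `python -i`)
julia -i projet.jl
```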

But this workflow is kind of slow because it has to recompile everything from scratch every time I run my scripts. Python has slow startup too, but Julia quickly gallops ahead the more libraries I pull in, and the wait between iterations breaks my flow.

Workflow 2

I know about Revise but it doesn’t update variables, which is a big problem when my work is analysing data. Naively, that would force me to change the way I write my analyses, maybe pushing me towards a functional style where literally every step has a function. That would be clean, but sort of unnatural for the sorts of semi-interactive explainer/exploratory work I’m trying to do (again: with working teams. I can’t code anything far out that needs an appreciation for the Y combinator).

The best I’ve found is this tip: (I don’t have link rights yet but this is the citation: hxxps://discourse.julialang.org/t/critique-my-workflow-for-small-models-in-julia/105089):

# projet.jl

function main()
    @eval Main begin
        df = DataFrame(XLSX.readtable("data.xlsx", "pages"))

        HIVER = [1, 2, 3, 4]
        ÉTÉ = [5, 6, 7, 8]
        AUTOMNE = [9, 10, 11, 12]
    end
end

main()

This, cleverly, runs the code in main() in the Main scope instead of in its own local scope.
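In isolation, the trick looks like this (a minimal sketch; `setup` and `z` are made-up names):

```julia
# Minimal sketch of the @eval Main trick: the assignment inside the
# function body lands in Main's global scope, not in a local scope.
function setup()
    @eval Main begin
        z = 42   # becomes the global Main.z
    end
end

setup()
println(Main.z)   # 42; at the REPL you can just type `z`
```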

To work with it I have to launch it like:

julia> using Revise

julia> includet("projet.jl")

julia> HIVER
4-element Vector{Int64}:
 1
 2
 3
 4

I don’t get fully automatic updates when I change code, but I can rerun main() without losing my Julia session. For example, suppose I decide December should count as winter; I just edit it in my editor

# projet.jl

function main()
    @eval Main begin
        df = DataFrame(XLSX.readtable("data.xlsx", "pages"))

        HIVER = [12, 1, 2, 3]
        ÉTÉ = [4, 5, 6, 7, 8]
        AUTOMNE = [9, 10, 11]
    end
end

main()

And it’s a snap to rerun it because I save all the time spent compiling library functions.

julia> main()

julia> HIVER
4-element Vector{Int64}:
 12
  1
  2
  3

I like this because:

  • it’s fast

I don’t like:

  • that I have to adjust how I launch the code. It’s extra cognitive load for me and for anyone I teach it to
  • that I have to adulterate the code
  • that it doesn’t work with Literate.jl

Workflow 3

Literate.jl suggests this build harness when working with Revise:

using Revise
using Literate
        
entr(["analyse.jl"]) do
    try
        Literate.markdown("analyse.jl", "build", credit=false, execute=true, flavor=Literate.CommonMarkFlavor())
    catch e
        @warn "build failed:" exception = (e, catch_backtrace())
    end
end

This is fine but it’s getting pretty far from the simplicity of julia -i.

I like this because:

  • it’s fast

I don’t like:

  • it uses a different entrypoint than normal, which makes me worry about the risk of a dev/prod gap
    • though in the case of Literate, there already is a different entrypoint
    • and if I need to examine variables I can @bp to drop in.
    • but it feels a bit awkward to me.

Workflow 4

I experimented with PackageCompiler.jl:

julia -e 'using PackageCompiler; create_sysimage(["Plots", "XLSX", "Distributions", "Statistics", "DataFrames", "Literate", "MarkdownTables"], sysimage_path="sys.so", precompile_execution_file="precompile.jl")'

Then iterate with:

julia -J sys.so -i projet.jl

I like this because:

  • I use the same, unadulterated entrypoint as I would by default. There’s no need to adjust my code at all to play nice with the dev environment and it should run identically for anyone I share it with.

I don’t like this because:

  • it (potentially) requires a new sysimage for each project
  • building a sysimage is REALLY heavy; it takes something like 2 GB of RAM and over half an hour on my machine, which makes it especially hard to iterate on, given that:
  • getting precompile.jl right is tricky

This is something I would invest in for deploying to a cluster to crunch some numbers, or maybe to work on a bunch of related projects. It’s just, the time it takes to compile puts me off exploring it.

Workflow 5

I read (hxxps://discourse.julialang.org/t/output-doesnt-show-plots/111092/14) that the vscode plugin has a magic REPL that allows highlight-to-run code.

I imagine struggling to be comfortable in this:

  • it depends on a specific IDE; I use vscode too, but I also like to be able to work with notepad / gedit / vi in a pinch.
  • it allows running code out of order, which just confuses me and leads to unreproducible results and/or bugs

Workflow 6

I guess people use Jupyter? Which holds the Julia session open as its “kernel”?

This also allows (even encourages) running code out of order.

:cat: :penguin: :cat: :penguin: :cat:

What does everyone do to develop their scripts? I can’t be the only one struggling with analysis scripts. There are a lot of people doing science and engineering in Julia, and I wonder what you all do with your “I’ll just whip it up in matlab” instincts.

Thanks for sharing your tips!

1 Like

Welcome to the discourse!

I think Revise should be the least intrusive for your workflow.

It does if you set __revise_mode__ = :eval (thanks Google AI tips, reference: Configuration · Revise.jl).
Revise must be pretty sick in 1.12, now that constant redefinition has clean semantics. I haven’t switched to 1.12 myself yet, but I’m seriously considering it for that change alone.

For notebook workflow, I use Pluto.jl. For scripting, I find it more convenient than REPL+text file. The downside is that Pluto files require some assembly to run as standalone scripts due to internal package manager. On the other hand, it does not suffer from hidden state issues (especially those arising from defining something and later deleting the cell) as badly as Jupyter.

1 Like

I’m not sure if my workflow meets all your requirements, but I will briefly describe it. I currently use VS Code (and AI-focused alternatives to it) and Quarto. I write my Julia code in .qmd files and can execute each cell in the desired order in the REPL, even mixing cells from different files. I also use .qmd files to combine code, markdown, and almost anything else. For rendering to PDF, I use a modified Quarto book template along with a modified KOMA-Script template and a modified bibliography (I rarely write purely scientific documents or books, so I want the output to look a bit more … normal). I know there is currently a trend toward using Typst instead of LaTeX; however, I am not sure there are Typst templates as comprehensive as KOMA-Script, and there are some limitations when using Typst with Quarto.

Advantages: i) comp-neuro-aided programming, ii) flexibility when executing and testing code, iii) the ability to mix code and text, iv) stunning PDF output in terms of typography and very good-looking HTML. Disadvantages: i) it’s not very portable because of Quarto, ii) rendering to PDF takes some time, iii) template modifications take time, iv) I’ve read that “Typst is better suited than LaTeX for sharing with AI. […] and better for the environment”; I can’t comment on that as of now.

Jupyter can be quite heavy when a notebook contains many cells. Pluto is native to Julia. BonitoBook looks like an emerging star [ :- ) ]. I wrote this from the perspective of interactive analysis of static data. For near-real-time work, I would recommend the Julia-native Genie Framework or something like it (Pluto and BonitoBook might be very useful as well).

2 Likes

I did not see this! I hope Revise can advertise that way more prominently. If that works then yes, I think that would suit my workflow just fine.

I’m still curious about other people’s workflows, so keep them coming. If Pluto is reliable about always turning code to results the same way (unlike Jupyter) then I could get into it, and I’m curious to see what the rest of the scene is like :slight_smile:

I’m curious if you’ve read the docs for includet (full disclosure, I haven’t until today)

By default, includet only tracks modifications to methods, not data. See the extended help for details. Note that this differs from packages, which evaluate all changes by default. This default behavior can be overridden; see Configuring the revise mode.

1 Like

I remember that first phrase, but I guess I blanked out on “Configuring the revise mode”. Thank you for digging that out.

I’ve tried it out :slight_smile:

# test.jl

module MyMain
  __revise_mode__ = :evalassign # ??? https://timholy.github.io/Revise.jl/stable/config/#Configuring-the-revise-mode

  X = [9,76,28,988]
  Y = collect(98*x^2 + 3*x - 3 for x in X)

  export X, Y

  println(Y)
end
julia> includet("test.jl")  # note: this prints Y's initial value
[7962, 566273, 76913, 95665073]

julia> using .MyMain

If I edit X = [9,0,28,0] then X gets updated

julia> X
4-element Vector{Int64}:
  9
  0
 28
  0

but Y doesn’t change, even though it depends on X:

julia> Y
4-element Vector{Int64}:
     7962
   566273
    76913
 95665073

It’s only updated if I edit its definition directly: Y = collect(97*x^3 + 3*x - 3 for x in X) =>

julia> Y
4-element Vector{Int64}:
   70737
      -3
 2129425
      -3

I can add new variables:

  D = [88, 99, 111]
  export X, Y, D
julia> D
3-element Vector{Int64}:
  88
  99
 111

but I can’t delete them; reverting the change back to export X, Y leaves D in memory

julia> @which D
Main.MyMain

julia> D
3-element Vector{Int64}:
  88
  99
 111

Also, at no point does the println() get run, because it’s “not an assignment”. I love my prints! I use them all the time.

Revise isn’t running my script straight through. It’s trying to save time by being clever, but I wish it wouldn’t because then we’re back to hidden notebook state. Hidden state is 90% okay so long as the script runs fully so that it can rederive all the values it cares about.

Plus with “classic Revise” I need to wrap my code (unfortunately not Literate compatible)

module X
  __revise_mode__ = :evalassign
  ...
end

# ...
julia> includet("test.jl")
[7962, -3, 76913, -3]

julia> using .MyMain

whereas with the @eval Main method I have to wrap it in

function main()
@eval Main begin
...
end
end
main()

# ...
julia> includet("test.jl")
julia> main()
julia> main()   # once for each edit
julia> main()
julia> main()
julia> # ....

Which is about equal boilerplate, and this way I get my prints out. So it’s cool that Revise has options to make things more automatic but I’m not sure they help me :thinking:

I feel like I must be bringing in biases from other languages and misunderstanding something fundamental about the workflows people have for Julia. I’ve been assuming that julia test.jl is the one true way to run code, but it’s not: includet() and entr() and Pluto all launch it differently. If that’s not how people are supposed to work with Julia, I can adapt. But I would also be curious to know if maybe there just isn’t a consensus on how best to launch/test/iterate/explore code.

Since you identified what background you have, it’s easier to make a specific comparison.

CPython is interactive and fast (yes it is!) because it’s an interpreted language that wraps other languages’ optimized code, compiled and cached ahead of time. The interpretation layer is indeed not optimal, but that wouldn’t matter when 99% of your runtime is running that cached code. If you were JIT-compiling that code instead of loading from a cache, you’d experience the same compilation latency in Python as you are with julia -i now. That’s actually possible, there are Python implementations or packages with JIT compilers for when loading cached code isn’t enough.

So the obvious question is the inverse: how do you cache compiled Julia code? In principle, it’s not different from how other languages do it; on some level, you make the effort of specifying the exact versions of dependencies and informing the compiler to compile some call signatures for methods. A Julia environment specifies dependency versions, and a package can specify call signatures to precompile (PrecompileTools.jl also helps) and cache for that environment. julia -i does default to an environment for that Julia version, but an input .jl file is insufficient for precompile routines. For one, it’s actually ambiguous what a method does without a known spot in a parent module; you mentioned you evaluate scripts in Main, but the timing can change what happens. Someone made an experimental package to precompile a lone script inside its own module, but that wasn’t kept up, let alone broadly used and tested. It’s also worth mentioning that caching compiled code doesn’t solve all latency. You still have to load that code and input data before your script gets to number-crunching, though that’s true for any language.
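To make the "specify call signatures to precompile" part concrete, here is a hedged sketch of what a small project-local package can do with PrecompileTools.jl (the module name and `summarize` function are made up for illustration):

```julia
# Sketch: a tiny helper package using PrecompileTools.jl so the call
# signatures a script actually hits get compiled once, at precompile
# time, and cached on disk instead of re-JIT-compiled every run.
module AnalysisHelpers

using PrecompileTools

summarize(xs::Vector{Float64}) = (minimum(xs), maximum(xs), sum(xs) / length(xs))

@setup_workload begin
    xs = [1.0, 2.0, 3.0]      # tiny representative input
    @compile_workload begin
        summarize(xs)         # forces compilation of summarize(::Vector{Float64})
    end
end

end # module
```

When this lives in a real package, the `@compile_workload` block runs during precompilation and its compiled code is cached for the environment.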

If packages with custom precompile routines aren’t feasible to make or useful to load in script runs, I’d include scripts into separate modules and switch the REPL prompt between them (or includet them if you want some, though not all, edits of source files to evaluate automatically), keeping the session alive as long as I need them. If you can afford the overhead, you can even open several command-prompt instances to run separate Julia sessions; this gives extra flexibility because each session can only irreversibly load one version of a package (you can switch environments in a session, but it’ll warn you if a restart is needed to load the right version of a package).

How the REPL can navigate scripts in separate modules
julia> module Session1 end;

julia> module Session2 end;

julia> using REPL; REPL.activate(Session1)

(Main.Session1) julia> x = 1 # pretend this included a script
1

(Main.Session1) julia> using REPL; REPL.activate(Main.Session2)

(Main.Session2) julia> x = 2 # pretend this included a different script
2

(Main.Session2) julia> using REPL; REPL.activate(Main)

julia> Session1.x, Session2.x
(1, 2)

Bear in mind that keeping a script and its compiled code alive in a session is not the same thing as rerunning scripts in fresh sessions from the command line (load latency even in the best-case scenario), and you shouldn’t re-include scripts in the REPL in an attempt to replicate the effect (re-evaluations occur at different states, errors can happen, and overwritten methods force recompilation, defeating the purpose). Instead, refactor script subroutines into methods, like a @main, and call those again. Note that the vague recommendation to “avoid globals” in Python also covers this practice.

1 Like

It looks like you can benefit from Pluto because such re-evaluation is exactly what it does.

But it also needs some getting used to. Typically, the first limitation people notice is that you cannot define a variable in one cell and re-assign it in another one.
Another issue is that mutation of global variables is allowed, but the order of execution is unspecified if the mutations are in different cells. That’s not to say the result is always non-deterministic; it’s that sometimes you’ll need to be careful to get a deterministic result.

Based on your example:

X = [9,76,28,988]

Y = @. 98*X^2 + 3*X - 3

This produces Y = [7962, 566273, 76913, 95665073]


X = [9,76,28,988]

Y = @. 98*X^2 + 3*X - 3

X = [9,0,28,0]

This is disallowed due to multiple definitions of X


X = [9,76,28,988]

Y = @. 98*X^2 + 3*X - 3

X .= [9,0,28,0]

This is allowed, but the value of Y is nondeterministic: it can be either [7962, 566273, 76913, 95665073] or [7962, -3, 76913, -3].


To ensure correct ordering, you need to put the definition of Y either in the same cell as definition of X or in the same cell as mutation of X, or avoid mutations altogether:

# deterministic: Y = [7962, 566273, 76913, 95665073]
begin
    X = [9,76,28,988]
    Y = @. 98*X^2 + 3*X - 3
end

X .= [9,0,28,0]
# deterministic: Y = [7962, -3, 76913, -3]

X = [9,76,28,988]

begin
    X .= [9,0,28,0]
    Y = @. 98*X^2 + 3*X - 3
end
# Redefinitions within the same cell are also allowed
begin
    X = [9,76,28,988]
    X = [9,0,28,0]
    Y = @. 98*X^2 + 3*X - 3
end
1 Like

I sometimes use something like

my_data() = [1,2,3]

my_analysis(; data = my_data()) = sum(data)

And in the REPL just run my_analysis() which gets updated (with Revise) whenever I update the data or the analysis function.

In addition, by adding

function (@main)(args)
     my_analysis()
     return nothing
end

I run the final script with julia script.jl, for example to run this in a cluster submission.

1 Like

You can make it reliable, I guess there’s no question about it. The same applies to Jupyter, though the workflow will differ slightly in each case. I still maintain it’s worth exploring BonitoBook and Quarto, as workflows based on them each have their own advantages and disadvantages. It’s not quite clear where you are going, I mean the aim you want to reach. I believe it’s ultimately a matter of taste which one suits you best. I also maintain that in a significant number of cases, the kind of workflows you described in your initial post might find using Julia excessive (overkill). The analysis might benefit more from using a database and dedicated plotting software instead.

Hey @kousu, I haven’t seen any new posts from you lately. I just realized that my last post might have offended you. If that’s the case, I want to say that it was by no means my intention. Perhaps I misunderstood your initial question; based on my understanding at the time, I believed using tools like Grafana or Superset, or, for more customized charting, Genie, could be more productive for interactive workflows. Sorry again, English is not my native language.

I don’t really understand your constraints still.

  1. You want to run all code with julia my_script.jl
  2. You want to have a single script in the global level (not a main() function, and would like to use Literate.jl
  3. You want easy debugging. (For this, you should use @infiltrate from Infiltrator.jl).
  4. You don’t want to just do include("myscript.jl") because (a) that is less easy than julia myscript.jl at the terminal and (b) there might be residual issues with the global state being modified (which Revise.jl helps with, but not entirely in a script).

Yeah that is indeed a hard problem and I don’t know what the solution is.

Could I point out:

I put this together for, among other things, use cases that remind me of what you are describing:

When debugging, only the parts of the code that depend on your fix are rerun. In addition, even after a crash, saved backups are loaded back into memory.

When performing sweeps, only runs at parameter values not covered by previous simulations are actually executed.

You can bring in all the variables you define by issuing a make statement. If a variable has been evaluated in a previous or the current session, and no code edits were made to its recipe or those of its dependencies, this happens instantaneously. This should make generating figures for inclusion by Literate possible and efficient.

Thank you all for chipping in your ways! I am learning a lot just seeing what people use. I’m not married to any workflow; I’m just discovering that the ones I’ve been using all seem slow or kludgey, and I’m thinking they can’t be what the community does.

And no, @j_u, I absolutely am not offended. I just had some work to get done (in matlab, which crashed a lot on me while I was doing it :face_with_crossed_out_eyes: , I wish I could have done it all in Julia :stuck_out_tongue: ). Please know that I appreciate you taking the time to share your knowledge with a newbie here.

@lmiq’s approach makes sense and is relatively straightforward. I’m curious if it’s a pretty common approach.

@krcools I’ve starred your make project! It looks advanced. I especially like the sound of the @sweep macro.

I’ve also installed BonitoBook and Pluto and I’ve been giving them a bit of a test-drive. (And @Vasily_Pisarev I appreciate a lot that you included Pluto’s rough edges up front).

Ideally julia my_script.jl is the final product for sharing. Is that pretty typical? Or do people ship julia projects expecting them to be run with using Project; Project.run()?

include()

I never gave include("my_script.jl") enough of a chance. I think I must have read the REPL guidelines and skipped thinking too much about them because though the guide says

Explore ideas in the REPL. Save good ideas in Tmp.jl. To reload the file after it has been changed, just include it again.

it seemed like you’d have to edit every Tmp.say_hello() to say_hello() when copying between the REPL and the script or back again, and that seemed like a non-starter to me. So I jumped to the Revise section and followed their workflow tips, which never quite fit. Thank you for making me take a second look @pdeffebach. With:

module Helpers
  H(x) = 2x
  export H
end

using .Helpers
display(H)
display(H(8))
julia> include("mod.jl")
H (generic function with 1 method)
16

julia> H(7)
14

if I then change

--- a/mod.jl
+++ b/mod.jl
@@ -1,5 +1,5 @@
     module Helpers
-      H(x) = 2x
+      H(x) = 3x
       export H
     end

and reload it in the same session I get a scary error

julia> include("mod.jl")
ERROR: LoadError: UndefVarError: `H` not defined in `Main`
Hint: It looks like two or more modules export different bindings with this name, resulting in ambiguity. Try explicitly importing it from a particular module, or qualifying the name with the module it should come from.
Stacktrace:
 [1] top-level scope
   @ ~/mod.jl:7
 [2] top-level scope
   @ REPL[3]:1
in expression starting at /home/kousu/mod.jl:7

I vaguely remember trying this in the spring and getting scared off and not following this path any further. But actually the new code is loaded, it’s just not accessible unqualified

julia> Helpers.H(7)
21

julia> H(7)
ERROR: UndefVarError: `H` not defined in `Main`
Hint: It looks like two or more modules export different bindings with this name, resulting in ambiguity. Try explicitly importing it from a particular module, or qualifying the name with the module it should come from.
Stacktrace:
 [1] top-level scope
   @ REPL[4]:1

So I don’t really understand what using does. I was mentally translating it as from .X import *, but that’s not right. Instead it’s, I guess, adding a search path to the current scope; and moreover, instead of later usings shadowing earlier ones, any conflicts are an explicit error, so re-running the same using doesn’t work the way I thought.
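A minimal sketch of that behaviour with two throwaway modules (names are made up):

```julia
# Two modules exporting the same name: `using` doesn't shadow, it makes
# the unqualified name ambiguous (an error on use), which is exactly
# what re-including a redefined module runs into.
module A
x = 1
export x
end

module B
x = 2
export x
end

using .A
println(x)            # 1: only A exports x so far

using .B
# Unqualified `x` would now throw an ambiguity UndefVarError;
# qualified access still works:
println((A.x, B.x))   # (1, 2)
```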

However, I can get the behaviour I was expecting:

module Helpers
  H(x) = 2x
end

H = Helpers.H

display(H)
display(H(8))
julia> include("mod.jl")
H (generic function with 1 method)
16

julia> # edit mod.jl to change 2 -> 3

julia> include("mod.jl")
H (generic function with 1 method)
24

and it runs fast and identically on the CLI:

$ julia mod.jl 
H (generic function with 1 method)
24

So actually I think probably what I should do is:

  • get Revise out of my startup.jl; removing it already saves a noticeable amount of startup time because whatever it does to work its magic takes a sec
  • Stop using $ julia -i my_script.jl, because that loses JIT compilation each time
  • Switch to using julia> include("my_script.jl") in the REPL, which saves JIT compilation time
    • Put helper functions and (especially) structs in nested modules
    • Import helpers explicitly with A, B, C = HelperMod.A, HelperMod.B, HelperMod.C
    • Put all the main analysis steps at the bottom of my file (implicitly in Main)

H = Helpers.H isn’t as elegant as using .Helpers or using .Helpers: A, B, C because I need to list everything I want to use twice

A, B, C, D, E, F = Helpers.A, Helpers.B, Helpers.C, Helpers.D, Helpers.E, Helpers.F

but for analyses I probably won’t have a huge number of helpers so it should be manageable.
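If the double listing does get tedious, one hacky sketch (module and function names are made up) is to loop over the module’s exports and alias them with @eval:

```julia
module Helpers
    A(x) = x + 1
    B(x) = 2x
    export A, B
end

# Alias every exported name into the current global scope. Unlike
# `using .Helpers`, plain assignments can be re-run after the module
# is redefined, without binding conflicts.
for name in names(Helpers)
    name === :Helpers && continue      # skip the module's own name
    @eval $name = Helpers.$name
end

println(A(1))   # 2
println(B(3))   # 6
```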

Are there any hidden gotchas to this approach? Does anyone else use it? If not, why isn’t it mentioned in the juliadocs?

This is overkill. I think a better workflow, closer to what you want, is:

  1. Put your helper functions and methods in files that just define functions but don’t do work themselves. No need to put them in modules, though, it can all just be in a flat hierarchy defined in Main.
  2. Keep Revise.jl in your global environment.
  3. Use includet("my_script.jl"). includet will automatically keep track of different methods, such that when you play around at the REPL, the method definitions that you have will be up-to-date. Your global state might get messy (if you define a temporary variable x at the REPL, it will hang around). So you can go back and forth between playing around at the REPL and adding to your script.

EDIT: Point 3 appears to not be true. includet only tracks methods defined in the file, not via includes within that file.

This will not replicate python -i in the same way. There is not a fast workflow which does what you want, unfortunately.

1 Like

Yeah, that’s the problem!

Instead of include("my_script.jl") on every change I could includet("my_script.jl") once, but anything I want to edit live still needs to be in modules (modules in my_script.jl) for Revise to pick them up so the code ends up looking the same. And as I explored above, Revise is smart but not smart enough to recompute all downstream values; I think include() is the deterministic option.

Either way is typical and they’re not mutually exclusive; your script can have using Project at the top. Package imports are how cached compiled code is loaded (like in Python), and you may have to make a custom package to cache code if the primary packages don’t cache what you need.
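For the "make a custom package" route, a hedged sketch of the minimal setup ("MyAnalysis" is a made-up name):

```julia
# Turn analysis helpers into a tiny local package so that
# `using MyAnalysis` gets precompiled and cached by Julia.
using Pkg
Pkg.generate("MyAnalysis")   # writes MyAnalysis/Project.toml and MyAnalysis/src/MyAnalysis.jl

# Then, once per environment, make it loadable:
#   Pkg.develop(path="MyAnalysis")
# and your scripts can start with `using MyAnalysis`.
```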

1 Like

One thing where include() is inferior to Revise is tracking method definitions. It’s not a problem if you only write functions with duck-typed arguments, but once you start adding type constraints, you may again hit hidden-state issues.

Example:

# old code

function foo(x::Vector{Float64}, a::Float64)
    return x .+ a
end

# updated code #1 - you found signature too narrow
# a more specific method will be called for (::Vector{Float64}, ::Float64)
# but it does the same
# nothing bad so far
function foo(x::AbstractVector{<:Real}, a::Real)
    return x .+ a
end

# updated code #2 - turns out, adding only half of `a` gives better foo-ing
function foo(x::AbstractVector{<:Real}, a::Real)
    return x .+ (a / 2)
end

# now the methods diverge, and the specific method is hidden state

Maybe you can try something like this:

@static if isdefined(Main, :Revise)
    const include = Revise.includet
else
    const include = Base.include
end

Then, nested includes go through Revise if it’s loaded.

3 Likes

Just to explicitly show how the footgun described above works: you updated the code and re-ran the script, but the old, more specific method is still there (unless you are using Revise). Now you get:

julia> x = [1.0];

julia> foo(x, 1.0)
1-element Vector{Float64}:
 2.0

julia> foo(x, 1)
1-element Vector{Float64}:
 1.5
3 Likes

Earlier I mentioned that Revise automatically turns some edits of tracked source files into runtime evaluations of the edited expressions, intending to distinguish that from the often-assumed effect of a fresh run of the entire code in a fresh session. When a method is edited, Revise doesn’t just evaluate the new method definition; it also deletes the previous one first, precisely to deal with method signature changes. That is manually possible with some reflection and the internal Base.delete_method. As pointed out, rerunning an edited script with e.g. include won’t do any of that automatically.
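For completeness, a hedged sketch of doing that deletion by hand, reusing the foo example from above. Base.delete_method is internal, not public API, and may change between Julia versions:

```julia
# Removing a stale method by hand with the *internal* Base.delete_method.
foo(x::Vector{Float64}, a::Float64) = x .+ a          # old, too-narrow method
foo(x::AbstractVector{<:Real}, a::Real) = x .+ a / 2  # updated method

println(foo([1.0], 1.0))   # [2.0] -- the stale specific method still wins

m = which(foo, Tuple{Vector{Float64}, Float64})       # find the stale method
Base.delete_method(m)

println(foo([1.0], 1.0))   # [1.5] -- now dispatches to the updated method
```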

1 Like