It can be hit or miss. Note that type stability can also have a huge impact on compile times. For example, on Julia 1.7, the type-unstable `bar!(::Nothing)` below takes about 9 seconds to compile:
```julia
julia> using RecursiveFactorization

julia> foo!(A::Matrix) = RecursiveFactorization.lu!(A)
foo! (generic function with 1 method)

julia> foo!(A) = A
foo! (generic function with 2 methods)

julia> bar!(x) = foo!(Ref{Any}(x)[])
bar! (generic function with 1 method)

julia> t = time_ns();

julia> @time bar!(nothing)
  0.000068 seconds (416 allocations: 29.359 KiB, 86.92% compilation time)

julia> 1e-9*(time_ns() - t)
9.088390749
```
Make `bar!` type stable, and it’s more than 400x faster to compile:
```julia
julia> using RecursiveFactorization

julia> foo!(A::Matrix) = RecursiveFactorization.lu!(A)
foo! (generic function with 1 method)

julia> foo!(A) = A
foo! (generic function with 2 methods)

julia> bar!(x) = foo!(x)
bar! (generic function with 1 method)

julia> t = time_ns();

julia> @time bar!(nothing)
  0.000000 seconds

julia> 1e-9*(time_ns() - t)
0.02120855
```
Note that `@time` unfortunately forces a lot of compilation that it doesn’t itself time, so you need to copy/paste the entire surrounding block (hence the `time_ns()` calls) to actually measure compilation.
There are all sorts of opportunities for problems to creep in. E.g., recently I used `Returns` without realizing that `Returns <: Function`, which resulted in a 3x compile-time regression and a 2x runtime regression. Branches returning different types are, of course, another ubiquitous problem.
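A minimal sketch of that last pitfall, with hypothetical function names (not from the code above):

```julia
# A branch whose arms return different types makes the caller see a
# Union, which propagates instability into everything downstream.
unstable(x) = x > 0 ? 1 : 1.0     # typically inferred as Union{Int64, Float64}
stable(x)   = x > 0 ? 1.0 : -1.0  # both arms Float64: inferred as Float64

# `Base.return_types` shows what inference concludes for a given signature:
Base.return_types(unstable, (Int,))
Base.return_types(stable, (Int,))
```

Tools like `@code_warntype` flag the unstable version in red; the fix is usually making both arms agree (e.g. `1.0` instead of `1`).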
If you depend on a lot of packages and work on a large codebase, it is unfortunately difficult to avoid.
And perhaps the runtime of the code isn’t important, but type instabilities sometimes have a substantial impact on compile-time performance, making latency unacceptably slow.
While I’m sometimes the source of the problem – I’ve written closures and unwittingly passed functions as arguments – given how much time I also spend looking for type instabilities in other people’s code, I’d prefer that they pursue it obsessively. =)
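The closure pitfall looks something like this sketch (hypothetical names):

```julia
# A captured variable that is reassigned inside a closure gets stored
# in a Core.Box, so `r` is inferred as Any in the loop below.
function boxed_sum(xs)
    r = 0
    foreach(x -> (r += x), xs)  # `r` captured and mutated: boxed
    return r
end

# Workaround: thread the state explicitly instead of mutating a capture.
unboxed_sum(xs) = reduce(+, xs; init = 0)
```

Both return the same answer, but `@code_warntype` shows `Core.Box` and `Any` in the first; the performance annotations in the manual cover this under "performance of captured variables".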
Preferably as a matter of principle, rather than benchmark-driven.
Why?
The same changes that dropped OrdinaryDiffEq’s compile time from 22 to 3 seconds introduced a seemingly innocuous type instability (inside a function barrier) resulting in a 50% increase in compile time of our code using OrdinaryDiffEq.
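By "function barrier" I mean the usual pattern, sketched here with hypothetical names:

```julia
# The setup computes a value of uncertain type; the kernel behind the
# barrier is compiled separately for each concrete type it receives,
# so its body is type stable even though the caller is not.
unstable_setup(flag) = flag ? 1.0 : Int32(1)   # Union{Float64, Int32}

kernel(x) = x + x                              # concrete once dispatched

compute(flag) = kernel(unstable_setup(flag))   # dynamic dispatch at the barrier
```

The barrier contains the instability at runtime, but as the OrdinaryDiffEq example shows, the compiler may still pay for it: every concrete type reaching the barrier triggers its own specialization.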
That was unfortunately far from the only example…
Something that works well or improves the situation in a small example can and does blow up and go the other way in a larger example.
EDIT:
But I do agree when it comes to scripts vs libraries.
Libraries are hard to view in isolation, but scripts and end-user apps aren’t.