I wrote my first program in Julia today, which is a solution to Advent of Code day 15.
I wrote it in a functional style at first and was very surprised by how slow it ran.
I narrowed the problem down to the score function. Here are three versions of it:
# Using .|> and |>
score_pipe((boxnum, box)::Tuple{Int, Vector{Lens}})::Int =
    box |> enumerate .|>
    (((slot, lens)::Tuple{Int, Lens},) -> boxnum * slot * lens.focal) |>
    sum
# Using sum(f, iter)
score_sum_fn((boxnum, box)::Tuple{Int, Vector{Lens}})::Int =
    sum(enumerate(box); init=0) do (slot, lens)::Tuple{Int, Lens}
        boxnum * slot * lens.focal
    end
# Using a for loop
function score_loop((boxnum, box)::Tuple{Int, Vector{Lens}})::Int
    result = 0
    for (slot, lens) in enumerate(box)
        result += boxnum * slot * lens.focal
    end
    result
end
Why is there such a huge difference between version 3 (the loop) and version 1 (the pipeline)? Is it bad practice to write functional code in Julia? That would be unfortunate.
Why do any of these allocate at all? They are simply iterating over a vector and accumulating everything into a single number; there shouldn't be any heap allocations.
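For anyone who wants to reproduce this, a stripped-down Lens with just the focal field is enough to run the snippets above, and @allocated can be used to check a single warm call:

struct Lens
    focal::Int
end

box = [Lens(1), Lens(2), Lens(3)]

score_loop((4, box))              # first call compiles the method
@allocated score_loop((4, box))   # heap allocations of a warm call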
@time is a very naive timing mechanism. It includes everything, no matter whether it's compilation or GC work that just happened to occur during its invocation. It also only measures once, so it is subject to noise on tight measurements.
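BenchmarkTools.jl is the usual alternative: it runs many samples and lets you interpolate the inputs with $ so the cost of looking up globals doesn't pollute the measurement. A minimal sketch, reusing the functions and the box from above:

using BenchmarkTools

# $(...) splices the already-constructed tuple into the benchmark,
# so only the function call itself is timed.
@btime score_loop($((4, box)))
@btime score_pipe($((4, box)))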
Also, someone correct me if I am wrong, but the type annotations in your code are completely unnecessary here. They can be useful to restrict the valid input types of the function arguments, but the other ones are just a distraction.
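For example, the loop version can be written without any annotations and Julia will still compile a specialized method for the concrete types it is called with (the name here is hypothetical, just to keep it separate from the original):

# No annotations: destructuring in the signature still works,
# and specialization happens per concrete call signature regardless.
function score_loop_plain((boxnum, box))
    result = 0
    for (slot, lens) in enumerate(box)
        result += boxnum * slot * lens.focal
    end
    return result
end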
It helps to distinguish two notions here: type stability (the return type of a function is determined by its argument types) and type groundedness (every intermediate variable gets a concrete inferred type).
Type stability (a concretely inferred or asserted return type) is enough to keep an instability from propagating to callers. Propagating instabilities can of course be quite bad.
It is enough for the functions that take up most of your runtime to be grounded; then instabilities elsewhere are not a major runtime performance concern (though they can still negatively impact compile times!). E.g., the inside of a function barrier is grounded even when the code calling it isn't.
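A minimal sketch of that last point (all names here are made up): the caller pulls a value out of a Dict{String, Any}, so it is not grounded, but the hot loop sits behind a function barrier and gets compiled for the concrete element type:

# The caller is not grounded: `xs` is inferred as Any.
function process(data::Dict{String, Any})
    xs = data["values"]
    return hot_sum(xs)    # function barrier: dynamic dispatch happens once, here
end

# `hot_sum` is compiled separately for the concrete type of `xs`,
# so the loop body is fully grounded and fast.
function hot_sum(xs)
    s = zero(eltype(xs))
    for x in xs
        s += x
    end
    return s
end

process(Dict{String, Any}("values" => collect(1:1000)))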