@PetrKryslUCSD, thank you for posting benchmarks meant to show that the example was harmless.
Posting that as an example of their safety is a perfect demonstration of the danger of closures, and why they should be avoided.
I can reproduce them (using a tuple of vectors):
```julia
using BenchmarkTools

fix_simple(x, i) = map(Base.Fix2(getindex, i), x)
clos_simple(x, i) = map(v -> v[i], x)

tuple_of_vectors = Tuple(rand(4) for _ in 1:10);

@benchmark fix_simple($tuple_of_vectors, 1)
@benchmark clos_simple($tuple_of_vectors, 1)
```
```julia
function clos(x, i)
    i += 1
    i -= 1
    res = map(v -> v[i], x)
    i += 1
    i -= 1
    res
end

function fix(x, i)
    i += 1
    i -= 1
    res = map(Base.Fix2(getindex, i), x)
    i += 1
    i -= 1
    res
end

@benchmark fix($tuple_of_vectors, 1)
@benchmark clos($tuple_of_vectors, 1)
```
For the simple example, I get
```julia
julia> @benchmark fix_simple($tuple_of_vectors, 1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  4.875 ns … 16.417 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     4.959 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.978 ns ±  0.208 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  [histogram elided]
  4.88 ns        Histogram: frequency by time        5.08 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark clos_simple($tuple_of_vectors, 1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  4.750 ns … 15.083 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     4.875 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.899 ns ±  0.222 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  [histogram elided]
  4.75 ns        Histogram: frequency by time           5 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.
```
But as soon as we make it a little more complicated (and our function is still unrealistically trivial compared to where such code may actually be used):
```julia
julia> @benchmark fix($tuple_of_vectors, 1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  4.834 ns … 19.834 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     4.959 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.977 ns ±  0.212 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  [histogram elided]
  4.83 ns        Histogram: frequency by time        5.08 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

julia> @benchmark clos($tuple_of_vectors, 1)
BenchmarkTools.Trial: 10000 samples with 229 evaluations.
 Range (min … max):  327.690 ns …   3.281 μs  ┊ GC (min … max): 0.00% … 87.27%
 Time  (median):     331.332 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   337.029 ns ± 104.356 ns  ┊ GC (mean ± σ):  1.08% ±  3.14%

  [histogram elided]
  328 ns       Histogram: log(frequency) by time        360 ns <

 Memory estimate: 272 bytes, allocs estimate: 12.
```
If people test code at all (unlikely), they'll try microbenchmarks and prove their closure is perfectly safe.
Maybe they'll even put it in a simple function, and prove it is safe.
Then they'll put it in an actual, realistic piece of code, where it tanks performance and no one even knows.
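You don't even need a benchmark to catch this; inference already shows the damage. This is an assumed workflow, not something from the thread: `@code_warntype clos(tuple_of_vectors, 1)` would show the captured `i` as a `Core.Box` interactively, and programmatically the boxed capture surfaces as a non-concrete inferred return type:

```julia
function clos(x, i)    # the closure captures `i`, which is reassigned -> boxed
    i += 1; i -= 1
    res = map(v -> v[i], x)
    i += 1; i -= 1
    res
end

function fix(x, i)     # Base.Fix2 carries `i` as a plain struct field -> no box
    i += 1; i -= 1
    res = map(Base.Fix2(getindex, i), x)
    i += 1; i -= 1
    res
end

tv = Tuple(rand(4) for _ in 1:10)

# With the box, `v[i]` sees an untyped index, so the element type degrades
# to `Any`; `fix` stays fully inferred.
T_clos = only(Base.return_types(clos, Tuple{typeof(tv), Int}))
T_fix  = only(Base.return_types(fix,  Tuple{typeof(tv), Int}))
println(isconcretetype(T_clos))
println(isconcretetype(T_fix))
```

Both functions still return identical values; the only difference is what the compiler can prove about them, which is exactly why the problem hides in microbenchmarks.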
And, as the `let` example from my previous comment shows, aside from randomly tanking performance depending on where you place your closure, it can also randomly start giving you incorrect answers.
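For reference, the defensive `let` pattern applied to `clos` looks like this (a sketch; `clos_let` is an illustrative name, and the pattern is the one documented in the Julia manual's performance tips on captured variables). Rebinding `i` inside `let` gives the closure a capture that is never reassigned, so the compiler doesn't box it:

```julia
# Same body as `clos`, but the closure only captures the fresh `let` binding.
function clos_let(x, i)
    i += 1
    i -= 1
    res = let i = i          # fresh local, never reassigned -> no Core.Box
        map(v -> v[i], x)
    end
    i += 1
    i -= 1
    res
end
```

The cost is that you must remember to do this everywhere a captured variable is also assigned, which is precisely the kind of silent footgun being complained about.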
Closures are a plague on Julia.
EDIT:
@greatpet's solution also works, and would probably be acceptable, and is definitely preferred over a closure without the `let`. The point is defensive programming, so code doesn't accidentally get awful performance or wrong answers, and no one later has to spend hours combing over a huge code base to find where all the allocations, type instabilities, etc. come from.