Efficient closures

I have a function of many variables, hermite_interpolation(t, t_array, f_array, df_array), and I want to pass it as the argument of another function, which I shall call do_something(pulse). However, the argument pulse of do_something must be a function of the time t only. At first I simply defined the function

pulse = t -> hermite_interpolation(t, t_array, filtered_f, filtered_df)

and feeding it to do_something. Afterwards I learned that it was actually better to define

pulse2 = let t_array = t_array,
             f_array = filtered_f,
             df_array = filtered_df
    t -> hermite_interpolation(t, t_array, f_array, df_array)
end

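For reference, here is a minimal self-contained sketch of this pattern. Since the definition of hermite_interpolation isn't shown, a simple linear interpolation stands in for it; the closure construction is the same:

```julia
# Stand-in for hermite_interpolation: plain linear interpolation,
# for illustration only.
function interp(t, t_array, f_array)
    i = searchsortedlast(t_array, t)
    i = clamp(i, 1, length(t_array) - 1)
    w = (t - t_array[i]) / (t_array[i+1] - t_array[i])
    return (1 - w) * f_array[i] + w * f_array[i+1]
end

t_array = collect(0.0:0.1:1.0)
f_array = sin.(t_array)

# Capture the data in a `let` block so the closure's captured
# variables are local (and therefore well-typed), not globals.
pulse2 = let t_array = t_array, f_array = f_array
    t -> interp(t, t_array, f_array)
end

pulse2(0.25)  # ≈ sin(0.25) for a fine enough grid
```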
I benchmarked both functions, and I obtained

julia> @benchmark pulse($tf)
  memory estimate:  48 bytes
  allocs estimate:  3
  minimum time:     80.339 ns (0.00% GC)
  median time:      88.953 ns (0.00% GC)
  mean time:        101.802 ns (6.52% GC)
  maximum time:     17.307 μs (99.37% GC)
  samples:          10000
  evals/sample:     949
julia> @benchmark pulse2($tf)
  memory estimate:  32 bytes
  allocs estimate:  2
  minimum time:     57.724 ns (0.00% GC)
  median time:      64.536 ns (0.00% GC)
  mean time:        73.016 ns (5.11% GC)
  maximum time:     14.597 μs (99.49% GC)
  samples:          10000
  evals/sample:     973

I also benchmarked hermite_interpolation(t, t_array, f_array, df_array) and I got

julia> @benchmark hermite_interpolation($tf, $t_array, $filtered_f, $filtered_df)
  memory estimate:  0 bytes
  allocs estimate:  0
  minimum time:     29.318 ns (0.00% GC)
  median time:      31.083 ns (0.00% GC)
  mean time:        34.351 ns (0.00% GC)
  maximum time:     666.528 ns (0.00% GC)
  samples:          10000
  evals/sample:     995

Here we see that the original function hermite_interpolation(tf, t_array, filtered_f, filtered_df) is still much faster than the closures pulse and pulse2.

My question is: is there a way to define faster closures? I would appreciate any help.

It will be easier to help if you can provide a working self-contained example, otherwise we’re kind of just guessing.

However, the first thing I notice in your case is that pulse and pulse2 are themselves non-constant global variables. That means you need to interpolate them into the benchmark as well, e.g.:

@benchmark ($pulse)($tf)
@benchmark ($pulse2)($tf)

I guess you solved the problem. I was not benchmarking the functions correctly. So let us suppose that I define the functions

f1(x) = x + 1
f2 = x -> x + 1

I thought that both definitions were equivalent, but apparently this is not the case. Could someone elaborate a bit on that?

The syntax

f1(x) = x + 1

implicitly makes f1 const. Try doing:

julia> f1(x) = x + 1
f1 (generic function with 1 method)

julia> f1 = "hello"
ERROR: invalid redefinition of constant f1

You can do const f2 = x -> x + 1 to get the same performance at global scope.

Note that this is only relevant for global variables: the slowness comes when you have a variable which is non-const and global. In a local scope you don’t need to worry about marking const (in fact, you can’t).
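A quick illustration of the three flavors (the names here are just for demonstration):

```julia
h1(x) = x + 1          # `h1` is implicitly const: calls can be fully compiled
h2 = x -> x + 1        # `h2` is a non-const global: every call must look up
                       # its current value and type at run time
const h3 = x -> x + 1  # a const binding restores type stability, so `h3`
                       # performs like `h1` when called from global scope

h1(1) == h2(1) == h3(1)  # all three return 2; only the call overhead differs
```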


Great. Thank you for the detailed explanation!