Passing arguments explicitly to a closure or not

Hi there,

I am having trouble understanding which of the following two scenarios is correct, or whether they are equivalent and have the same performance.

function input()

  # define variables and functions inside the
  # `input` function scope.
  κ(t) = t
  θ(t) = sin(t)
  σ(t) = cos(t)
  α = 1.000
  β = 0.000

  # define a function using variables and 
  # functions defined in the outer scope
  fμ = (du, u, t) -> begin
    r = u[1]
    du[1] = κ(t) * (θ(t) - r) + σ(t) * sqrt(α + β * r)  # α, β are plain numbers
  end

  # use the previous function to solve a problem, which 
  # requires evaluating the fμ function millions of times
  solve(fμ)
end

Or should I explicitly pass the parameters to the function:

function input()

  # define variables and functions inside the 
  # `input` function scope.
  κ(t) = t
  θ(t) = sin(t)
  σ(t) = cos(t)
  α = 1.000
  β = 0.000

  # define a function WITHOUT using variables and 
  # functions in the outer scope
  fμ = (du, u, p, t) -> begin
    r = u[1]
    α = p.α
    β = p.β
    κ = p.κ
    σ = p.σ
    θ = p.θ
    du[1] = κ(t) * (θ(t) - r) + σ(t) * sqrt(α + β * r)  # α, β are plain numbers
  end

  # build the 'parameters' object:
  p = (κ = κ, θ = θ, σ = σ, α = α, β = β)

  # use the previous function to solve a problem, which requires
  # evaluating the function fμ millions of times but, in this case,
  # the parameters object `p` is provided and is used for the
  # evaluation of fμ inside the solve function.
  solve(fμ, p)
end

I think that, in the first case, I am defining a closure. Please let me know if that is correct. I would also like to know whether the performance of that closure is the same as the performance of the function in the second case.

Probably the answer is around here. I suspect they have the same performance.

Thank you very much!


Honestly, I don’t know which one is better, but I can offer a suggestion on how to find the answer: benchmark both functions, for instance using @btime input() from the BenchmarkTools package, to determine which version is faster.
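Something along these lines (input_closure and input_explicit are placeholder names for your two versions):

using BenchmarkTools

# Placeholder names: rename these to whatever your two versions are called.
@btime input_closure()    # version where fμ captures κ, θ, σ, α, β
@btime input_explicit()   # version where fμ reads everything from p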


Hi, thank you for your suggestion, but I am after an explanation of the situation, not just a benchmark. Otherwise I would always wonder whether I had only tested one particular situation.

Again, thanks for taking the time!

Seems like micro-optimization to me. It has its place, but the problem with answering questions like these is that the answer may change as the Julia compiler improves. Closures in Julia are generally fast unless you hit the infamous closure type-inference bug (performance of captured variables in closures · Issue #15276 · JuliaLang/julia · GitHub). Just make sure your code is type stable using @code_warntype and you should be fine imo :slight_smile: But if you want to go down the micro-optimization rabbit hole, then good luck; @btime, @code_typed and @code_llvm are your friends there!
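For instance, here is a minimal sketch (unrelated to your SDE code) of what the boxing issue looks like and how @code_warntype exposes it:

# Minimal sketch of the captured-variable boxing issue (#15276):
# reassigning a captured variable forces it into a Core.Box.
function boxed_example()
    x = 1.0
    f = () -> x        # x is captured by the closure...
    x = 2.0            # ...and reassigned afterwards, so x gets boxed
    return f()
end

function unboxed_example()
    x = 1.0
    f = () -> x        # captured but never reassigned: no box, fully inferred
    return f()
end

# @code_warntype boxed_example()    # shows x::Core.Box and an ::Any return
# @code_warntype unboxed_example()  # everything inferred as Float64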


First of all, sorry for the delay. I could not reply earlier because I had ‘reached the maximum number of replies a new user can create on their first day.’

Regarding your comments: for me, this is good news :slight_smile:

The following two examples show that both approaches perform virtually the same. I used the DifferentialEquations.jl package to solve a system of stochastic differential equations over many Monte Carlo trials (trajectories), which involves evaluating the closures or the functions (depending on the example) many times.

using DifferentialEquations
function input1()

        r₀ = 0.010
        X₀ = 1.000
        σₓ = 0.020
        a = 1.0
        b = 1.0

        # these are all closures
        κ(t) = ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 0.4363
        σ(t) = ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 0.1491
        θ(t) = ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 0.0613
        α(t) = ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 1.0
        β(t) = ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 0.0
        μX(x, y) = y * x
        σX(x) = σₓ * x
        μr(x, t) = κ(t) * (θ(t) - x)
        σr(x, t) = σ(t) * sqrt(α(t) + β(t) * x)

        # these are also closures
        f = (du, u, p, t) -> begin
                X = u[1]
                r = u[2]
                du[1] = μX(X, r)
                du[2] = μr(r, t)
        end
        g = (du, u, p, t) -> begin
                X = u[1]
                r = u[2]
                du[1] = σX(X)
                du[2] = σr(r, t)
        end

        trials = Int(1e5)
        u0 = [X₀, r₀]
        tspan = (0.0, 1.0)
        p = nothing
        sde = SDEProblem(f, g, u0, tspan, p)
        SDE = EnsembleProblem(sde)
        @time sol = solve(SDE, SRIW1(), trajectories = 1, seed = 1)
        @time sol = solve(SDE, SRIW1(), trajectories = trials, seed = 1)
        return sol
end
@time input1()
#24.613213 seconds (69.05 M allocations: 14.776 GiB, 7.81% gc time)
# 5.648566 seconds (53.15 M allocations: 2.912 GiB, 24.83% gc time)
#31.315092 seconds (123.88 M allocations: 17.780 GiB, 10.77% gc time)

compared to:

using DifferentialEquations
function input2()

        r₀ = 0.010
        X₀ = 1.000
        σₓ = 0.020
        a = 1.0
        b = 1.0

        # these are all functions
        κ(t, p) = ((((p.a * t^2 + p.b) - p.b) + 1.0) / (t^2 + 1.0)) * 0.4363
        σ(t, p) = ((((p.a * t^2 + p.b) - p.b) + 1.0) / (t^2 + 1.0)) * 0.1491
        θ(t, p) = ((((p.a * t^2 + p.b) - p.b) + 1.0) / (t^2 + 1.0)) * 0.0613
        α(t, p) = ((((p.a * t^2 + p.b) - p.b) + 1.0) / (t^2 + 1.0)) * 1.0
        β(t, p) = ((((p.a * t^2 + p.b) - p.b) + 1.0) / (t^2 + 1.0)) * 0.0
        μX(x, y, p) = y * x
        σX(x, p) = p.σₓ * x
        μr(x, t, p) = p.κ(t, p) * (p.θ(t, p) - x)
        σr(x, t, p) = p.σ(t, p) * sqrt(p.α(t, p) + p.β(t, p) * x)

        # these are also functions
        f = (du, u, p, t) -> begin
                X = u[1]
                r = u[2]
                du[1] = p.μX(X, r, p)
                du[2] = p.μr(r, t, p)
        end
        g = (du, u, p, t) -> begin
                X = u[1]
                r = u[2]
                du[1] = p.σX(X, p)
                du[2] = p.σr(r, t, p)
        end

        trials = Int(1e5)
        u0 = [X₀, r₀]
        tspan = (0.0, 1.0)
        p = (
             a = a,
             b = b,
             r₀ = r₀,
             X₀ = X₀,
             σₓ = σₓ,
             α = α,
             σ = σ,
             σX = σX,
             μX = μX,
             σr = σr,
             μr = μr,
             κ = κ,
             β = β,
             θ = θ,
        )
        sde = SDEProblem(f, g, u0, tspan, p)
        SDE = EnsembleProblem(sde)
        @time sol = solve(SDE, SRIW1(), trajectories = 1, seed = 1)
        @time sol = solve(SDE, SRIW1(), trajectories = trials, seed = 1)
        return sol
end
@time input2()
#25.487122 seconds (68.86 M allocations: 14.767 GiB, 7.64% gc time)
# 5.888647 seconds (53.50 M allocations: 2.625 GiB, 23.99% gc time)
#32.384362 seconds (124.05 M allocations: 17.485 GiB, 10.52% gc time)

where I have explicitly passed the parameters object p to every function (which is really tedious).

So, to sum up: when you define a closure, as the documentation says, Julia creates a callable object whose fields correspond to the captured variables, and that is why it is fast.
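For example, with a toy closure (unrelated to the SDE code above), the captured variable shows up as a field of the closure’s type, and the hand-written equivalent is an ordinary callable struct:

# Toy example, not part of the SDE code above.
make_adder(a) = x -> x + a

adder = make_adder(2.0)
fieldnames(typeof(adder))   # (:a,): the captured variable is stored as a field
adder(3.0)                  # 5.0

# Roughly what the compiler generates behind the scenes:
struct Adder{T}
    a::T
end
(f::Adder)(x) = x + f.a

Adder(2.0)(3.0)             # 5.0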

Lastly, when defining closures, one must pay attention to the captured variables. Their performance is described in the documentation, here.

In my example above, the first case described in the documentation does not apply; as for the second case, if any of the captured variables did need to be boxed, I could wrap the closures in let blocks, as sketched below.
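A sketch of that let-block pattern, applied to one of the closures above (only needed if a captured variable were actually boxed):

# Hypothetical sketch: `let` re-binds a and b to fresh local variables that
# are never reassigned, so the closure captures them without boxing.
function input_let(a, b)
    κ = let a = a, b = b
        t -> ((((a * t^2 + b) - b) + 1.0) / (t^2 + 1.0)) * 0.4363
    end
    return κ(0.5)
end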

So now I am starting to see that the Julia documentation is complete :smile:
