Should closures be avoided?

There seem to be several performance-related issues with closures (boxed variables), and it can be tricky to understand when they will arise. Should closures be avoided as a rule? What is an alternative design pattern? Closures are very convenient and readable in general.

Please read the performance tips section of the manual, which discusses this. You can recover full performance by capturing variables manually in a let block.
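For reference, the pattern from the performance tips looks roughly like this (`abmult_boxed` and `abmult_let` are illustrative names):

```julia
function abmult_boxed(r::Int)
    if r < 0
        r = -r          # r is assigned more than once, so the closure below boxes it
    end
    f = x -> x * r      # captures a Core.Box, not an Int
    return f
end

function abmult_let(r::Int)
    if r < 0
        r = -r
    end
    f = let r = r       # rebind r; the let-bound r is never reassigned, so no box
        x -> x * r      # captures a concretely typed Int
    end
    return f
end
```

Both versions compute the same thing; the difference is only in the type of the closure's captured field.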


If the code is not a performance bottleneck, especially in one-off scripts, I’ll use closures and indulge in type instabilities. That’s the benefit of using a dynamically typed language. Not all code needs to be hyper-optimized.


Just to add to the answers: another option is to define a callable struct as shown here and create an instance of that struct instead of the anonymous function. This essentially makes explicit what the closure is.
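A minimal sketch of the callable-struct pattern (`Scaler` is a hypothetical name), standing in for the closure `x -> x * scale`:

```julia
struct Scaler{T}
    scale::T            # the "captured" variable, now an explicit, concretely typed field
end

# Make instances callable, just as the anonymous function would be
(s::Scaler)(x) = x * s.scale

s = Scaler(3.0)
s(2.0)                  # behaves like (x -> x * 3.0)(2.0)
```

Because the field types are explicit, there is nothing for the compiler to box.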


Isn’t it sufficient to check with @code_warntype whether any variables are boxed, and eliminate them if necessary?
Or are there other reasons why closures might be slower?
The way I see it, the pitfall is not as bad as it sounds.
Write your code in the obvious way, then check whether it is performant enough. If not, eliminate all type instabilities.
Unless I am missing something, correct usage of closures doesn’t require you to always be aware of exactly where difficulties can arise.
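Besides @code_warntype, one mechanical check (a sketch; `make_counter` is an illustrative name) is to look for Core.Box among the closure's fields:

```julia
function make_counter()
    n = 0
    inc() = (n += 1)    # n is reassigned inside the closure, so it gets boxed
    return inc
end

c = make_counter()
fieldtypes(typeof(c))   # contains Core.Box, the telltale sign @code_warntype flags in red
```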

I write a lot of optimization code that requires different fitting, loss, and weighting functions that plug into a robust optimizer. These functions have to be fast, but they also need to be convenient, since we are often testing new ideas. So my use case is a “worst-case” scenario for the closure boxing issue. I have read elsewhere that other people writing optimization code were also encountering this issue. I was writing callable structs as @Norman suggested, but that creates a lot of boilerplate code. I suppose judicious use of FastClosures might help, but I don’t understand the implications of that macro. @code_warntype checking is not a panacea either, although it is suggested that newer versions of Cthulhu might be easier to use.

Indeed, closures cause bad performance far more often than I’d like—especially because of how nice the syntax for comprehensions and generators is, and how common the pattern of assigning variables inside conditionals is. IMO the objective should be to improve the boxing situation, but I can’t blame anyone for following a heuristic of avoiding closures until that happens.

There was a similar thread a few days ago; do you find this helpful?


I would say at this point I’m focused on mathematical correctness but I have noticed the type instabilities proliferating with the use of closures and when I put functions into structure members (even when parameterized). I don’t really understand why, and I don’t really have a big time budget to dig into the more subtle issues, unfortunately. I guess there are two packages meant to deal with this: FastClosures and FunctionWrappers. I will need to fix it eventually to achieve state-of-the-art runtime. I’m worried that despite using fairly idiomatic code that I will have to tear it apart to get the speed needed.

I am competing against hand-tuned C++ frameworks. While we will have an algorithmic win, it won’t matter if I can’t close the performance gap in wall-clock time. (We are trying to use Julia to publish.) I’ve done static (compile-time) polymorphism in C++ (expression templates, Eigen, etc.), and there you know when you write your code that it will be fast. I would say that if you stick to the design patterns, it is also elegant. The drawback with C++ is that you lose a reasonable REPL. But I wonder what the time tradeoff will be between chasing down type instabilities in Julia and slower development in C++.

Having functions encoded as part of the type is not really idiomatic code. Maybe an MWE of what you are doing and your performance pitfalls would be helpful?

Once you know what to avoid, closures are usually fast. I use them in high-performance inner loops pretty much everywhere. But there are a few other gotchas with closures. For example, if you pass a Type into a closure you lose type stability, since it is stored as a DataType field in the anonymous struct.
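To illustrate the Type-capture gotcha (a sketch; `make_conv` is a hypothetical name), compare an untyped argument with a `::Type{T}` static parameter:

```julia
# T is captured as a DataType field: inference cannot see which type it is
make_conv(T) = x -> convert(T, x)

# T is a static parameter, so it becomes part of the closure's type
make_conv_stable(::Type{T}) where {T} = x -> convert(T, x)

bad  = make_conv(Float64)
good = make_conv_stable(Float64)

Base.return_types(bad,  (Int,))   # wide (typically Any): T is only known to be a DataType
Base.return_types(good, (Int,))   # [Float64]
```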

If you share some code for your unstable closures you may get better specific feedback.

Yes, I think I will release the framework I’m working on to get some feedback.

You will get more feedback on small MWEs posted here than on a whole framework.


If you use the let block trick, then accessing the closure’s capture will be type-stable at least.

Out of curiosity, what’s your purpose for using FunctionWrappers?

```julia
struct Irls{D <: AbstractVector, F, R, S, W}
    data::D
    residuals::Vector{Float64}
    weights::Vector{Float64}
    fit::F
    residual::R
    scale::S
    weight::W
end

Irls(data, fit, residual, scale, weight) =
    Irls(data, similar(data, Float64), similar(data, Float64),
         fit, residual, scale, weight)

function (irls::Irls)(; kwargs...)
    num_data = length(
    resize!.((irls.residuals, irls.weights), num_data)
    irls_fit() =
    irls_fit(weights) =, weights)
    residuals = (location) -> irls.residuals .= irls.residual.(, Ref(location))
    weights = (residuals, scale) -> irls.weights .= irls.weight.(residuals, scale)
    return irls(irls_fit, residuals, weights, irls.scale; kwargs...)
end
```

Above is a struct that defines an iteratively reweighted least squares (IRLS) problem. The fitting, residual, weighting, and scale-estimation functions all define the IRLS behavior, and the struct holds these functions to be called in the IRLS loop. I define the problem by initializing the struct and then pass it on to the code that needs to use it.

I suspect that storing functions in a struct like this is causing dynamic dispatch or type instability in my code. I was going to try FunctionWrappers to resolve it. At this point I’m not sure what is causing the type instability.

Hm, I don’t think so—the struct is being type-parameterized so this code appears to be type-stable afaict.

That said, if you’re storing such structs in an array, and if different structs are holding different fitting functions (by which I mean functions with different function bodies, not just closures with different captured values) then accessing the elements of that array will be type-unstable because it won’t have a concrete element type. To illustrate:

```julia
julia> z = [Irls([1,], x->x, [1,], 1, [1,]), Irls([1,], x->x, [1,], 1, [1,])]
2-element Vector{Irls{Vector{Int64}, F, Vector{Int64}, Int64, Vector{Int64}} where F}:
 Irls{Vector{Int64}, var"#56#58", Vector{Int64}, Int64, Vector{Int64}}([1], [6.13256783589e-312], [7.15252505199e-312], var"#56#58"(), [1], 1, [1])
 Irls{Vector{Int64}, var"#57#59", Vector{Int64}, Int64, Vector{Int64}}([1], [7.165628674093e-312], [6.11134787798e-312], var"#57#59"(), [1], 1, [1])

julia> eltype(z)
Irls{Vector{Int64}, F, Vector{Int64}, Int64, Vector{Int64}} where F

julia> isconcretetype(eltype(z))
false
```

(notice that the function types are different—var"#56#58" vs var"#57#59"; although the functions do the same thing, they’ve been declared separately.)

If indeed it’s an array access that’s type-unstable, then I think FunctionWrappers.jl should help, though I haven’t used it. Other options include manually setting the array’s element type as a narrow Union of types, or passing the array element through a function barrier to infer its type before doing repetitive operations on it.
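The function-barrier option can be sketched like this (`solve_one` and `solve_all` are hypothetical names, standing in for your inner loop):

```julia
# The hot inner work: compiled once per concrete closure type, behind the barrier
solve_one(f, n) = sum(f(i) for i in 1:n)

function solve_all(problems::Vector, n)
    total = 0.0
    for f in problems
        # One dynamic dispatch per element here, instead of one per inner iteration
        total += solve_one(f, n)
    end
    return total
end

problems = Any[x -> 2x, x -> x + 1]
solve_all(problems, 3)    # (2+4+6) + (2+3+4) = 21.0
```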

Note that this is just speculation until you can provide a MWE showing the type instability.

This is also a pretty extreme use of closures passed to closures calling functions from struct fields. You might see the compiler giving up and not tracking types through all of that.

Cthulhu.@descend may help you understand what’s happening.
