Are calls to no-op functions passed as arguments always compiled away?


I really like the current trend in Base of functions that take a function as their first argument, and have been adopting it for my own code. My question is: If I define the no-op function

f(x) = x 

can I happily pass this into other functions of the form:

function f1(f::Function, x1, x2, ...)
    ... # some stuff
    z1 = f(x)
    ... # some other stuff
end
and get zero efficiency loss versus the case where I don’t include a function as an argument?

My understanding is that in the above pseudo-code, z1 = f(x) is compiled down to z1 = x, so this is indeed the case, but I wanted to double-check before adopting this style wholesale. (Yes, I did try looking at @code_lowered-style output, but managed to quickly confuse myself, and also wasn’t certain whether what I was seeing would hold up in more complex cases.)

Bonus question: does Julia have a standard no-op function for use in cases like this, e.g. if I wanted to define the shortcut method:

    f1(x1, x2, ...) = f1(Base.f_noop, x1, x2, ...)

Thanks to all responders,



does julia have a standard no-op function for use in cases like this

Base.identity might be what you’re looking for.
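To make the bonus question concrete: the shortcut-method pattern from the question works exactly as sketched, with `identity` in place of the hypothetical `Base.f_noop`. A minimal sketch (this `f1` is a made-up example, not anything from Base):

```julia
# Hypothetical example: f1 applies a user-supplied transform f to x
# before doing its "real" work (here, just adding 1)
f1(f::Function, x) = f(x) + 1

# Shortcut method: default to the no-op transform `identity`
f1(x) = f1(identity, x)
```

With these definitions, `f1(5)` and `f1(abs, -5)` both give `6`, and the no-transform call costs nothing extra once the compiler specializes on `identity`.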

They certainly can be, provided things get properly inlined and the compiler infers that it can remove the call:

julia> f(g, x, y) = g(x) + g(y)
f (generic function with 1 method)

julia> @code_typed f(identity, 1, 1)
        return (Base.add_int)(x, y)::Int64

julia> @code_typed f(sin, 1, 1)
        $(Expr(:inbounds, false))
        # meta: location math.jl sin 421
        SSAValue(3) = (Base.sitofp)(Float64, x)::Float64
        # meta: location math.jl sin 419
        SSAValue(5) = $(Expr(:foreigncall, ("sin", "libopenlibm"), Float64, svec(Float64), SSAValue(3), 0))
        # meta: location math.jl nan_dom_err 300
        unless (Base.and_int)((Base.ne_float)(SSAValue(5), SSAValue(5))::Bool, (Base.not_int)((Base.ne_float)(SSAValue(3), SSAValue(3))::Bool)::Bool)::Bool goto 10
        #temp#@_6 = (Base.Math.throw)($(QuoteNode(DomainError())))::Union{}
        goto 12
        #temp#@_6 = SSAValue(5)
        # meta: pop location
        # meta: pop location
        # meta: pop location
        $(Expr(:inbounds, :pop))
        $(Expr(:inbounds, false))
        # meta: location math.jl sin 421
        SSAValue(0) = (Base.sitofp)(Float64, y)::Float64
        # meta: location math.jl sin 419
        SSAValue(2) = $(Expr(:foreigncall, ("sin", "libopenlibm"), Float64, svec(Float64), SSAValue(0), 0))
        # meta: location math.jl nan_dom_err 300
        unless (Base.and_int)((Base.ne_float)(SSAValue(2), SSAValue(2))::Bool, (Base.not_int)((Base.ne_float)(SSAValue(0), SSAValue(0))::Bool)::Bool)::Bool goto 26
        #temp#@_5 = (Base.Math.throw)($(QuoteNode(DomainError())))::Union{}
        goto 28
        #temp#@_5 = SSAValue(2)
        # meta: pop location
        # meta: pop location
        # meta: pop location
        $(Expr(:inbounds, :pop))
        return (Base.add_float)(#temp#@_6, #temp#@_5)::Float64


Ah, identity, I should have thought to try that.

Many thanks, I’ll mess around with @code_typed a bit. The example you showed is very clear.




As a word of warning, there is still something broken with singletons on 0.7 master. See

using BenchmarkTools
f(x) = x  # the no-op function defined earlier in the thread

@code_native f(nothing)
; Function f {
; Location: REPL[2]:1
	movq	%rsi, -8(%rsp)
	movabsq	$139684048367624, %rax  # imm = 0x7F0ABA157008

x=fill(nothing, 1000); y=copy(x); sizeof(x)

@btime $y .= f.($x);
30.457 μs (489 allocations: 7.64 KiB)

@btime $y .= identity.($x);
15.317 ns (0 allocations: 0 bytes)

If you have no missing data, or don’t care about speed for the next couple of weeks until this gets fixed (not by me, I don’t dare touch codegen), then you have no problem; but this issue makes some benchmarks misleading in the meantime.

edit: Fixed now. Merci to @kristoffer.carlsson and vtjnash!


Good to know, thank you. Fortunately, I don’t need to worry about missing for now.




Functions all have their own types. Just like how Julia specializes on 1 + 1 and 1.0 + 1.0 to call integer and floating point addition, respectively, when you call f1(identity, x, y, z), it’ll specialize on the types of those arguments and do all sorts of optimizations since it knows what all the types are. You can even dispatch on specific function types:

julia> f(::typeof(identity)) = 1
       f(::typeof(sin)) = 2
f (generic function with 2 methods)

julia> f(identity)
1

julia> f(sin)
2

julia> f(cos)
ERROR: MethodError: no method matching f(::typeof(cos))
Closest candidates are:
  f(::typeof(sin)) at REPL[1]:2
  f(::typeof(identity)) at REPL[1]:1


That’s a neat trick! I had no idea you could do that. I’m guessing this is a fairly new feature? I seem to remember a year or two ago that typeof(sum) would evaluate to Function, so that typeof(sum) == typeof(identity) would evaluate to true. (I just verified for myself that it now evaluates to false.)

So every defined function can now implicitly be thought of as its own type? Or perhaps a better analogy would be its own parametric type… something like Function{T} where {T<:Union{sum, identity, ...}}, so that we can still write things like f1(f::Function, x) and have it work for any function f?

I just played around a bit and realised the same holds for anonymous functions too, and you can dispatch on them, with the caveat that f1 = (x -> identity(x)) and f2 = (x -> identity(x)) are different function types. Does this mean that anonymous functions are all created and stored in global scope, such that you couldn’t write a loop that creates anonymous functions indefinitely, since they would never be garbage collected?

Sorry, I just realised that is a lot of questions. I should probably re-read the manual at some point. I think the last version I read was v0.3…




The manual is definitely your friend here, but this has changed a lot in the last couple of years :smile:

Before Julia v0.5, Function was the concrete type of every function (as you remember correctly), and anonymous functions were inherently slower than ordinary functions. As of Julia v0.5, Function is now an abstract type, and each named and anonymous function is a separate concrete type which is <: Function. You can certainly still write f1(f::Function, x) and any named or anonymous function will work for f, but now you can also dispatch on the type of a particular function (if you want). This change is also what made anonymous functions just as fast as regular functions, which in turn enabled fast broadcast fusion and lots of other fun features. Here’s the most relevant PR:
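A quick way to check those type relationships for yourself on a current Julia (the printed type names vary across versions, but the relationships hold):

```julia
# Each function gets its own concrete type, and every one of those
# types is a subtype of the abstract type Function
@assert typeof(identity) <: Function
@assert typeof(sin) <: Function

# Distinct functions have distinct types, so this comparison, which
# was true before v0.5, is now false
@assert typeof(sum) != typeof(identity)
```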

As of v0.5 and above, an anonymous function creates a new, callable type. Closures are just callable types with fields containing their closed-over values. You can actually see this:

# We have to put this in a function in order to create a real 
# closure instead of just a function that references some global
# variable named `i`
julia> function closure_demo()
         i = 1
         f = x -> x + i
         return f
       end
closure_demo (generic function with 1 method)

julia> f = closure_demo()
(::#11) (generic function with 1 method)

julia> f.i
1

julia> f(2)
3

As far as I know, the compiled code from a function is indeed never garbage collected. However, that code is only generated once per anonymous function definition. So, for example, we can do:

julia> fs = [x -> x + i for i in 1:10]
10-element Array{##18#20{Int64},1}:

julia> fs[1].i
1

julia> fs[2].i
2

Each element of fs is just a lightweight instance of the same type (#18) with a different captured value of i. They all share the same compiled code.
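You can verify that sharing directly; this sketch uses current Julia, where the auto-generated type names look different from the 0.6 output above but the behaviour is the same:

```julia
# All closures from a single definition site share one concrete type...
fs = [x -> x + i for i in 1:10]
@assert typeof(fs[1]) == typeof(fs[10])

# ...while each instance carries its own captured value of i
@assert fs[1].i == 1
@assert fs[10].i == 10
@assert fs[3](100) == 103
```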

On the other hand, if you were to create lots of new functions in a loop (you’d have to do something like call eval() in your loop to do this), then you would indeed run into trouble because the compiled code for those functions would not be garbage collected.


I knew that anonymous functions had been made fast, but I had no understanding of how they did it. That is a fantastic write-up of it, thank you very much. If you’re on StackOverflow, I’d be happy to post this as a question, and you can cut-and-paste your answer from here, since I think it is a great resource.