Anonymous functions optimize just fine
julia> function makefun()
           f1 = x -> x + 2
           f2 = x -> f1(f1(x))
           return f2
       end
makefun (generic function with 1 method)
julia> aa = makefun();
julia> @code_typed aa(3)
CodeInfo(
│╻╷ #74 1 ─ %1 = (Base.add_int)(x, 2)::Int64
││╻ + │ %2 = (Base.add_int)(%1, 2)::Int64
│ └── return %2
) => Int64
julia> bb(x) = x+2+2;
julia> @code_typed bb(3)
CodeInfo(
│╻╷ +1 1 ─ %1 = (Base.add_int)(x, 2)::Int64
││┃ + │ %2 = (Base.add_int)(%1, 2)::Int64
│ └── return %2
) => Int64
Note that aa and bb compile down to exactly the same code.
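The same holds for deeper chains. As a quick sketch (the chain name and the use of reduce with ∘ are illustrative, not from the original post):

# Compose ten copies of an anonymous function with Base's ∘ operator.
chain = reduce(∘, [x -> x + 2 for _ in 1:10])

# Inspecting the typed code should show a flat run of add_int calls,
# i.e. the whole chain inlined; very deep chains may eventually hit
# the compiler's inlining heuristics.
@code_typed chain(3)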
Is it feasible to use Flux.jl (or some other existing automatic differentiation package) on huge chains of anonymous functions like this?
Yes.
(Well, Flux itself isn't an AD package, but Flux invokes Tracker.jl (or, one day, Zygote.jl), and neither has any issue with this.)
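A minimal sketch of that claim, assuming Zygote.jl is installed (Tracker.gradient has the same calling convention):

using Zygote

# Five anonymous functions composed into one; equivalent to x -> 120x.
chain = reduce(∘, [x -> i * x for i in 1:5])

# Differentiate straight through the chain of closures.
Zygote.gradient(chain, 3.0)  # expect (120.0,)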