Will Julia (under the hood) in the future fuse the loops of calls such as
map(x -> x + 1, map(x -> x * 2, [1, 2, 3]))
into a single loop, requiring only a single temporary element regardless of the length of the input array? Similar to what D’s lazy ranges (std.algorithm.iteration.map) and Rust’s iterators do?
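For reference, the closest thing I have found today is dot-broadcast fusion (which is guaranteed to fuse syntactically) and lazy generators; a minimal sketch of what I mean, assuming the goal is a single pass with no intermediate array:
v = [1, 2, 3]

# Dot-broadcast fusion: nested dotted calls fuse into one loop, so only the
# final output array is allocated and no intermediate exists for v .* 2.
fused = v .* 2 .+ 1

# Lazy alternative: a generator evaluates element by element on demand,
# so collect allocates only the final result.
lazy = collect(x * 2 + 1 for x in v)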
Further, it would be cool if it could optimize away any temporary allocations at all in cases such as
reduce(x -> x + 1, map(x -> x * 2, [1, 2, 3]))
without the user having to be aware of it, as they have to be in D, Rust, and similar languages where laziness is explicit to the programmer.
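In the meantime, the explicit way to avoid the temporary array seems to be mapreduce, or a generator fed to sum; a minimal sketch, using + as the reduction since the reducing operator takes two arguments:
v = [1, 2, 3]

# Explicitly fused map + reduce: the mapped array is never materialized.
mapreduce(x -> x * 2, +, v)   # => 12

# Equivalent formulation with a lazy generator feeding sum.
sum(x * 2 for x in v)         # => 12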
Ahh, I see now that Julia already has this optimization, because the symbol _mapreduce occurs in the error message:
ERROR: MethodError: no method matching (::getfield(Main, Symbol("##41#43")))(::Int64, ::Int64)
Closest candidates are:
#41(::Any) at REPL[10]:1
Stacktrace:
[1] _mapreduce(::typeof(identity), ::getfield(Main, Symbol("##41#43")), ::IndexLinear, ::Array{Int64,1}) at ./reduce.jl:311
[2] _mapreduce_dim(::Function, ::Function, ::NamedTuple{(),Tuple{}}, ::Array{Int64,1}, ::Colon) at ./reducedim.jl:305
[3] #mapreduce#538 at ./reducedim.jl:301 [inlined]
[4] mapreduce at ./reducedim.jl:301 [inlined]
[5] #reduce#539 at ./reducedim.jl:345 [inlined]
[6] reduce(::Function, ::Array{Int64,1}) at ./reducedim.jl:345
[7] top-level scope at none:0
Cool!
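(For the record, the MethodError above is just because my anonymous function takes one argument while reduce calls its operator with two; the _mapreduce frame shows up because, as far as I can tell from base/reduce.jl, reduce(op, itr) simply forwards to mapreduce(identity, op, itr). With a two-argument operator the call goes through, though the intermediate array from map is still allocated here:)
reduce(+, map(x -> x * 2, [1, 2, 3]))   # => 12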
What kinds of mappings (computation-graph node merges), other than
map-reduce into mapreduce,
is the optimizer capable of performing?