When I pass a function around, it causes huge memory allocations, as the following allocation-tracking output shows:
- function f1(x::Float64)
16000000 x = x+3.
- end
-
- mutable struct MyType
- x::Float64
- func::Function
-
- function MyType(x::Float64)
2208 this = new()
- this.x = x
- this
- end
- end
-
- function main()
- t1 = MyType(3.)
0 t1.func = f1
-
0 for i=1:1000000
48000000 t1.x = t1.func(i*0.1)
- end
- end
But if I call it directly, there seems to be no problem:
- function main1()
- t1 = MyType(3.)
-
2176 for i=1:1000000
0 t1.x = f1(i*0.1)
- end
- end
Can anyone please help explain? How can I better pass functions around?
function f1(x::Float64)
x = x+3.
end
mutable struct MyType{F<:Function}
x::Float64
func::F
funcs::Array{Function,1}
sf::Symbol
function MyType{F}(x::Float64) where {F<:Function}
this = new()
this.x = x
this.funcs = Array{Function,1}(undef,1)
this
end
end
function main()
t1 = MyType{Function}(3.)
t1.func = f1
for i=1:1000000
t1.x = t1.func(i*0.1)
end
end
function main1()
t1 = MyType{Function}(3.)
for i=1:1000000
t1.x = f1(i*0.1)
end
end
function main2()
t1 = MyType{Function}(3.)
f = f1
for i=1:1000000
t1.x = f(i*0.1)
end
end
function main3()
t1 = MyType{Function}(3.)
t1.sf = :f1
f = getfield(Main, t1.sf)
for i=1:1000000
t1.x = f(i*0.1)
end
end
function main4()
t1 = MyType{Function}(3.)
aDict = Dict{String, Function}()
aDict["f1"] = f1
f = get(aDict, "f1", 0)
for i=1:1000000
t1.x = f(i*0.1)
end
end
function main5()
t1 = MyType{Function}(3.)
f = include_string(Main, "f1")
for i=1:1000000
t1.x = f(i*0.1)
end
end
function main6(ff::Function)
t1 = MyType{Function}(3.)
for i=1:1000000
t1.x = ff(i*0.1)
end
end
function main7()
t1 = MyType{Function}(3.)
t1.funcs[1] = f1
for i=1:1000000
t1.x = t1.funcs[1](i*0.1)
end
end
I need to call one function out of a list of functions. Some other routine determines which function to call, so it needs to be passed from there.
The functions are defined in a couple of .jl files. They are quite simple and share the same signature. A routine selects one or a few of them based on attributes that can be preset or set by user input, and another routine then calls the selected ones. So in some way, I need to get the functions passed from one routine to the other.
Your explanation (not the original post) doesn't explain why you are putting a bunch of functions in a Vector instead of simply writing
if x
call_foo()
elseif y
call_bar()
...
If you want this to be dynamic (and your functions are quick enough that dynamic dispatch overhead matters), then FunctionWrappers.jl (GitHub: yuyichao/FunctionWrappers.jl) is probably your best bet.
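A minimal sketch of the FunctionWrappers.jl approach (assuming the package is installed; it is not part of the standard library, and the function names g1/g2 here are illustrative):

```julia
using FunctionWrappers: FunctionWrapper

# A wrapper with a fixed Float64 -> Float64 signature is itself a concrete
# type, so a Vector of wrappers stays type-stable even with mixed functions.
const F64Func = FunctionWrapper{Float64,Tuple{Float64}}

g1(x::Float64) = x + 3.0
g2(x::Float64) = 2x

function sum_funcs(funcs::Vector{F64Func}, x::Float64)
    total = 0.0
    for f in funcs
        total += f(x)   # dynamic call, but the return type is known: Float64
    end
    return total
end

funcs = F64Func[F64Func(g1), F64Func(g2)]
sum_funcs(funcs, 0.5)   # 3.5 + 1.0 = 4.5
```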
The easiest and pretty fast, though not 100% performant, option is:
function foo(list_of_functions::Vector, arg)
out = 0.0
for f in list_of_functions
out += f(arg)::Float64 # annotate the return type
end
out
end
If that is not sufficient and FunctionWrappers.jl is not working, then some lispy recursion on your functions (stored in a tuple) might be able to do the trick.
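That tuple-recursion idea can be sketched as follows (apply_sum is a hypothetical helper name):

```julia
# Recursing on a tuple keeps every call type-stable: at each level the
# compiler sees the concrete type of first(fs), and the empty-tuple
# method terminates the recursion.
apply_sum(fs::Tuple{}, x) = 0.0
apply_sum(fs::Tuple, x) = first(fs)(x) + apply_sum(Base.tail(fs), x)

g1(x) = x + 3.0
g2(x) = 2x

apply_sum((g1, g2), 0.5)   # 3.5 + 1.0 = 4.5
```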
If there is any common pattern in the functions, I would suggest using a single function and parametrizing on, e.g., Val. For example,
function foo(::Val{:F1}, other, arguments)
...
end
function foo(::Val{:F2}, other, arguments)
...
end
If the other arguments have an irregular structure, pack them in a NamedTuple. Then just use Val{T}() to select. This could be returned by the function which decides the function to be called.
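A runnable version of that Val pattern (the method tags :F1/:F2 and the selector function are illustrative):

```julia
foo(::Val{:F1}, x) = x + 3.0
foo(::Val{:F2}, x) = 2x

# The routine that decides which function to use returns a Val instance;
# multiple dispatch then picks the matching foo method.
select(name::Symbol) = Val(name)

foo(select(:F1), 0.5)   # 3.5
foo(select(:F2), 0.5)   # 1.0
```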
(as others have noted, more information and an MWE would of course allow more specific suggestions)
function f1(x::Float64)
return x + 3; ##x = x+3
end
mutable struct MyType ## must be mutable so t1.x can be reassigned below
x::Float64
func::Function
end ## Default constructor is MyType(x::Float64, func::Function)
function main()
t1 = MyType(3., f1)
for i=1:1000000
t1.x += t1.func(i*0.01) #t1.x = t1.func(i*0.1)
end
end
Edit: there's still a bit of funny business with the x = reassignment inside f1.
Also, make sure the compiler doesn't optimize the loop away: 1 million iterations in 1.6 ns with main6() means the loop body was eliminated entirely.
A handy rule of thumb is that GHz is clock ticks per nanosecond, so 1.6 ns is only a few clock cycles.
You can test that for yourself using BenchmarkTools.jl's @btime. But the answer, I assume, is no: Julia still cannot infer which function is held by t1.func, so it has to box the values, and allocations will happen.
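A sketch of the usual fix: parametrize the struct on the stored function's own type (each Julia function has its own concrete type), so the field's type is concrete and calls through it are statically dispatched. The name TypedHolder is hypothetical, chosen to avoid clashing with MyType above:

```julia
mutable struct TypedHolder{F}
    x::Float64
    func::F   # concrete once F is known, unlike a ::Function field
end

h1(x::Float64) = x + 3.0

function main_typed()
    t1 = TypedHolder(3.0, h1)    # F == typeof(h1), a concrete type
    for i in 1:1_000_000
        t1.x = t1.func(i * 0.1)  # statically dispatched, no boxing
    end
    return t1.x
end

main_typed()
```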