Hi all, this is my first post. Sorry if there’s a similar topic, I couldn’t find it.
I am curious about the use case below.
I would like to define a function as such:
julia> function f(funs::Array{Function,1})
# do something for each function
end
f (generic function with 1 method)
But I am running into the issue below. The type Array{Function,1} is more specific than I had anticipated. Can anyone advise why this is the case? And additionally, is there a better way to code this to avoid it?
julia> t->t :: Function
(::#1) (generic function with 1 method)
julia> [t->t] :: Array{Function,1}
ERROR: TypeError: typeassert: expected Array{Function,1}, got Array{##3#4,1}
The problem is the same whether I use anonymous functions or built-in functions. I'm also running into the same problem with Interpolation objects (from the Interpolations package). All help is much appreciated.
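For what it's worth, a small sketch of one workaround: prefixing the array literal with the element type forces the array to be built as a `Vector{Function}`, regardless of the concrete closure types inside, so the typeassert passes.

```julia
# Typed array literal: the Function prefix forces eltype Function
# instead of the closures' own concrete types.
fs = Function[t -> t, abs]

fs isa Vector{Function}  # the typeassert from above would now pass
fs[1](3)                 # calling through the abstract element type still works
```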
My version info:
julia> versioninfo()
Julia Version 0.5.0
Commit 3c9d753 (2016-09-19 18:14 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin13.4.0)
CPU: Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
Although I suspect you’re hitting several distinct issues at once, the first thing to do is to make sure you understand what invariance means in this section of the manual: http://docs.julialang.org/en/release-0.5/manual/types/
Function is an abstract type and each function has its own concrete type. Because type parameters are invariant, an array of some concrete function type (like Array{typeof(t->t),1}) is not a subtype of Array{Function,1}, even though the element type is a subtype of Function (see johnmyleswhite's link).
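To see the invariance concretely, a small illustration (not from the thread itself):

```julia
# The element types are related by subtyping...
typeof(sin) <: Function                 # true

# ...but invariance means the array types are not:
Vector{typeof(sin)} <: Vector{Function} # false

# which is why an array literal like [t->t], inferred at the
# concrete closure type, fails a ::Array{Function,1} typeassert.
```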
You can do
julia> function f{T<:Function}(fs::Vector{T})
# do stuff
end
f (generic function with 1 method)
but it won’t have any better performance than leaving out the type parameter altogether.
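For later readers: a sketch of the same parametric method using the `where` form (equivalent to the `f{T}` syntax above, which was the Julia 0.5 spelling). The type parameter matches any vector whose element type is a subtype of Function, so both the concrete array from a literal and an explicit `Function[...]` array are accepted.

```julia
# Parametric method: T can be any subtype of Function, so this matches
# Array{typeof(t->t),1} as well as Array{Function,1}.
count_funs(fs::Vector{T}) where {T <: Function} = length(fs)

count_funs([t -> t])          # concrete eltype matches via the parameter
count_funs(Function[t -> t])  # abstract eltype Function matches too
```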
Also, you may be interested in the convenient vectorization (dot broadcast) syntax, which lets you write a function on a single element and easily apply it to each element of a vector.
julia> function printfunc(f::Function)
println(f)
end
printfunc (generic function with 1 method)
julia> printfunc.([sin,cos,tan]);
sin
cos
tan
Here’s a better use case example, as well as a solution that I came up with after reading that link. I am looking to define several functions based on parameters x.
Setup:
julia> x = rand(10);
julia> g(t,f :: Array{Function,1}) = begin
f[1](t)
# other calculations
end
g (generic function with 1 method)
Doesn’t work:
julia> f = map(x->(y->x+y),x);
julia> g(0.0,f);
ERROR: MethodError: no method matching g(::Float64, ::Array{##156#158{Float64},1})
Closest candidates are:
g(::Any, ::Array{Function,1}) at REPL[79]:1
Does work:
julia> f = Array{Function,1}(length(x));
julia> map!(x->(y->x+y),f,x);
julia> g(0.0,f)
0.4458074408668764
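An equivalent one-liner that avoids the separate preallocation plus map! step: a typed comprehension, which fixes the element type up front (a sketch, same idea as the working version above).

```julia
x = rand(10)

# Typed comprehension: the Function prefix makes the result a
# Vector{Function}, so it matches a ::Array{Function,1} annotation.
f = Function[y -> xi + y for xi in x]

f isa Vector{Function}  # true
f[1](0.0) == x[1]       # each closure captures its own xi
```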
The main reasons I like using parameter type checking are:
to utilize multiple dispatch
to catch errors as they happen (peace of mind)
That being said, in the above implementation are there no performance gains? Would love to hear suggestions on the Julia idiomatic way to do this.
Try using Function[t->t]. As mentioned above, Function is an abstract type, and [t->t] will construct an Array with the tightest possible element type; in your example, that is Array{typeof(t->t),1}.
Also, you might want to consider using a tuple of functions to store them, if that is at all possible (e.g. a smallish number of functions), since this would let the compiler know the individual function "type"s and therefore their code.
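A sketch of the tuple idea: a tuple records each function's concrete type in its own type parameters, so the compiler knows exactly which function sits in each slot.

```julia
# The tuple's type is Tuple{typeof(sin), typeof(cos), typeof(...)},
# so each call below dispatches to a statically known function.
fs = (sin, cos, t -> t^2)

# map over a tuple returns a tuple; no dynamic dispatch needed per element
vals = map(f -> f(0.0), fs)
```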
As it is, calling a function stored in the array requires dynamically determining the correct specialization to dispatch to, which (inside an inner loop, say) might be a bit slower than you’d expect. Keep in mind that these functions are not like standard C function pointers. That said, another way of obtaining speed is to call cfunction, store an array of such function pointers, and use ccall to access them, but this won’t work for “generic” code…
All of this is only important for speed - functionally, they are all similar.
You could put a type annotation on the call site – that should improve the inference situation at the cost of a fairly cheap runtime check on the return value.
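Something like the following sketch, applied to the g from earlier in the thread (the Float64 annotation is an assumption about what the stored closures return):

```julia
g(t, f::Array{Function,1}) = begin
    # Annotating the call site tells inference the result type;
    # the runtime check on the return value is cheap.
    y = f[1](t)::Float64
    # other calculations
    y + 1.0
end

g(0.0, Function[t -> t + 0.5])
```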