From reading the Performance Tips in the Julia manual, I understood that type declarations generally improve performance and that abstract types should be avoided. So let's say I have a function
function func(x, n::Integer)
    y = Real[]
    for i = 1:n
        push!(y, x^i)
    end
    return y
end
that I want to improve by replacing all abstract types:
function func(x, n::Integer)
    y = Float64[]
    for i = 1:n
        push!(y, x^i)
    end
    return y
end
Now I can imagine cases where the specific type Float64 might not be ideal, e.g. when the code is executed on a GPU that prefers 32-bit input. In that case, I would want to replace every hard-coded Float64 with Float32, etc. How can I avoid redoing this replacement for every use case? Would declaring my own type alias somewhere in the package be useful, so that I only need to change it in one place?
const MyFloat = Float64
# [...]
function func(x, n::Integer)
    y = MyFloat[]
    for i = 1:n
        push!(y, x^i)
    end
    return y
end
I would like to hear your ideas for such a situation.
Read the performance tips again — that’s not what they say.
In particular, abstract type declarations for function arguments do not hurt performance. See the section on argument-type declarations in the manual. The same goes for declaring return types or local-variable types.
Declaring the arguments as func(x::Real, n::Integer) has no effect on performance compared to func(x, n::Integer). It just makes your function less generic.
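To illustrate the "less generic" point (a hypothetical example, not from the original post): with an unannotated x the function works for anything that supports ^, whereas x::Real rules out, say, complex inputs, without being any faster for Float64.

func_loose(x, n::Integer) = [x^i for i in 1:n]          # accepts Float64, Complex, matrices, ...
func_strict(x::Real, n::Integer) = [x^i for i in 1:n]   # same speed for Float64, but restricted

func_loose(1.0 + 2.0im, 3)     # works fine
# func_strict(1.0 + 2.0im, 3)  # MethodError: ComplexF64 is not a Real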
You could also just use y = typeof(x)[] or y = Vector{typeof(x)}(undef, 0). You don't need to hard-code the element type. This is also more flexible because you can compute types, e.g. typeof(float(x)^2), or call functions like promote_type.
The key idea in generic programming is to infer (either implicitly or explicitly) the types in your function from the types of the arguments.
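Applied to the func from the question, that idea could look like the following sketch (typeof(x) is just one choice; you could equally use typeof(float(x)) or promote_type as mentioned above):

function func(x, n::Integer)
    y = typeof(x)[]        # element type follows the argument, nothing hard-coded
    for i = 1:n
        push!(y, x^i)
    end
    return y
end

func(2.0, 3)    # Vector{Float64}
func(2.0f0, 3)  # Vector{Float32}, e.g. for GPU-friendly 32-bit work
func(2, 3)      # Vector{Int}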
Then you’ll need to make provisions to specify it manually. Something like
julia> function func(::Type{T}, n::Integer) where T
           y = complex(T)[]
           for k = 1:n
               push!(y, exp(im * pi / k))
               # EDIT: see comments below regarding doing the calculation with the target type
           end
           return y
       end
func (generic function with 1 method)

julia> func(Float16, 8)
8-element Vector{ComplexF16}:
 Float16(-1.0) + Float16(0.0)im
 Float16(0.0) + Float16(1.0)im
 Float16(0.5) + Float16(0.866)im
 Float16(0.707) + Float16(0.707)im
 Float16(0.809) + Float16(0.588)im
 Float16(0.866) + Float16(0.5)im
 Float16(0.901) + Float16(0.4338)im
 Float16(0.924) + Float16(0.3826)im
(You could alternatively use function func(T::Type, n::Integer) for the definition, but there can sometimes be performance differences between these for complicated reasons including but not limited to this.)
Here, it sounds like you want to infer the type from the type of exp(im * pi / k) (which is more efficiently computed as cis(pi/k), by the way). One way would be:
y = typeof(cis(pi/one(n)))[]
Another way would be to use map, which works out the type for you:
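For instance, something along these lines (a sketch, reusing the cis(pi/k) computation from above):

y = map(k -> cis(pi / k), 1:n)      # map picks the element type (ComplexF64 here)
# or, to do the computation in the target precision T directly:
y = map(k -> cis(T(pi) / k), 1:n)   # element type Complex{T}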
BTW, a side remark: for performance you normally want to avoid operations that get gradually more expensive, which x^i does as i increases (even if the cost only grows logarithmically). Instead, you can do
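For instance, accumulating the power as a running product (a sketch of the idea):

xi = one(x)
for i = 1:n
    xi *= x          # x^i computed incrementally, constant work per iteration
    push!(y, xi)
end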
Very insightful, thanks. It seems like one can infer more than I thought.
But let us assume I encounter a situation where I absolutely cannot infer the precision; in that case I would go for @mikmoore's suggestion. Now suppose there is a large simulation where this precision has to be set many times. Would I use a solution like MyFloat from my original question, which I set at the beginning and use everywhere, or is there a magic command with which the precision can be set globally?
Why would it have to be set “many times”, as opposed to being passed to one entrypoint function and propagated from there to the rest of the computation?
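For instance, a sketch of that pattern (run_simulation and step! are hypothetical names, just for illustration):

# The working precision is chosen once at the entry point and propagates
# through every container and constant derived from it.
function run_simulation(::Type{T}, nsteps::Integer) where {T<:AbstractFloat}
    state = zeros(T, 100)     # storage picks up T automatically
    dt = T(1e-3)              # constants converted to the working precision
    for _ in 1:nsteps
        step!(state, dt)      # toy kernel below; works for any T
    end
    return state
end

step!(state, dt) = (state .+= dt .* sin.(state))   # stays in eltype(state)

run_simulation(Float32, 1000)   # everything downstream is Float32
run_simulation(Float64, 1000)   # or Float64, with no other code changes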