That’s not true. Julia will always specialize functions on concrete types, so the annotation buys you no performance. In fact, restricting the signature like that is very ill-advised. You likely just want to do zca(o) = ... since there is no performance gain to restricting it, and this kind of duck-typing lets users find new, maybe unintended uses (e.g. maybe it works on a “non-traditional number” like an ApproxFun Fun, so maybe in this case you shouldn’t even require T<:Number and should instead leave it open!). Strict typing of functions is kind of a new-user trap for this reason: it’s just for throwing errors, but it has a feel that it might improve performance.
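For instance, here’s a minimal sketch (mymean is an illustrative stand-in, not the zca from the question) showing that the loosely-typed version still specializes and accepts more inputs:

mymean(o) = sum(o) / length(o)                          # loose: Julia still specializes on the concrete type of o at the call site
mymean_strict(o::Vector{Float64}) = sum(o) / length(o)  # strict: same speed, but fewer allowed inputs

mymean(rand(10))          # Vector{Float64}
mymean(1:10)              # also works on a range
mymean(rand(Float32, 5))  # and on Float32 arrays
# mymean_strict(1:10)     # would throw a MethodError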
On the other hand, strictly typing your type’s fields DOES lead to better performance. So:
mutable struct Test
    a
    b
end
is bad (check @code_warntype on something that uses the fields), while
mutable struct Test{A,B}
    a::A
    b::B
end
is good. That’s why there’s the mantra: strictly type your types, loosely type your functions.
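To make that @code_warntype check concrete, here’s a minimal sketch (Loose, Tight, and getsum are illustrative names, not from the question):

using InteractiveUtils  # for @code_warntype outside the REPL

mutable struct Loose
    a
    b
end

mutable struct Tight{A,B}
    a::A
    b::B
end

getsum(x) = x.a + x.b

@code_warntype getsum(Loose(1, 2.0))   # fields are Any, so the result is inferred as Any
@code_warntype getsum(Tight(1, 2.0))   # A = Int, B = Float64, so everything is inferred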
So then why the extra notation for type variables? Well, two reasons. The first is for matching types.
f(a::T, b::T) where {T<:Number}
is different from
f(a::Number,b::Number)
because it forces a and b to not only be Numbers, but the same subtype of Number. This restriction is for dispatch, not for performance. Lots of times in this case you’d have a different dispatch promote the types to something compatible, which would then call this “same number type” dispatch.
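As a sketch of that pattern (f here is an illustrative example, not code from the question):

f(a::T, b::T) where {T<:Number} = a * b        # only matches when both arguments have the same Number type
f(a::Number, b::Number) = f(promote(a, b)...)  # mixed types get promoted, then hit the method above

f(2, 3)     # Int, Int -> same-type method directly
f(2, 3.0)   # Int, Float64 -> promoted to (2.0, 3.0), then the same-type method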
Secondly, within this function we already know T. This can be handy. Of course, you can likely just use typeof(a) (and this will likely be a no-op, i.e. it will also be inferred at compile time so there’s no extra function evaluation cost), but having T already bound can still be convenient.
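For example, a sketch (g is an illustrative name):

g(a::T, b::T) where {T<:Number} = zeros(T, 3)   # T is in scope, e.g. to choose an element type
g(1.0, 2.0)   # -> 3-element Vector{Float64}
g(1, 2)       # -> 3-element Vector{Int}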
So, on a function, it’s about dispatch control, not performance.