I apologize ahead of time for resurrecting an old thread here, but I think a brief summary of this discussion might be helpful for posterity. There is a lot of confusion in this thread over semantics and definitions, which likely makes it rather time-consuming and difficult to read for newcomers and non-CS folk. I am actually a computer scientist by training, so perhaps I can help a bit here.
Conclusion 1: Julia does have multiple dispatch, not just static overloading.
Put in the simplest terms, multiple dispatch is when functions/methods are called based on the runtime types of more than one argument. Contrary to what is claimed by @dataSurfer (OP), whether or not Julia’s compiler can infer these types during specialization is irrelevant. A simple example to illustrate this:
```julia
f(x::Real) = x + 1
f(2.0)
```
The static type of `x` in `f` is `Real`. The runtime type is `Float64`. In a language with static dispatch/overloading, `+` would be invoked with an argument of type `Real` (regardless of the runtime type of `x`), whereas in Julia, it will be invoked with the runtime type `Float64`.
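To see this in action, here is a minimal sketch (the helper names `describe` and `call_on_real` are made up for illustration):

```julia
describe(x::Real)    = "generic Real method"
describe(x::Float64) = "Float64 method"

# The annotation constrains which arguments are accepted, but dispatch
# inside the body still uses the *runtime* type of x:
call_on_real(x::Real) = describe(x)

call_on_real(2.0)   # → "Float64 method", not "generic Real method"
```

In a statically overloaded language, the call inside `call_on_real` would be resolved once, at compile time, against the declared type `Real`.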
As pointed out by @StefanKarpinski and @yuyichao, there is no such concept of a static type in Julia, since all types are resolved at runtime, either via a dynamic type check or via type inference during compilation.
Conclusion 2: “Compile-time” in Julia has a different meaning than in static languages.
It seems to me like one of the more significant sources of confusion in this discussion was the precise meaning of compile-time vs runtime in Julia. The meaning of compile-time in Julia is fundamentally different than compile-time in C/C++, Java, or other statically typed languages. In Julia, compilation happens lazily (or “just in time”) and functions are compiled based on the specific runtime types supplied by the caller. In statically typed languages, compilation happens entirely ahead of time (before anything is executed) and functions are compiled based on their static types as declared by the programmer in the source code, full stop. Any further inlining or clever optimizations by the compiler are completely irrelevant to the actual semantics of the type system.
So this argument:
…is not correct because it confuses the two meanings of “compile time”. In C/C++, Java, etc, functions are not compiled “when you call them” but rather ahead of time based on the type information available in the source code.
Example of multiple dispatch vs single dispatch and static dispatch
Taking inspiration from @Oscar_Smith’s example, let’s consider a very simple implementation of ordinary least squares:
```julia
ols(x, y) = inv(x'x) * (x'y)
```
This very naive implementation already works for any appropriately sized `x` and `y`, including the multivariate case where `x` is a matrix or the multiple-target case where `y` is a matrix. Furthermore, if `x` and `y` have special types (e.g. `Diagonal`, `Tridiagonal`, `SparseMatrixCSC`, etc.), we get the potentially optimized implementations of `'`, `*`, and `inv` for free.
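As a quick sanity check, here is a runnable sketch of the one-liner above on a small made-up dataset (values chosen so the fit is exact), reused unchanged with a structured matrix type:

```julia
using LinearAlgebra

ols(x, y) = inv(x'x) * (x'y)

# Dense case: y = 1 + x exactly, so the coefficients are recovered exactly.
x = [1.0 1.0; 1.0 2.0; 1.0 3.0]   # design matrix with an intercept column
y = [2.0, 3.0, 4.0]
β = ols(x, y)                      # ≈ [1.0, 1.0]

# Structured case: the very same method body now dispatches to the
# specialized Diagonal implementations of ', *, and inv.
d = Diagonal([1.0, 2.0, 3.0])
ols(d, y)
```

No `ols` method was written for `Diagonal`; the specialization falls out of multiple dispatch on the inner operations.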
How would we accomplish this with single dispatch and/or static overloading?
Well, first we would need type annotations on `x` and `y`, since the implicit type of an unannotated parameter in Julia is `Any`. With static overloading, the compiler (in a static language, not in Julia) would then try to find `'`, `*`, and `inv` methods for arguments of type `Any`, which of course should not generally be defined. So we would need to at least have:
```julia
ols(x::AbstractVector{T}, y::AbstractVector{T}) where {T} = ...
ols(x::AbstractMatrix{T}, y::AbstractVector{T}) where {T} = ...
ols(x::AbstractMatrix{T}, y::AbstractMatrix{T}) where {T} = ...
```
However, this is still not enough. In order to accomplish similar behavior with static overloading, we would need concrete overloads for each special type, e.g.:

```julia
ols(x::Tridiagonal, y::AbstractVector{T}) where {T} = ...
ols(x::SparseMatrixCSC{T}, y::SparseVector{T}) where {T} = ...
# etc. for all matrix/vector type combinations
```
This is not very scalable. Alternatively, if we wanted to leverage single dispatch, we would need to define the operations `inv`, `*`, and `'` as methods on each concrete matrix/vector subtype, and `*` specifically would need to be implemented for each combination of vector-vector, matrix-vector, and matrix-matrix; this would need to be repeated for each specialized implementation.
It gets even worse when you need to handle the case where `x` and `y` both have special (structured) types, since you need to handle `x*y` and `y*x` separately (not for `ols`, but in general). Single dispatch is by nature not commutative.
Julia’s multiple dispatch therefore allows for an enormous amount of (often very efficient) synergy between types and implementations which do not necessarily know anything about each other. This simply isn’t possible with static overloading, and is often prohibitively difficult with single dispatch. For those of you who teach Julia to students, I would definitely recommend emphasizing this point!
I hope that my necromancy will be useful to future Discourse lurkers who stumble upon this thread. If not, then I apologize! Happy coding!