Multiple dispatch: Value and type methods

I’ve found it very handy to sometimes define things like

foo(x::T) where {T} = foo(T)

function foo(::Type{T}) where {T}

I’ve heard suggestions that this is bad style, but to me it seems like a useful idiom for cases where a function depends only on the type of its argument.
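For example, something like this (`describe` and its methods are made up for illustration):

```julia
# Hypothetical example: `describe` depends only on the type of its argument,
# so the value method simply forwards to the type method.
describe(x::T) where {T} = describe(T)
describe(::Type{<:Integer}) = "an integer type"
describe(::Type{<:AbstractFloat}) = "a floating-point type"

describe(1)        # "an integer type"
describe(Float64)  # "a floating-point type"
```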

Am I missing something?


I also find it useful, especially for traits and it is also used in Base quite a bit. (eltype, Base.IndexStyle, …).
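For instance (outputs assume a 64-bit system):

```julia
# Base pairs value and type methods for several trait functions:
eltype([1, 2, 3])            # Int64
eltype(Vector{Int64})        # Int64
Base.IndexStyle([1, 2, 3])   # IndexLinear()
Base.IndexStyle(Vector{Int}) # IndexLinear()
```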

In Julia 1.7, however, this pattern is bad for inference when combined with recursion.


Interesting, I had something similar, with

function basemeasure_depth(μ::M) where {M}

I was getting

julia> @inferred basemeasure_depth(Normal())
ERROR: return type Static.StaticInt{3} does not match inferred return type Any
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] top-level scope
   @ REPL[118]:1

But the fix was easy:

@generated function basemeasure_depth(μ::M) where {M}

So now it’s

julia> @inferred basemeasure_depth(Normal())
static(3)

This is a type-level recursion – some types have basemeasure_depth(::Type{T}) == static(0), and the rest count the number of simplifications steps they take to get to this. Maybe something similar could work for your case?
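Here's a dependency-free sketch of that kind of type-level recursion, using `Val` in place of `Static.StaticInt` and made-up wrapper types:

```julia
# Hypothetical types: `Root` is its own base case; each `Wrapped` layer
# adds one "simplification" step.
struct Root end
struct Wrapped{T} end

unwrap(::Type{Wrapped{T}}) where {T} = T

depth(::Type{Root}) = Val(0)
depth(::Type{W}) where {W<:Wrapped} = _inc(depth(unwrap(W)))
_inc(::Val{N}) where {N} = Val(N + 1)

depth(Wrapped{Wrapped{Root}})  # Val{2}()
```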

This pattern mixes the value level with the type level. There are valid cases for that, but I see it as a very specific style of programming. If I don’t especially need to combine them, I would give the type method a different name from the value method. Otherwise the semantics of the function become unclear. If it is simply a convenience API so users can pass “an instance or its type”, I think that’s just bad design. A function should have a clear and simple API. Converting non-equivalent objects into one another is a mess.


A classic example where this doesn’t make sense:

julia> zero(1)
0

julia> zero(Int)
0

julia> zero([1])
1-element Vector{Int64}:
 0

julia> zero(Vector{Int64})
ERROR: MethodError: no method matching zero(::Type{Vector{Int64}})

There’s never a need; it’s more that it gets annoying to have different names for things that are conceptually the same. Of course it’s a bad idea to do this in cases where the semantics are unclear. But most programming idioms have exceptions; that doesn’t mean it’s universally a bad idiom.

I agree; in cases where I’d use this approach, it’s in part to make the API more clear and simple.

I don’t understand this at all. Lossily converting between inequivalent forms is the whole point of functions, which we use all over the place.

Sure, and there are plenty more. But there are also plenty of examples where the result of a function depends only on the type of the argument, and more importantly, where it makes semantic sense to consider something as a function of the type. In those cases, I don’t see a danger of adding a convenience function to forward calls on non-types to the type methods.


absolutely agree :wink:


Two objects of different types can be equivalent under some particular relation. For example, an Int32 and an Int64 aren’t generically equivalent, but it’s no trouble to evaluate x > 0 with x::Int32 and 0::Int64 by first promoting x to Int64. No problem there.

Could you give an example of some basic function where treating an object and its type both as valid inputs would make sense? Why not just pass the type if I only need the type? Or why not just distinguish between the two functions? For example,

handlevalue(x::T) where {T} = handletype(T)
handletype(::Type{T}) where {T} = 123

Note these are different functions, not a mixture of two functions into one. If I mixed them into one function, I can’t easily describe what that function means.

The only application I know of that treats 42 and Int the same is in constraint lattice programming, where one restricts a value x by declaring sets into which it must fall, such as x = int; x < 4; x = {2, 9}. But in this style, the value of x is actually not a number but a set of numbers, which the program may collapse from a singleton set into a number at the end of the process to select a satisfying value.


It’s common in Base to provide convenience methods for trait functions of the kind the OP describes.

I know some people have written functions this way, but I don’t see what it buys them. What’s wrong with different functions for different jobs?

valeltype(x) = eltype(typeof(x))
eltype(::Type{<:AbstractArray{E}}) where {E} = @isdefined(E) ? E : Any

Otherwise it feels like we’re back in numpy with automatic broadcasting where it’s unclear whether a call is lifted or not (a different lift in this case, but the same idea). In Julia we have a simpler semantics, and I think that’s good. In some functions here and there, Julia slips up and automatically lifts, but I don’t think we need more of that.

I like the distinction between “simple” and “easy” made in Rich Hickey’s famous talk “Simple Made Easy”.


If we followed that dictum, we wouldn’t be able to use duck typing or multiple dispatch anymore. Every method of a generic function is a “different function for a different job”. I’d hate to have to write code like this:

foo_int(x::Int) = 1
foo_float(x::Float64) = 2
foo_string(x::String) = 3

I think the eltype(x) = eltype(typeof(x)) method is a perfectly reasonable behavior for the generic eltype function applied to an object.
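For example, since Base defines the fallback eltype(x) = eltype(typeof(x)), a new type only needs the type method (MyFloats here is made up):

```julia
# Hypothetical container type; only the type-level method is defined.
struct MyFloats end
Base.eltype(::Type{MyFloats}) = Float64

eltype(MyFloats())  # Float64, via Base's eltype(x) = eltype(typeof(x)) fallback
```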


In MeasureTheory.jl, we have lots of cases where one measure is defined in terms of another, so logdensity_def(m, x) gives the log-density with respect to basemeasure(m). But then we often want the density with respect to the “root measure”, which you can get by iterating basemeasure to a fixed point.

Doing this dynamically is expensive, so I’ve added a basemeasure_depth that returns a StaticInt giving the number of iterations needed to reach that fixed point. It needs to be static, so we usually want to jump to the type level ASAP. These are conceptually close enough that having one name is just more convenient.
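As a value-level sketch of that fixed point (with made-up stand-ins for the measure types; the real package computes the depth statically instead):

```julia
# Hypothetical stand-ins for measure types (not MeasureTheory.jl's real types):
struct Lebesgue end
struct MyNormal end
basemeasure(::MyNormal) = Lebesgue()
basemeasure(::Lebesgue) = Lebesgue()   # the root measure is its own base measure

# Iterate `basemeasure` to its fixed point at run time; `basemeasure_depth`
# would report the number of steps this loop takes, as a StaticInt.
function root_measure(m)
    b = basemeasure(m)
    while b !== m
        m, b = b, basemeasure(b)
    end
    return m
end

root_measure(MyNormal())  # Lebesgue()
```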

There is a potential problem though - if you define

basemeasure_depth(m::M) where {M} = basemeasure_depth(M)

it’s easy to accidentally force the compiler into an infinite recursion. The resulting stack overflow is uglier than one you’d get at the value level, because it takes the compiler longer to figure out why it’s stuck.


The one job of a generic function can be abstract, but it should have a shared functional specification. Base.+ is addition: it has a coherent definition that is polymorphic across data types, with a shared specification for types that form an algebraic structure having associativity, commutativity, and identity.

In this case I would rename one of them:

tbasemeasure_depth(::Type{M}) where {M} = ...
basemeasure_depth(m::M) where {M} = tbasemeasure_depth(M)


basemeasure_depth(::Type{M}) where {M} = ...
vbasemeasure_depth(m::M) where {M} = basemeasure_depth(M)

It sounds like you’re limiting yourself to parametric polymorphism. What about ad hoc polymorphism? What counts as a “coherent definition”? I can be as vague as I want to with the definition of my generic functions. I could write a package like this:

"""
    foo(args...; kwargs...)

Do something.
"""
function foo end

# Every function in my package is a method of `foo`.

In other words, it’s a bit subjective how we decide to partition the space of functions into “methods of `bar`” and “not methods of `bar`”.