RFC: Language Support for Traits — Yay or Nay?

Are you thinking of


That looks about right, yes! Good to know that built-up intuition about where our type system has holes leads to existing theory :smiley: Now integrating that theory (and, if possible, in a backwards compatible way) is the hard part… I still like to think Join is a good name for this, since in principle Join{Int, Float16} <: typejoin(Int, Float16) should hold as well, I think.

I usually think of “join” as corresponding to union and “meet” as corresponding to intersection. Julia’s typejoin may be using “join” in some other sense I’m not familiar with.
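For what it’s worth, today’s typejoin does compute a least upper bound on the existing lattice, which matches the order-theoretic sense of “join”; a quick REPL check:

```julia
# typejoin returns the narrowest common supertype (a least upper bound):
@assert typejoin(Int, Float16) == Real    # both are Reals, nothing narrower in common
@assert typejoin(Int, Int8) == Signed     # both are Signed integers
@assert typejoin(Int, String) == Any      # no common supertype below Any
```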

Yes, that is the usual terminology, and I agree that calling it Join is confusing in that sense - it wasn’t meant as a serious name suggestion either way; Meet is just as good (and delightfully concise) :slight_smile: The name is yak-shaveable.

1 Like

I’ve been chewing on this for a minute and woke up this morning with this exact thought! Glad I’m not crazy :grin:

I think that this captures the essence of what people sometimes clumsily complain about in Julia’s type system – “I want to inherit from two types.”

You don’t need a separate construct for traits, I think. Just a smart way to pass on behavior from multiple supertypes.

1 Like

These are essentially interfaces in Java; our abstract types already play much the same role.

In your case though, does order of the multiple subtyping matter?

I think you’re right. I’m trying to express both a position in an object type hierarchy and the possession of a set of traits; the cure for my dissatisfaction is a more concise way to express this, and the most concise expression is an intersection - a construct that Julia simply doesn’t have.

I think it could be backwards-compatible. Simply put, instead of x::X asserting typeof(x)<:X, and instead of having methods dispatch on typeof, imagine if we instead had x::X assert traitsof(x)<:X and methods dispatch on traitsof.

Maybe it could work like this:

Intersecting Types and Traits

First, introduce a new type. You called it Meet, but to keep the spirit of shaven yaks alive I’ll call it Intersection. In a tree-shaped type hierarchy, the notion of an intersection is useless; Intersection{Signed, Number} is Signed, and Intersection{String, Int} is Union{}. With such a type system, there’s simply no way that Intersection could be useful (hence we don’t have it). But let’s introduce it.
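The reductions described above can be confirmed with today’s typeintersect, which is exactly why a first-class Intersection buys nothing in a tree-shaped hierarchy:

```julia
# Intersecting comparable types collapses to the smaller one:
@assert typeintersect(Signed, Number) == Signed
# Intersecting unrelated concrete types collapses to the bottom:
@assert typeintersect(String, Int) == Union{}
```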

Next, introduce the notion of multiple hierarchies. Currently we have only one: the hierarchy of objects with a top of Any and a bottom of Union{}, and all types declared in all modules share this hierarchy. A trait such as Growable doesn’t belong on this hierarchy—growability isn’t a category of thing; it’s a property that belongs on its own plane of existence.

To express this, maybe we could have a “type module” to declare a completely separate hierarchy:

type module Growable   # hypothetical syntax
    struct Is end
    struct IsNot end
end

Growable.Is <: Growable.Any                 # true
Main.Any != Growable.Any                    # true
Intersection{Growable.Is, Growable.IsNot}   # Growable.Union{}

With orthogonality assumed, the intersection of a type in the Growable hierarchy with a type in the object hierarchy will be irreducible—i.e., Intersection{AbstractString, Growable.Is} cannot be reduced to Union{}, but instead must remain Intersection{AbstractString, Growable.Is}. In fact, every trait should exist in its own hierarchy, ideally orthogonal to every other trait, so that intersections with multiple traits can be formed.

Let’s finalize the specification of Intersection. Like Union and Tuple, Intersection types are covariant in their parameters, so that if T1<:T2 and MyTrait.TA<:MyTrait.TB, then Intersection{T1, MyTrait.TA} <: Intersection{T2, MyTrait.TB}. Also, like Union, order doesn’t matter.
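The analogous properties can be checked against Union and Tuple in current Julia, which is what covariance and order-insensitivity would mean for Intersection:

```julia
# Order doesn’t matter: each ordering is a subtype of the other.
@assert Union{Int, String} <: Union{String, Int}
@assert Union{String, Int} <: Union{Int, String}

# Covariance: widening each member widens the whole.
@assert Union{Int, Float16} <: Union{Integer, AbstractFloat}
@assert Tuple{Int, Float16} <: Tuple{Integer, AbstractFloat}
```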

Next, imagine if we have these definitions:

traitsof(x) = Intersection{typeof(x), traits(typeof(x))}
traits(::Type{T}) where T = Any 

As before, traits could be a special function, in that it dispatches on its argument’s typeof instead of traitsof.

For a type that doesn’t have any traits, you can see how traitsof(x) reduces to typeof(x), and therefore x::X is backward-compatible. And if x now becomes traitful, but X is still just a supertype, x isa X remains true and x::X still asserts correctly.

Then, to add traits to a type, we can perform a trick similar to the OP:

add_traits(T, Tr) = let Trs = Intersection{traits(T), Tr}
    eval(:( traits(::Type{var"#T"}) where var"#T"<:$T = $Trs ))
end

I think we have all the pieces we need now. With this, function dispatch could work like this:

# function call
f(a, b, c, d)

# method signature
f(::Any, ::B, ::CTrait.Tr, ::Intersection{D, DTrait.Tr})

# this method is chosen if:
traitsof(a) <: Any        &&
traitsof(b) <: B          && 
traitsof(c) <: CTrait.Tr  &&
traitsof(d) <: Intersection{D, DTrait.Tr}

# (and, of course, if this is the most specific method 
#  that satisfies these constraints.)

Here’s a question: what should typeof(Growable.Is) be? Should Growable.Is isa Type? If so, then maybe we can make this shorter with ∩(A::Type, B::Type) = Intersection{A, B}.

Are there any concerns with this? Would it work? Did I miss something? Can I choose better function names?

Taking a stab at method ambiguities

One thing I’ve noticed about method ambiguities is that, anecdotally, they don’t seem to need to exist. Ambiguities are introduced and later resolved, but if the methods had been written in a different order (i.e., starting with the method that resolves the ambiguity), the ambiguity would never have existed.

Maybe we can leverage Julia’s dynamic nature—which gets in the way sometimes by forcing method definitions to be sprinkled around type declarations (and vice versa) because of the linear evaluation order—and put it to use here. Namely, every time a method is declared, we could check if it creates any ambiguities with existing methods and throw an error if it does. That would alleviate concerns over method ambiguities by making them strictly impossible to begin with.
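A rough approximation of this check already exists today as Test.detect_ambiguities, which one could imagine running after every method definition (a sketch; `g` is a throwaway example function):

```julia
using Test

g(::Int, ::Any) = 1
g(::Any, ::Int) = 2   # introduces an ambiguity: g(1, 1) matches both equally well

# Under this proposal, the second definition above would throw.
# Today we can only detect the ambiguity after the fact:
@assert !isempty(detect_ambiguities(Main))

# Defining the tie-breaking method removes the ambiguity:
g(::Int, ::Int) = 3
@assert isempty(detect_ambiguities(Main))
@assert g(1, 1) == 3
```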

Is it generally true that ambiguities can be avoided throughout the course of declaring a function’s methods by properly ordering evaluation? Or am I wrong—would throwing errors on ambiguous method declarations cause some valid method signatures to be unreachable?

1 Like

That’s the same thing. In julia (and with multiple dispatch), while we declare subtype relations explicitly, in practice that is only really true if we can use the subtype in every place where we’d expect (at most) the supertype. I.e., given a T <: S and a function foo(::S), the declared relationship T <: S doesn’t really matter if foo(::S) then doesn’t succeed (while it’s of course nice to declare your intent, right now there is no explicit layer in the compiler checking for that & warning you ahead of time, aside from JET.jl). You can say that the type S “possesses” the trait “passable as the sole argument to foo” (which is really all that’s needed). In general, this definition of subtyping is called the Liskov Substitution Principle. From this, it follows naturally that declaring T <: Meet{S, R} means “T MUST be usable in any place where we’d expect either S or R (or get an ambiguity error, if e.g. both foo(::S) and foo(::R) but no foo(::T) exist).”

Trivially, if we have T <: S then Meet{T, S} simplifies just to T for all S for which T <: S holds. Meet{Union{}, T} always simplifies to Union{}, because Union{} <: T is always true. Additionally, Union{Meet{}, T} becomes just T again. (Try and plug in various types of the chain Union{} <: Int <: Integer <: Any and see what you get - in your head with Meet and in the REPL with Union).
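Plugging that chain into Union in the REPL confirms the dual identities (with typeintersect standing in for the meet):

```julia
# If T <: S, then Union{T, S} normalizes away the redundant member:
@assert Union{Int, Integer} == Integer
# Union{} is the identity element for Union...
@assert Union{Union{}, Int} == Int
# ...and the absorbing element for the meet:
@assert typeintersect(Union{}, Int) == Union{}
```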

Julia does not have a tree-shaped hierarchy - it’s a partially ordered lattice, with typejoin being the join operation and typeintersect being the meet operation. Thus, Any is the Top of that lattice (there are no “greater” elements in the partial order) and Union{} is the Bottom of that lattice (there are no “smaller” elements in the partial order).

That’s where the tricky cases come in - what about typejoin(Meet{}, Any)? Or typeintersect(Meet{}, Union{})? Should they return Any and Union{} respectively, delegating Meet to just an intermediary? I think so, but that’s part of what I haven’t worked out yet. Other parts are what should happen with typejoin(Meet{S, T}, Meet{T, R}) or typeintersect(Meet{S, T}, Meet{R, Q}) (I suspect it would be T and Union{} respectively - haven’t worked it out yet).

Similarly, since we can’t subtype concrete types, it doesn’t really make sense to form a Meet{Int, T} - that then ought to be disallowed as well, only allowing Meet to be used with abstract types in its parameters.

Having arrived at this, we can view Meet in a different light: it allows multiple abstract inheritance of julia behavior, because abstract types are already our interface definitions (possibly extended by concrete structs - concrete structs that directly subtype Any are a bit of a PITA, but luckily, strengthening the guarantees of the direct supertype is generally not a breaking change).

This is not necessary and introduces unnecessary complexity in the type lattice. Again, there is no hierarchy - it’s a partially ordered lattice. What you’re proposing would mean having multiple type lattices (i.e. multiple type systems) in one language, effectively splitting the type system in two. Now you have a meta-problem - partially ordering those lattices and resolving conflicts & ambiguities between them.

Yes, but that would mean requiring julia to typecheck function signatures ahead of time for all possible calls, which is undesirable in a dynamic language (in fact, that’s how statically typed languages work - you get a compile-time error when there’s an ambiguous call, instead of a runtime ambiguity error; in a language with a global eval, the method may have been added later, after all).

Definitely, but I think implementing a nice trait system first would be substantially easier. Rewriting Base to take advantage of it everywhere would definitely be breaking and would also be a lot harder. I’d save it for whenever there’s more resources and a desire for a 2.0 release.

I’d also like to mention my current favorite implementation of traits: WhereTraits.jl.

1 Like

Definitely, but I think implementing a nice trait system first would be substantially easier. Rewriting Base to take advantage of it everywhere would definitely be breaking and would also be a lot harder.

That, however, may be where the bar is now. (Not the breaking part - the art would be doing what can be done without being breaking - but in terms of “a lot of work”.) Now that we’re in the 1.x stable release series, there’s well-justified reluctance to add substantial new features without there being a “consumer”: how else do you know whether you’ve gotten the design right, and that the change yields real benefits without causing problems elsewhere? Surely you’ve noticed this in the discussion of https://github.com/JuliaLang/julia/pull/24990? This is a more refined version of “Julia is not at that stage of development anymore - deep changes are now harder.” That’s the price of Julia’s success!


I don’t understand WhereTraits.jl enough to know what’s going on here. Is this performance a fundamental architectural limitation, or is it just an artifact of its current state of implementation?

julia> using WhereTraits
       @traits f(a) where {isodd(a)} = (a+1)/2
       @traits f(a) where {!isodd(a)} = a/2

julia> @btime f(2)
       @btime f(3)
  25.911 ns (0 allocations: 0 bytes)
  25.402 ns (0 allocations: 0 bytes)

For comparison:

julia> g(a) = isodd(a) ? (a+1)/2 : a/2
       @btime g(2)
       @btime g(3)
  2.300 ns (0 allocations: 0 bytes)
  2.300 ns (0 allocations: 0 bytes)

I get the appeal of wanting to dispatch on arbitrary predicates, but I’m not yet sure if it’s worth it. Totally open to being talked into it though.

Hmm, at a glance that looks a bit similar to what refinement types would look like (something I think might be a nice fit with Julia).


Refinement types would allow for something like:

function somescore(val::Int) where {0 <= val <= 60 && iseven(val)}
  # stuff
end

function getval(list::Vector, index::Int) where {firstindex(list) <= index <= lastindex(list)}
  # stuff
end

What’s nice about a refinement type system is that it can be checked that a function will respect its contracts at compile time. I’d imagine that this functionality would go quite some way to helping ensure the correctness of programs too.
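Until something like that exists, the closest Julia equivalent is a run-time contract check at function entry (a sketch only - this is not refinement typing, since the check happens at run time rather than compile time, and `somescore_checked` is a made-up name):

```julia
function somescore_checked(val::Int)
    # Run-time stand-in for the refinement {0 <= val <= 60 && iseven(val)}:
    (0 <= val <= 60 && iseven(val)) ||
        throw(ArgumentError("val must be even and within 0:60"))
    return val ÷ 2   # placeholder body
end

@assert somescore_checked(10) == 5
```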

For further reading, I’d recommend giving https://arxiv.org/pdf/2010.07763.pdf a look.

Racket, Ruby, Scala, and Haskell have all had refinement types added on top, which gives me hope that this actually could be viable for Julia.


@schlichtanders would know best.

1 Like

As far as I know, none of those have the ambition of a HPC performance target (which julia definitely can do today). Adding refinement types means having to track the potential range of values for every single object, as well as what range of values a function would produce given an arbitrary input. This also makes dispatch much more complicated, because now you have to be extremely careful that e.g. the firstindex implementation on list is constant foldable (or at minimum, doesn’t throw errors on any input) to keep the compiler itself fast. All of that is a huge amount of work and is completely orthogonal to wishing to subtype more than one abstract type for dispatch reasons (“traits”), enforced by a constructor (e.g. declare being <: IsEven and only allow construction of an object if it actually iseven). IIRC, that is not a limitation for e.g. Haskell, due to being lazy in nature.

As Tim says, we’re not in the magical land of opportunity anymore, where we can just try new things that sound (and likely are!) nice - we have to work with what we’ve got, and carefully consider whether it’s possible to integrate new ideas, as well as how to do that within the current framework. One thing that I’ve definitely heard said is that we don’t want to be in a situation where we’re doing logic in the type system - at that point, we’d most likely have to add a SAT solver to inference just to make programs compile at all, which would 100% kill fast compile times (you thought TTFX was bad? :') )


Thank you for pinging,

WhereTraits translates the function call into three nested function calls

function outer_traits_func(arg1, arg2, ...)
    inner_conflict_resolution(Trait1(arg1, arg2, ...), Trait2(arg1, arg2, ...), ..., arg1, arg2, ...)
end

function inner_conflict_resolution(::Trait1_conflicting_value, ::Trait2_conflicting_value, ..., arg1, arg2, ...)
    inner_traits_func(Trait1_resolved_value, Trait2_resolved_value, ..., arg1, arg2, ...)
end

function inner_traits_func(::Trait1_value, ::Trait2_value, ..., arg1, arg2, ...)   # e.g. Trait1_value == Val{true}
    # your implementation
end

EDIT: you can investigate this yourself by running WhereTraits.@traits_show_implementation f, mind that it will look a bit more chaotic but follows the same idea.

This is in general the optimal way to go: having nice function barriers and relying on the compiler to resolve the correct dispatch. Still, there is indeed currently a 10-20 nanosecond overhead, which I haven’t found time to investigate yet.
There is an issue for it, please see Runtime overhead · Issue #22 · jolin-io/WhereTraits.jl · GitHub for updates. I plan to fix them before Juliacon.

If the issue is found and fixed, I would guess that in your example there could still be some overhead, as isodd is a dynamic function and hence forces a runtime dispatch, which can take extra time. On the other hand, static functions like isbits, which know their return value at compile time, would then be guaranteed optimal performance, as all method dispatches can be resolved at compile time.


So the TL;DR here @uniment is that this is slow because you have a runtime dispatch: the specific method you want to call can’t be inferred at compile time, and instead has to be evaluated at runtime. This is the same reason type instabilities make Julia code slow. If we were testing a compile-time predicate, e.g. isiterable(x::Array), you’d see no performance penalty.

It might be reasonable to make traits a special kind of function, that only accept types as arguments. The upside to this would be more consistent performance (you don’t have an extra footgun, like you do with type-unstable code). The downside would be flexibility, but honestly, I’m not sure there are many cases where this is necessary.

I brought up WhereTraits mostly for the very-intuitive syntax.

It doesn’t appear to be a runtime dispatch; in that particular test, it should’ve const-propped so I’m not sure what’s going on.

timing comparison - click to expand
julia> using WhereTraits
       @traits a(x) where isodd(x) = x+1
       @traits a(x) where !isodd(x) = x
       b(x) = isodd(x) ? x+1 : x
       c(x) = _c(Val(isodd(x)), x)
       _c(::Val{true}, x) = x+1
       _c(::Val{false}, x) = x
_c (generic function with 2 methods)

julia> @btime a(1)
       @btime a($1)
       @btime b(1)
       @btime b($1)
       @btime c(1) # const-prop saves it
       @btime c($1) # not sure why this is *slower* than a($1)
  25.703 ns (0 allocations: 0 bytes)
  25.402 ns (0 allocations: 0 bytes)
  1.000 ns (0 allocations: 0 bytes)
  1.800 ns (0 allocations: 0 bytes)
  1.100 ns (0 allocations: 0 bytes)
  53.781 ns (0 allocations: 0 bytes)

julia> @traits d(x) where isnothing(x) = 0
       @traits d(x) where !isnothing(x) = x
       e(x) = _e(Val(isnothing(x)), x)
       _e(::Val{true}, x) = 0
       _e(::Val{false}, x) = x
_e (generic function with 2 methods)

julia> @btime d(nothing)
       @btime d($nothing)
       @btime e(nothing)
       @btime e($nothing)
  24.949 ns (0 allocations: 0 bytes)
  25.253 ns (0 allocations: 0 bytes)
  1.000 ns (0 allocations: 0 bytes)
  1.100 ns (0 allocations: 0 bytes)

I’d argue such knowledge is impossible, but without diving too deep into epistemological weeds…

Perfect “ground truth” can’t be measured, but in cases like this I see basically two paths for an approximation: a) release a package and observe package popularity, and b) release language features marked “experimental,” introduced on a probationary basis for some assessment period. Which path is more appropriate will of course depend on the specifics of the problem and the specifics of the implementation; in the vast majority of cases the former approach is better, but occasionally the latter can be.

I see that PR mostly as a victim of the siren song of feature creep.

Fair. The distinction seemed pedantic—if we remove the bottom Union{}, then it’s a tree. But it does have the machinery and the bone structure, and if there were a way to make a type subtype multiple types then the lattice could fill out more richly. In some sense, it’s almost begging for multiple inheritance.

My thought was that it would reduce complexity. Namely, by specifying that each lattice is unrelated/orthogonal (there is no partial order between them), then each lattice could maintain the simplicity of single-inheritance. Multiple-inheritance would be expressed as subtyping strictly unrelated types from orthogonal hierarchies. Then, keeping track of how types interact would be straightforward: the Intersection of types from these unrelated hierarchies would always be irreducible.

It’s sort of like being a good/bad student, and being popular/unpopular on the dating scene: it might be easiest to express these as two independent groupings.

If there’s a nice way to express this on a single lattice, while keeping complexity controlled, I’m open to ideas. I’m not married to any of my ideas here.

I think you misunderstood the idea. The idea was simply to check the method signature against the function’s existing methods every time a method is declared, and if it creates an ambiguity, then throw an error and don’t declare the method. This will force the programmer to place method declarations in an order such that there is never an ambiguity.

A risk is that it could cause some valid signatures to be impossible to declare; I haven’t thought through all the possibilities to know if that could happen.

Even with such a feature though, it wouldn’t solve the problem that @tim.holy elaborates here. It would help me, as an individual author, avoid ambiguities in my own code (and my own method extensions), but loading other packages could cause breakage. This seems to be part of the blessing of the Holy trait pattern: its poor extensibility helps avoid breakage; by being less composable in the short run, it’s more composable in the long run.


So I had the tab open for Issue #5, and I can’t believe I hadn’t read it :sweat_smile:. It’s a roller coaster, defo recommended reading.

[Yet!] Another Idea:

  • What I like about WhereTraits.jl:

    • the ability to insert function calls into dispatch for yuge flexibility
    • b-e-a-utiful syntax
  • What I dislike:

    • predicate functions are allowed to cause runtime dispatch
    • boolean return values can’t leverage Julia’s native type specificity ranking system/ambiguity resolution techniques
    • calling predicates on instances in function signatures is incompatible with declaring standalone types—this makes function declaration semantics disjoint from type declarations

What if we embrace WhereTraits’ idea of inserting function calls into the where braces, but instead of giving them access to instances, give them access to typevars? Further, what if we disallow functions that return booleans, and only allow functions that return types? Staying in the type domain would allow traitful method declaration to maintain coherence with traitful type declaration, as well as leveraging the language’s type comparison machinery and never tempting people into runtime dispatch.

Imagine if this were a valid type specification:

T where T<:AbstractArray{E,N} where {E<:Float64,N} where {traits(T) >: Union{IndexCartesian, IsStrided}}

(Yup, I’m reverting to the idea of the OP—to express traits as a union :innocent:.)

I’m not sure what the internal representation would look like, but somehow it would store λ = T->traits(T) as something resembling a typevar, and the Union{...} as a lower bound. And calling <: or >: to compare this object against another type would cause λ to be called.

The method signatures of the OP would then look like this:

show(io::IO, ::T) where T where traits(T)>:HasDuckBill = print(io, "Probably a duck type...")
show(io::IO, ::T) where T<:Mammal where traits(T)>:HasDuckBill = print(io, "Maybe a platypus?")
show(io::IO, ::T) where T<:Mammal where traits(T)>:Union{HasDuckBill, LaysEggs} = print(io, "Probably a platypus.")
show(io::IO, ::Platypus) = print(io, "Definitely a platypus!")


foo(A::AbstractArray, args...) = … # default for arrays
foo(A::T, args...) where T<:AbstractArray where traits(T)>:IndexCartesian = … # trait specialization
foo(A::T, args...) where T<:AbstractArray where traits(T)>:IsStrided = … # trait specialization
foo(A::T, args...) where T<:AbstractArray where traits(T)>:Union{IndexCartesian, IsStrided} = … # deeper trait specialization
foo(A::T, args...) where T<:AbstractMatrix where traits(T)>:Union{IndexCartesian, IsStrided} = … # deeper type specialization

I have an idea for how the mechanics of dispatch could work with this, but I’ll save it for later.

Interestingly, this approach is also pretty well-aligned with using Holy traits:

foo(A::T, args...) where T<:AbstractArray where IndexStyle(T)<:IndexCartesian = …

As a result, transitioning from the Holy traits pattern to a more generic style which calls traits should be fairly smooth. And for functions without trait methods, there would be no change; the mechanics of non-traitful dispatch would be unchanged.
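For concreteness, here is the Holy-trait idiom being referred to, in runnable form (the names StrideStyle, stridestyle, and bar are made up for illustration; which array types are actually strided is beside the point here):

```julia
# Trait values are ordinary singleton types:
abstract type StrideStyle end
struct Strided    <: StrideStyle end
struct NotStrided <: StrideStyle end

# The trait function maps types to trait values; these methods
# are resolvable at compile time for concrete argument types.
stridestyle(::Type{<:Array}) = Strided()
stridestyle(::Type{<:AbstractArray}) = NotStrided()

# The entry point forwards to an implementation chosen by the trait:
bar(A::AbstractArray) = bar(stridestyle(typeof(A)), A)
bar(::Strided, A)     = "strided fast path"
bar(::NotStrided, A)  = "generic path"

@assert bar([1, 2, 3]) == "strided fast path"          # Vector{Int} <: Array
@assert bar(view([1 2; 3 4], :, 1)) == "generic path"  # SubArray is not <: Array
```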

Using the other example of the OP:

# function call
f(a, b, c, d)

# method signature
f(::A, ::B, ::C, ::TD) where {A,C,TD<:D} where {traits(C) >: CTr, traits(TD) >: DTr}

# this method is chosen if:
typeof(a) <: Any &&
typeof(b) <: B   &&
typeof(c) <: Any && traits(typeof(c)) >: CTr &&
typeof(d) <: D   && traits(typeof(d)) >: DTr

# (and, of course, if this is the most specific method 
#  that satisfies these constraints.)

And, to dispatch on the traits of a collection’s elements:

foo(A::AbstractArray{E}) where E<:ElType where traits(E)>:ElTraits = …

There are lots of details to work out, but it seems like this should do what we would want. Thoughts?

Thinking about this again, the potential for packages to create method ambiguities with each other has always existed and isn’t unique to traits. Problems are encountered, solutions are found and the ecosystem moves on. Is there good reason not to be optimistic here?