RFC: Language Support for Traits — Yay or Nay?

As far as I know, none of those have the ambition of an HPC performance target (which Julia can definitely hit today). Adding refinement types means having to track the potential range of values for every single object, as well as what range of values a function would produce given an arbitrary input. This also makes dispatch much more complicated, because now you have to be extremely careful that e.g. the firstindex implementation on a list is constant foldable (or at minimum, doesn’t throw errors on any input) to keep the compiler itself fast. All of that is a huge amount of work and is completely orthogonal to wishing to subtype more than one abstract type for dispatch reasons (“traits”), enforced by a constructor (e.g. declare being <: IsEven and only allow construction of an object if it actually iseven). IIRC, that is not a limitation for e.g. Haskell, due to its lazy nature.
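The “enforced by a constructor” part can at least be approximated in today’s Julia with a wrapper type whose inner constructor checks the invariant. A minimal sketch (the proposed `<: IsEven` subtyping itself is not expressible; `EvenInt` is a hypothetical name):

```julia
# Rough sketch of "construction only if iseven": not the proposed
# `<: IsEven` subtyping, just a wrapper whose inner constructor
# enforces the invariant.
struct EvenInt
    x::Int
    function EvenInt(x::Int)
        iseven(x) || throw(ArgumentError("$x is not even"))
        return new(x)
    end
end

EvenInt(4)    # fine
# EvenInt(3)  # throws ArgumentError
```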

As Tim says, we’re not in the magical land of opportunity anymore, where we can just try new things that sound (and likely are!) nice - we have to work with what we’ve got, and take careful consideration of whether it’s possible to integrate new ideas, as well as how to do that within the current framework. One thing I’ve definitely heard said is that we don’t want to be in a situation where we’re doing logic in the type system - at that point, we’d most likely have to add a SAT solver to inference just to make programs compile at all, which would 100% kill fast compile times (you thought TTFX was bad? :') )


Thank you for pinging,

WhereTraits translates the function call into three nested function calls:

function outer_traits_func(arg1, arg2, ...)
    inner_conflict_resolution(Trait1(arg1, arg2, ...), Trait2(arg1, arg2, ...), ..., arg1, arg2, ...)
end

function inner_conflict_resolution(::Trait1_conflicting_value, ::Trait2_conflicting_value, ..., arg1, arg2, ...)
    inner_traits_func(Trait1_resolved_value, Trait2_resolved_value, ..., arg1, arg2, ...)
end

function inner_traits_func(::Trait1_value, ::Trait2_value, ..., arg1, arg2, ...)  # e.g. Trait1_value = Val{true}
    # your implementation
end

EDIT: you can investigate this yourself by running WhereTraits.@traits_show_implementation f; mind that the output will look a bit more chaotic, but it follows the same idea.

This is in general the optimal way to go: having nice function barriers and relying on the compiler to resolve the correct dispatch. Still, there is currently a 10–20 nanosecond overhead, which I haven’t found time to investigate yet.
There is an issue for it; please see Runtime overhead · Issue #22 · jolin-io/WhereTraits.jl · GitHub for updates. I plan to fix it before JuliaCon.

If the issue is found and fixed, I would guess that in your example there could still be an overhead, as isodd is a dynamic function and hence forces a runtime dispatch, which can need extra time. On the other hand, static functions like isbits, whose return value is known at compile time, would then be guaranteed optimal performance, as all method dispatches can be resolved at compile time.


So the TL;DR here @uniment is that this is slow because you have a runtime dispatch: the specific method you want to call can’t be inferred at compile time, and instead has to be resolved at runtime. This is the same reason type instabilities make Julia code slow. If we were testing a compile-time predicate, e.g. isiterable(x::Array), you’d see no performance penalty.
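To illustrate the distinction (a sketch; `g`/`_g` are hypothetical names): a predicate of the *type*, such as `isbitstype(T)`, is a compile-time constant, so the `Val` round-trip folds away, while a predicate of the *value*, like `isodd(x)`, cannot be resolved until runtime.

```julia
# Predicate on the type: isbitstype(T) is known at compile time,
# so this dispatch is resolved statically (no runtime penalty).
g(x::T) where {T} = _g(Val(isbitstype(T)), x)
_g(::Val{true}, x)  = :bits      # e.g. g(1)
_g(::Val{false}, x) = :not_bits  # e.g. g("hello")
```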

It might be reasonable to make traits a special kind of function that only accepts types as arguments. The upside would be more consistent performance (you don’t have an extra footgun, like you do with type-unstable code). The downside would be reduced flexibility, but honestly, I’m not sure there are many cases where that flexibility is necessary.

I brought up WhereTraits mostly for the very-intuitive syntax.

It doesn’t appear to be a runtime dispatch; in that particular test, it should’ve const-propped so I’m not sure what’s going on.

timing comparison:
julia> using WhereTraits
       @traits a(x) where isodd(x) = x+1
       @traits a(x) where !isodd(x) = x
       b(x) = isodd(x) ? x+1 : x
       c(x) = _c(Val(isodd(x)), x)
       _c(::Val{true}, x) = x+1
       _c(::Val{false}, x) = x
_c (generic function with 2 methods)

julia> @btime a(1)
       @btime a($1)
       @btime b(1)
       @btime b($1)
       @btime c(1) # const-prop saves it
       @btime c($1) # not sure why this is *slower* than a($1)
  25.703 ns (0 allocations: 0 bytes)
  25.402 ns (0 allocations: 0 bytes)
  1.000 ns (0 allocations: 0 bytes)
  1.800 ns (0 allocations: 0 bytes)
  1.100 ns (0 allocations: 0 bytes)
  53.781 ns (0 allocations: 0 bytes)

julia> @traits d(x) where isnothing(x) = 0
       @traits d(x) where !isnothing(x) = x
       e(x) = _e(Val(isnothing(x)), x)
       _e(::Val{true}, x) = 0
       _e(::Val{false}, x) = x
_e (generic function with 2 methods)

julia> @btime d(nothing)
       @btime d($nothing)
       @btime e(nothing)
       @btime e($nothing)
  24.949 ns (0 allocations: 0 bytes)
  25.253 ns (0 allocations: 0 bytes)
  1.000 ns (0 allocations: 0 bytes)
  1.100 ns (0 allocations: 0 bytes)

I’d argue such knowledge is impossible, but without diving too deep into epistemological weeds…

Perfect “ground truth” can’t be measured, but in cases like this I see basically two paths for an approximation: a) release a package and observe package popularity, and b) release language features marked “experimental,” introduced on a probationary basis for some assessment period. Which path is more appropriate will of course depend on the specifics of the problem and the specifics of the implementation; in the vast majority of cases the former approach is better, but occasionally the latter can be.

I see that PR mostly as a victim of the siren song of feature creep.

Fair. The distinction seemed pedantic—if we remove the bottom Union{}, then it’s a tree. But it does have the machinery and the bone structure, and if there were a way to make a type subtype multiple types then the lattice could fill out more richly. In some sense, it’s almost begging for multiple inheritance.

My thought was that it would reduce complexity. Namely, by specifying that each lattice is unrelated/orthogonal (there is no partial order between them), then each lattice could maintain the simplicity of single-inheritance. Multiple-inheritance would be expressed as subtyping strictly unrelated types from orthogonal hierarchies. Then, keeping track of how types interact would be straightforward: the Intersection of types from these unrelated hierarchies would always be irreducible.

It’s sort of like being a good/bad student, and being popular/unpopular on the dating scene: it might be easiest to express these as two independent groupings.
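That grouping can be sketched with two ordinary single-inheritance hierarchies that have no partial order between them (all names hypothetical):

```julia
# Two orthogonal single-inheritance hierarchies: no subtype relation
# holds between members of different hierarchies, so combining them
# would be irreducible.
abstract type StudentQuality end
abstract type GoodStudent <: StudentQuality end

abstract type DatingPopularity end
abstract type Popular <: DatingPopularity end

GoodStudent <: Popular   # false
Popular <: GoodStudent   # false
```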

If there’s a nice way to express this on a single lattice, while keeping complexity controlled, I’m open to ideas. I’m not married to any of my ideas here.

I think you misunderstood the idea. The idea was simply to check the method signature against the function’s existing methods every time a method is declared, and if it creates an ambiguity, then throw an error and don’t declare the method. This will force the programmer to place method declarations in an order such that there is never an ambiguity.
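The check described here can be approximated after the fact with `Test.detect_ambiguities`, which reports ambiguous method pairs in a module; a sketch (the proposal would run an equivalent check eagerly, at declaration time):

```julia
using Test

# Two methods that are ambiguous for h(1, 2): neither signature is
# more specific than the other.
module AmbigDemo
h(::Int, ::Any) = 1
h(::Any, ::Int) = 2
end

detect_ambiguities(AmbigDemo)   # non-empty: reports the ambiguous pair
```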

A risk is that it could cause some valid signatures to be impossible to declare; I haven’t thought through all the possibilities to know if that could happen.

Even with such a feature though, it wouldn’t solve the problem that @tim.holy elaborates here. It would help me, as an individual author, avoid ambiguities in my own code (and my own method extensions), but loading other packages could cause breakage. This seems to be part of the blessing of the Holy trait pattern: its poor extensibility helps avoid breakage; by being less composable in the short run, it’s more composable in the long run.


So I had the tab open for Issue #5, and I can’t believe I hadn’t read it :sweat_smile:. It’s a roller coaster, defo recommended reading.

[Yet!] Another Idea:

  • What I like about WhereTraits.jl:

    • the ability to insert function calls into dispatch for yuge flexibility
    • b-e-a-utiful syntax
  • What I dislike:

    • predicate functions are allowed to cause runtime dispatch
    • boolean return values can’t leverage Julia’s native type specificity ranking system/ambiguity resolution techniques
    • calling predicates on instances in function signatures is incompatible with declaring standalone types—this makes function declaration semantics disjoint from type declarations

What if we embrace WhereTraits’ idea of inserting function calls into the where braces, but instead of giving them access to instances, give them access to typevars? Further, what if we disallow functions that return booleans, and only allow functions that return types? Staying in the type domain would allow traitful method declaration to maintain coherence with traitful type declaration, as well as leveraging the language’s type comparison machinery and never tempting people into runtime dispatch.

Imagine if this were a valid type specification:

T where T<:AbstractArray{E,N} where {E<:Float64,N} where {traits(T) >: Union{IndexCartesian, IsStrided}}

(Yup, I’m reverting to the idea of the OP—to express traits as a union :innocent:.)

I’m not sure what the internal representation would look like, but somehow it would store λ = T->traits(T) as something resembling a typevar, and the Union{...} as a lower bound. And calling <: or >: to compare this object against another type would cause λ to be called.
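A userspace approximation of the `traits` function itself is already possible (no new typevar machinery; names taken from the OP’s example): `traits` maps a type to a `Union` of trait types, and `>:` does the membership check, since a `Union` is a supertype of each of its members.

```julia
abstract type HasDuckBill end
abstract type LaysEggs end

struct Platypus end

traits(::Type) = Union{}                            # default: no traits
traits(::Type{Platypus}) = Union{HasDuckBill, LaysEggs}

traits(Platypus) >: HasDuckBill                     # true
traits(Platypus) >: Union{HasDuckBill, LaysEggs}    # true
traits(Int) >: HasDuckBill                          # false
```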

The method signatures of the OP would then look like this:

show(io::IO, ::T) where T where traits(T)>:HasDuckBill = print(io, "Probably a duck type...")
show(io::IO, ::T) where T<:Mammal where traits(T)>:HasDuckBill = print(io, "Maybe a platypus?")
show(io::IO, ::T) where T<:Mammal where traits(T)>:Union{HasDuckBill, LaysEggs} = print(io, "Probably a platypus.")
show(io::IO, ::Platypus) = print(io, "Definitely a platypus!")


foo(A::AbstractArray, args...) = … # default for arrays
foo(A::T, args...) where T<:AbstractArray where traits(T)>:IndexCartesian = … # trait specialization
foo(A::T, args...) where T<:AbstractArray where traits(T)>:IsStrided = … # trait specialization
foo(A::T, args...) where T<:AbstractArray where traits(T)>:Union{IndexCartesian, IsStrided} = … # deeper trait specialization
foo(A::T, args...) where T<:AbstractMatrix where traits(T)>:Union{IndexCartesian, IsStrided} = … # deeper type specialization

I have an idea for how the mechanics of dispatch could work with this, but I’ll save it for later.

Interestingly, this approach is also pretty well-aligned with using Holy traits:

foo(A::T, args...) where T<:AbstractArray where IndexStyle(T)<:IndexCartesian = …

As a result, transitioning from the Holy traits pattern to a more generic style which calls traits should be fairly smooth. And for functions without trait methods, there would be no change; the mechanics of non-traitful dispatch would be unchanged.
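For comparison, a sketch of the same specialization in today’s Holy-trait spelling, using `Base.IndexStyle` (which returns `IndexLinear()` or `IndexCartesian()`); the `foo` methods and return values are hypothetical:

```julia
# Today's Holy-trait pattern: an outer method forwards to a
# trait-taking layer, which dispatches on the value returned
# by IndexStyle.
foo(A::AbstractArray, args...) = foo(IndexStyle(A), A, args...)
foo(::IndexCartesian, A, args...) = :cartesian_path  # trait specialization
foo(::IndexLinear,    A, args...) = :linear_path     # default/fallback

foo([1, 2, 3])                    # Vector has IndexLinear()
foo(view(rand(3, 3), 1:2, 1:2))   # non-contiguous view has IndexCartesian()
```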

Using the other example of the OP:

# function call
f(a, b, c, d)

# method signature
f(::A, ::B, ::C, ::TD) where {A,C,TD<:D} where {traits(C) >: CTr, traits(TD) >: DTr}

# this method is chosen if:
typeof(a) <: Any &&
typeof(b) <: B   &&
typeof(c) <: Any && traits(typeof(c)) >: CTr &&
typeof(d) <: D   && traits(typeof(d)) >: DTr

# (and, of course, if this is the most specific method 
#  that satisfies these constraints.)

And, to dispatch on the traits of a collection’s elements:

foo(A::AbstractArray{E}) where E<:ElType where traits(E)>:ElTraits = …

There are lots of details to work out, but it seems like this should do what we would want. Thoughts?

Thinking about this again, the potential for packages to create method ambiguities with each other has always existed and isn’t unique to traits. Problems are encountered, solutions are found, and the ecosystem moves on. Is there good reason not to be optimistic here?