Function Parameter Speculation #17168

(continuing this from make Method objects callable (and inferred) as invoke · Issue #17168 · JuliaLang/julia · GitHub)

Haskell’s lack of “dynamic” multiple dispatch is because it’s a static language

There’s no formal definition of what characterizes a static language, but I don’t think this is necessarily part of it. Julia can be defined to fit into the “static language” category[1] (it can be compiled to native code without need for an interpreter) and has multiple dispatch. Java doesn’t have multiple dispatch and isn’t a static language (it compiles to bytecode and is JIT compiled on-the-fly once the dynamic types are discovered). You could say that Haskell’s lack of dynamic dispatch is because it is a static-dispatch-based language, but that’s again just tautological. Yep, still agree that comparison is beneficial, just trying to frame the questions and observations.

Looking through the Rust info there, the following RFC seems to be the most comprehensive discussion of the design limitations chosen there: Add RFC for a more flexible, revised trait matching system by nikomatsakis · Pull Request #48 · rust-lang/rfcs · GitHub. My understanding is that since Rust performs type-erasure, its rules ensure that attempting to perform dynamic dispatch is a compile error, such that only cases where static and dynamic dispatch overlap are accepted as valid programs. Julia uses that same computation internally for many optimizations, but doesn’t enforce it. Some have suggested similar ideas for Julia, although currently it’s just understood to be a recommendation (“when implementing methods, at least one type should be unique to the package to avoid method overwriting conflicts and limit ambiguities.”).

My conjecture is that multiple dynamic dispatch can be thought of as a Trait, and that doing so enables both functions and methods to be first-class types: abstract and concrete, respectively.

I think that’s essentially the concept behind the proposal for type interfaces or protocols (https://github.com/JuliaLang/julia/issues/6975#issuecomment-44502467). For example, we could say that providing an implementation of eltype and getindex is one protocol for an Indexable trait (sketched below). Whether this is formalism or convention (and even what it means to implement a function for a given argument type) can be a significant part of language design and target audience. So, what would you conclude from that conjecture?
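To make that concrete, here is a minimal sketch (with a made-up Repeated type) of what implementing such a protocol looks like today, purely by convention:

struct Repeated{T}
    value::T
    len::Int
end

# implementing these two generic functions is the informal "Indexable" protocol
Base.eltype(::Type{Repeated{T}}) where {T} = T
Base.getindex(r::Repeated, i::Int) =
    1 <= i <= r.len ? r.value : throw(BoundsError(r, i))

r = Repeated("x", 3)
eltype(r), r[2] # (String, "x"); generic code relying only on eltype/getindex now accepts Repeated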

[1] Since this question comes up from time to time, I’ve written about this in the past in a blog post on the subject of classifying static languages: http://juliacomputing.com/blog/2016/02/09/static-julia.html

4 Likes

First off, thanks for the awesome work on static analysis and AOT-compiled Julia! Looks like you’re working on the three-language problem: the expressiveness of Python, the speed of C, and the static analysis / safety of Haskell. :grin:

In pure math there is no concept of a dynamic value. Because Haskell is a pure functional language that uses Hindley–Milner type inference, there is no need for dynamic dispatch, as there are no dynamic values.[1] In the Haskell world-view, dynamic values come from the outside world (reading a file, user input, streaming data, a random number, etc.), hence the IO monad to bind dynamic values to static values. Following an IO, static multiple dispatch can take place. Note that IO has nothing to do with dispatching per se; this is just one of its many uses. This purity makes computations involving random numbers surprisingly involved and has a number of other undesirable tradeoffs for scientific computing.

Thanks for diving into Rust. Makes sense that its implementation is too restricted for Julia.

I love the idea of Interfaces. They also require functions to have types. I’m strongly in support of first-class function types in Julia for reasons described on the Github thread. Given that dynamic multiple dispatch seems to be a foundational idea of Julia, I assume this doesn’t change and ask the following questions:

  1. What is the type of a method?
  2. What is the type of a function?
  3. How do the types of methods and functions relate?
  4. How do multiple dispatch, traits, interfaces, and types all relate?
  5. Are the answers we come up with implementable and generally compatible with idiomatic Julia code?

Some observations:

  1. Given that methods are associated with argument types, the method’s type should be too. I hold it as self-evident that the return type should be included as well. So, following StefanKarpinski’s lead, this is f :: A → B in Julia

  2. This is a hard question. It seems related to type classes, traits, and/or interfaces. Alternatively, when viewed as a “type of types,” that suggests this is a kind, e.g. * in Haskell or Any in Julia.

  3. Methods have a dependency relationship on Function

  4. Strategically, we should look for a common abstraction. Haskell provides a nice example of how type classes give us all of these.

  5. I’ll defer this one!

[1] This is not strictly true, as there is a type Dynamic, but it is hardly used and not idiomatic. So it turns out there is, in fact, dynamic multiple dispatch in Haskell.

1 Like

Re 1,2,3: I think the more useful delineation is 4-ish.

You would need to write the type of a method as something like f :: A --> B rather than f :: A -> B. Note the long arrow since A -> B is syntax for an anonymous function mapping A to B.

Saying that in Haskell “there is no need for dynamic dispatch as there are no dynamic values” seems like a bit of a cop-out, especially combined with the statement that dynamic inputs are actually a thing (introduced by the external world via the IO monad) and that Haskell has to handle them in other ways which are “surprisingly involved”. Perhaps a more balanced way to put it would be: “the nature of strict static typing in Haskell makes dynamic dispatch impossible, so the language requires you to express dynamic behavior in other, often less convenient ways.” I do understand that this is a tradeoff: the inconvenience is the price one pays for the powerful ability to thoroughly check static correctness. Other static languages, however, do allow dynamic behaviors in more convenient ways while retaining a significant amount of static safety: object-oriented languages allow a modicum of dynamic behavior via virtual dispatch, F# has type providers, and Idris uses dependent typing to handle dynamic inputs more conveniently.

The nature of “static” vs “dynamic” language is not, imo, all that mysterious or hard to define: in static languages expressions have types, in dynamic languages values have types (I’ve elaborated more here). In a static language, type is a compile-time concept – by the time a program executes, types no longer make sense, except as pushed forward from compile-time. In a dynamic language, type is a run-time concept, and if you try to apply it to compile-time entities like expressions, as you would in a static language, you can only be partially successful, and taken to the extreme, one ends up with the unhelpful conclusion that there is only a single type. This is why type tags are ubiquitous in dynamic languages, even if they’re sometimes not materialized, whereas in static languages they’re uncommon, especially in fully static languages (although, in static o.o. languages, dynamic dispatch often necessitates some sort of “tag”, e.g. a vtable pointer).
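For instance, in Julia the type lives on the value and is queried at run time:

x = rand(Bool) ? 1 : "one" # this expression has no single static type
typeof(x)                  # Int64 or String, depending on the run-time value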

3 Likes

Thanks for the nuanced reply! I agree wholeheartedly with your thoughtful treatment of Haskell, static typing, and dependent typing. Julia has opened my eyes to the traditional false dichotomy between “static” and “dynamic” typing.

It strikes me that by adding method types, we begin to blur that distinction of expressions not having types in a dynamic language. I suppose we could say that there are only functions at compile time, not methods, and that functions “bind” to methods at runtime. But my understanding is that methods do have meaning at compile time, so an expression involving a single function with type tags on arguments does have the type of the relevant method.

By a similar argument, composing two functions, with complete type tags over the methods and arguments, is also an expression with a type. Sometimes, the compiler could infer this, and in other cases perhaps we allow the programmer to provide a type tag for an expression? This could allow for more static analysis without compromising Julia’s dynamic nature.
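Julia’s existing :: assertion on an expression is already a limited form of this: a run-time check that inference can also exploit.

result = (sin(1.0) + cos(2.0))::Float64 # documents and checks the type of the whole expression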

I can’t speak towards implementation feasibility. However, I conceive that expression type tagging is a natural implication of method types, that it is consistent with Julia’s theoretical approach, and would provide additional performance and safety that is today only available in “static” languages.

Learned a bit more about this. Multiple dispatch makes type theory tricky :stuck_out_tongue:. Here’s a slight tweak that treats Function as a special case where subtyping is used instead of typing.

Let Function be of type Union{} where each method has a type of the form (A,B,…) → C.

julia> f(x::Number) = x
Union{Number --> Number}
julia> f(x::String) = x
Union{Number --> Number, String --> String}
julia> f{T<:Union{Number,String}}(x::T, n::Number) --> T = x^n
Union{Number --> Number, String --> String,
      (Union{Number,String}, Number) --> Union{Number,String}}

That last example uses a return annotation, and would be improved by traits as the compiler could know that it’s really Union{(Number,Number)-->Number, (String,Number)-->String}.

Another function could say,

function mysort{T}(array::Array{T}, fmap::T-->T=identity,
                   lt::(T,T)-->Bool)-->Array{T}
    sort(fmap(array), lt=lt)
end

When mysort is called, the type checker would verify proper subtyping, fmap <: f, or else raise an error before dispatching. The implementation would have to be different from what is laid out here and use the triangular-dispatch mechanism, but hopefully I’ve communicated the idea clearly.
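In current Julia, the dispatchability half of that check (though not the return-type half, which is what the arrow tags would add) can be approximated with reflection. A sketch, with a hypothetical checked_mysort:

function checked_mysort(array::Array{T}, fmap=identity, lt=isless) where {T}
    # run-time stand-ins for the fmap::Array{T}-->Array{T} and lt::(T,T)-->Bool checks
    applicable(fmap, array) || error("fmap is not applicable to $(typeof(array))")
    isempty(array) || applicable(lt, array[1], array[1]) || error("lt is not a valid comparator for $T")
    sort(fmap(array), lt=lt)
end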

Yes, this is much more accurate. However, you’ve left out an entire, rather important class of functions: constructors. Fixing that is not too hard, however; we just need to account for the type of the called object also. It can also be a bit confusing to call them Function, since that is a somewhat ambiguous and overloaded term. I prefer to refer to this as the set of all methods in the system.

julia> f(x::Number) = x
Tuple{typeof(f), Number} --> Number
julia> f(x::String) = x
Tuple{typeof(f), String} --> String
julia> f{T<:Union{Number, String}}(x::T, n::Number) --> T = x^n
(Tuple{typeof(f), T, Number} --> T) where T <: Union{Number, String}
julia> methods()
Union{every signature in the system}

and would be improved by traits

Not really. Notice that the formulation used by Julia (translated into your syntax) for that last one can already be optimally computed and represented in the type system.

type checker would verify proper subtyping

No, this doesn’t actually logically follow. I mean, it sort of does, but it’s best not to skip over the intermediate step, where you’ve just provided a solution to the halting problem :stuck_out_tongue:

But still good food for thought. It would be nice to have a way to express that signature <: A implies return <: B. Or for a concrete example, to express that convert(::Type{T}, ::Any)::T, aka that (Tuple{typeof(convert), Type{T}, Any} --> T) where T <: Any.
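For what it’s worth, the closest expressible thing today is to query the inference oracle (an internal, version-specific API) rather than declare that relationship as a type:

Core.Inference.return_type(convert, Tuple{Type{Int}, Float64}) # Int, if inference can prove it; spelled Core.Compiler.return_type on Julia 1.0+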

1 Like

Thanks for taking the time to think this through! Not sure I understand this point (although it may just be my inexperience with Julia), as to my eye your type definition is infinitely recursive.

Following the documentation example, we see that constructors are a DataType:

type Foo
  bar
  baz
end

julia> typeof(Foo)
DataType

To your point, Parametric Constructors do resemble functions as they take a DataType as an argument and return a DataType. The type system does not currently make this distinction.

type Point{T}
         x::T
         y::T
       end

julia> typeof(Point)
DataType

In type theory, the kinds of Foo and Point differ: Foo is of kind * and Point of kind * -> *, a.k.a. a higher-kinded type. Hopefully we’ll see this distinction in a future version of Julia.

Sweet!

I believe the halting problem can be addressed with a type tag.

Let’s say I have

import Base.<

Seconds = Float64

type Experiment{T}
    value::T
    metadata::Dict{String,Any}
end

<(a::Experiment,b::Experiment) = a.value < b.value

# let's make it easy to modify the value while keeping metadata
liftE(f::Function) = x -> Experiment(f(x.value), x.metadata)

double = liftE(x -> x*2)

test = Experiment(2, Dict(
    "date" => DateTime(2017,4,8),
    "stimulus_type" => "light",
    "color" => "white",
    "duration" => 10.0::Seconds))

double_test = double(test)

experiments = [test,double_test]

and I want to do a sort of experiments after adding noise, using my_sort as defined earlier. Then I might do:

noise() = convert(Int64,round(rand()*4))
add_noise y = map(liftE(x -> x + noise()), y)
my_sort(experiments, add_noise)

Here we have the halting problem, as you suggest. The compiler does not know the type of add_noise and cannot verify it matches Array{T} --> Array{T}. But this could be solved with a tag on the method:

function add_noise(y::Array{Experiment{Int64}}) --> Array{Experiment{Int64}}
    map(liftE(x -> x + noise()), y)
end

Edit: typo in the earlier my_sort tag for fmap. It should be Array{T} --> Array{T}.

The compiler does not know the type of add_noise

This is a fairly trivial property to know, it’s just typeof(add_noise) :slight_smile:

cannot verify it matches Array{T} --> Array{T}

That’s partly because it doesn’t quite match that. If you want to use arrow notation, it’s best to write out the type in full before using them in proofs. So it’s better to say that it matches Tuple{typeof(add_noise), Array{T}} --> Array{T}.

The type system does not currently make this distinction

Perhaps splitting hairs here, but the type system (subtyping) does make this distinction, even though the typeof function does not. The actual type of Foo is Type{Foo} <: DataType <: Type.

But this could be solved with a tag on the method

I’m not sure I’m following this example. But it also wouldn’t actually be enough to show that there are cases where it doesn’t require solving the halting problem. There are already trivial examples of that (such as proving that f() = 1 implements Tuple{typeof(f)} --> Int). The challenge is in demonstrating or proving that it doesn’t need to. Another useful corollary of this is to find restrictions that make the problem feasible. For example, our inference algorithm has a number of configuration parameters (https://github.com/JuliaLang/julia/blob/a17b1ae0e03b59430f9573cce36841c445da61c4/base/inference.jl#L5-L35) to control the size of the problem space. I think these parameters can be roughly mapped onto the busy beaver problem (Busy beaver - Wikipedia), which gives a rough idea of the complexity class of problems that our current inference system will allow you to type check.

If I understand this example correctly, we could turn this into executable code in Julia by writing it this way:

function add_noise(y) # the type parameter is dropped here because it does not affect the answer
    RT = Array{eltype(y)} # the claimed return type
    Core.Inference.return_type(add_noise, Tuple{typeof(y)}) == RT || error("unable to verify return type will be correct")
    return map(liftE(x -> x + noise()), y)::RT
end

But I think this may reveal a circularity here (aka the halting problem). If this function returns, its return type is RT (because that is type-asserted at the end), but if it doesn’t return RT, then the return_type oracle would need to have been able to express that conditional relationship.

1 Like

I think I’m hung up on this point, as it’s quite different from the type systems I’ve encountered. Say we have an anonymous function, (x::Int, y::Int) -> 1 + x + y. Surely the type of this is Tuple{Int, Int}-->Int? Then the method f(x::Int, y::Int) = 1 + x + y would presumably have the same type?

I understand that multiple dispatch associates methods with a function object, but including the function object in the method type would seem to create problems for higher order functions that specify a type like (x::Int, y::Int)-->Int as logically the function object is irrelevant here.

The anonymous function still has an identity, it’s just harder to write since it wasn’t given a name in the source code. So we have to make up some sort of name expression:

{ "typeof((x::Int, y::Int) -> 1 + x + y)", "Int", "Int" } --> "Int"

would seem to create problems for higher order functions

Yep, that’s why it’s important that they be included in the model of the system, as otherwise this isn’t actually a discussion about Julia’s type, function, and dispatch systems. :slight_smile:

I suppose one could just write { "Any", "Int", "Int" } --> "Int", which would make the object-oriented programmers happy at the expense of some ugliness for functional programmers. But that’s just an aesthetic concern (assuming that it’s largely the FP crowd using HOFs, and that they would always have Any first in the tuple), so one could use syntactic sugar and render it a moot point.

For clarity, add_noise does run fine in Julia right now; I had just accidentally slipped in Haskell function syntax. It should be: add_noise(y) = map(liftE(x -> x + noise()), y)

If I understand what you’re saying, I think this can be solved by running the type rules “backwards” in the spirit of Hindley-Milner. This may sound like a contradiction as Julia is a dynamic language, and indeed it would be if done globally. But I conceive it is necessary if methods are to be first-class values in Julia.

So I wonder if it would be possible for Julia to do method tagging / “local type inference” along the lines of Rust rather than Haskell, where adding a tag to add_noise would run method dispatch in reverse for the purpose of compiling & type checking the method. This would not change type inference anywhere else. And certainly, there would be cases where inference could not solve ambiguous cases, which could be addressed by adding type tags.

If you write “Any” as the first element, that expression now seems to mean “all functions of two Int arguments return an Int”?

This post is a little long so I’ve split it into three parts. I’m not too familiar with any of this, so please correct me if something is wrong.

Method Types are Possible

With regard to the current conversation:
Can you write the type of a method independently of the function it’s a method of? I think the answer is no. A method takes its function identity as its first parameter so it must be in the type.

But that’s not necessarily a problem. These things are supposed to be covariant on the return side and contravariant on the argument side. That is, (A --> B) <: (C --> D) iff A >: C and B <: D. For example, (Number --> Int) <: (Int --> Number), since a method accepting any Number and returning an Int can be used anywhere a method from Int to Number is expected. Note that this is also the convention used in functional languages, including Scala and Haskell.

That means that the only sensible interpretation of method::({X} --> T) is:

  • A guarantee that the method applies to signature {X}; that is, if x isa X, then the call described by x should be able to dispatch to this method (I personally think this is a little silly, but see below.)
  • A guarantee that the return value of this method (if it returns at all) is an T

The first guarantee is a little vague, but from the perspective of the method itself, it’s just a subtyping check to see if the argument type is matched.

That means that as @jameson noted, {Any, Int, Int} --> Int is wrong for a method of an anonymous function (because this would imply the method were dispatchable for, e.g., "x"(1, 2), which is false). But this is not really a problem, because {Union{}, Int, Int} --> Int describes these methods fine. The only trouble theoretically is that there are no objects of type Tuple{Union{}, Int, Int}, so we’re relying on the fact that this is not considered just Union{}, but is instead some hypothetical type. It means our definition above is not strictly correct, and we have to add a special case for tuples with Union{} as a parameter. But overall there is no problem here.


Function Types are Technically Infeasible

A more significant problem is function types. These are inherently a lot harder than method types, and just taking the Union of method types does not really work. (It’s not at all what you want; a Union means this object is of one of these types, and it does not make sense for typeof(f) to be a Union of multiple types.) If

f(x::Int) = 0
f(x::String) = "1"

then we note that the function f, in general, satisfies that it can dispatch to f(::Int), and the return type is Int, so we might want f::({typeof(f), Int} --> Int). Furthermore it can dispatch to f(::String), and the return type is String, so we might want f::({typeof(f), String} --> String). This means that the subtype relationships we want to have are:

  • typeof(f) <: ({typeof(f), Int} --> Int)
  • typeof(f) <: ({typeof(f), String} --> String)

in other words, we want

  • typeof(f) <: typeintersect({typeof(f), Int} --> Int, {typeof(f), String} --> String)

and furthermore we want typeintersect to return a type that actually is contained by both of the supertypes. As far as I can tell, this is already infeasible. This would require an Intersect{F, G} alongside the currently existing Union{F, G}. The type lattice is not really built to support Intersect{F, G} as a type. This would also make a lot of problems like subtyping very hard or possibly undecidable.

Another possible approach is to use {F, G} (i.e. Tuple{F, G}) to simulate intersection. This also has a lot of problems, as you might imagine, and is not very helpful for the programmer.


Function Arrow Types are Unnecessary

Overall I’m very convinced that true arrow types are not possible, because the behavior of functions is too complex to describe in a type system. In practice, arrow types are useful for two reasons:

  • to allow compiler/user to check the correctness of an argument, or to dispatch on arguments
  • for performance reasons: so that a function pointer can be stored in a type stable way

For the second point we don’t need true arrow types at all. It’s totally fine to have an “arrow type” that is a callable type with a type assertion which wraps around a function pointer and ccalls it; I believe that @yuyichao has worked on this.
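A sketch of what that might look like (TypedFunc is a hypothetical name here, not the actual implementation referenced above):

# a callable wrapper whose type records the argument tuple and return type,
# so a stored function can be called in a type-stable way without true arrow types
struct TypedFunc{A<:Tuple,R}
    f::Function
end

function (tf::TypedFunc{A,R})(args...) where {A,R}
    args isa A || throw(ArgumentError("expected arguments of type $A"))
    return tf.f(args...)::R
end

double = TypedFunc{Tuple{Int},Int}(x -> 2x)
double(3) # 6, and the result is known to be an Int at the call site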

For the first point I’m unconvinced of the usefulness of arrow types. I am pretty sure that dispatching on arrow types will never work well, from both a technical and a usability standpoint; for the behavior of dispatch to depend on implementation details of inference seems crazy to me. Theoretically, this could be used to provide optimizations when inference can figure something out, but Core.Inference.return_type can already be used to do that in cases where it matters.

Furthermore, I’m not really sure what the point of arrow types is in a language where any function call will compile, and there is no real difference (from the programmer’s perspective) between a MethodError and an ArgumentError. In languages with arrow types, MethodErrors are found at compile time, whereas ArgumentErrors are found at runtime, so it is meaningful to use types to describe that certain things will pass compilation.

Function types in Scala and Haskell guarantee that if f::({typeof(f), Int} --> Int), then f(1)::Int will not fail compilation. (There is no guarantee that f(1)::Int will not have, for example, a runtime error.) If we use the same definition in Julia, then every object has this type: there is no object for which f(1)::Int will fail compilation. So if we go by the same standard, then {typeof(f), Int} --> Int is just Any.
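A quick illustration of that claim:

g = 42
try
    g(1)::Int # this call compiles without complaint...
catch err
    err isa MethodError # ...and only fails at run time (true)
end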

The argument is often made that MethodErrors are different than ArgumentErrors, even though they’re both runtime, because the former indicates a likely mistake by the programmer, whereas the latter may be more likely due to exceptional input as opposed to programmer error. But this is really a pointless distinction; if I define f(::Int) = error("don't use this on Ints"), why should all of a sudden f become isa {typeof(f), Int} --> Int? To me this does not make much sense. Sure, f accepts an Int argument and always returns an Int if it returns at all… but this is really the same as f not being defined to accept an Int argument at all.

The issue, really, is that the idea of a function “accepting” a certain signature is not well defined. Should f be considered to “accept” signature S if there’s no method error? What if there are sometimes ambiguities in certain exotic cases?

Overall I’m of the opinion that arrow types are an incomplete way of detecting whether the programmer is likely to have made an error. There is no replacement for good unit tests in catching typos. A linter can help too for missing coverage. But I don’t see type checking functions as being sensible here.

2 Likes

typeintersect to return a type that actually is contained by both of the supertypes

This is actually not too hard (the unique answer is Bottom / Union{}), but that’s also not very helpful. In Hindley-Milner, I think that is the desired answer (this would then terminate inference with a failure). But the property you’re describing sounds more like Union{F, G}?

But I don’t see type checking functions as being sensible here

I think you’re getting into religious territory there :slight_smile: . But I agree, and think you illustrated the problem well.

But overall there is no problem here.

I think your initial intuition about there being a practical problem here was right. As you noted, the tuple type there is uninhabited. So it seems to give the correct but fairly useless statement “the type of a method that will never be accessed/called may be assumed to be Int”. So it doesn’t have a theoretical problem, just a practical one.

1 Like

Sorry, it’s a typo. What’s needed is for typeof(f) <: typeintersect(A, B) <: A and typeof(f) <: typeintersect(A, B) <: B, which is harder than just returning Bottom :wink:, and probably very difficult in general.

I understand that Function cannot be a “first-class function” in the traditional use of the phrase, but hope that methods can be.

If true, this suggests that methods are not true values either, as f = x::Int -> 2*x and g = x::Int -> 2*x have different types, while a = f(2) and b = g(2) have the same type. Among other things, this would mean that all higher order functions, e.g. function composition, would be expressions rather than values, hurting performance and safety :frowning:

Edit: so what I’m pushing for is for methods to have a type outside of their function, or at least for some construct in the language to have one (perhaps lambdas are a good candidate, as multiple dispatch doesn’t make sense for them anyway?)

Maybe I’m misunderstanding your message—please correct me if so.

But higher order functions like function composition are values, as are all functions. The results of function composition are also values. In fact, the type of closures is well-defined and quite sensible. You can check, for example, that sin ∘ cos === sin ∘ cos. These are the same objects and have the same type.

You’re right that f = x::Int -> 2x and g = x::Int -> 2x have different types, but that’s inevitable, because they have different signatures. But that is just an implementation detail of ->. It’s possible that in a future version they will have the same type.

If you must have x -> 2x and x -> 3x have the same type, an easy implementation is just

multby(n) = x -> n*x

and then multby(2) and multby(3) will have the same type. This kind of optimization will probably not be automatically done in the future because it will require more data to be passed whenever such a function is called, and it prevents some of the optimizations possible by LLVM.
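For example:

julia> typeof(multby(2)) === typeof(multby(3))
true

julia> multby(2)(10)
20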

They may be values in the trivial sense, but I don’t believe we can currently write a law of composition on functions in Julia. Most other values in Julia form a group, if not something stronger.

Formally, a law of composition on a set S is a function of two variables, S × S → S.

Let a and b be arbitrary Int64 values. Then a + b is an Int64; e.g. a = 1; b = 2; c = a + b; typeof(a) == typeof(b); typeof(a) == typeof(c). So + is a law of composition on Int64.

Since typeof(sin) == typeof(cos) is false, we cannot satisfy the law of composition with ∘. Even if we use sin as both arguments, we see that typeof(sin) == typeof(sin ∘ sin) is false.

As it currently stands, all other (“first-class”) values are eagerly evaluated and allow for a law of composition, while higher order functions are lazily evaluated and cannot be a law of composition.

That would be a huge step in the right direction!

Not all sets are types. The fact that sin and sin ∘ sin have different types is important for optimization: because they have different signatures, a higher order function taking sin as an argument should generate different, more optimized code than one taking sin ∘ sin as an argument. There is really nothing stopping you from defining your own group S with any collection of functions (including the identity) and their compositions. This group will not have a type in the type system, but the goal of the type system is not to express all possible groups.
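For instance, a sketch (with a hypothetical Wrapped type) of such a hand-rolled composition law, trading per-function specialization for closure under ∘:

struct Wrapped
    f::Function
end
(w::Wrapped)(x) = w.f(x)

# ∘ restricted to Wrapped values is a law of composition: Wrapped × Wrapped → Wrapped
Base.:∘(a::Wrapped, b::Wrapped) = Wrapped(x -> a(b(x)))

ws, wc = Wrapped(sin), Wrapped(cos)
typeof(ws ∘ wc) === typeof(ws) # true: composition stays within the one Wrapped type
(ws ∘ wc)(1.0) == sin(cos(1.0)) # true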

There is no lazy evaluation involved with higher order functions. In fact, there is no difference between a higher order function and a first-order function, and there is no difference between these and arbitrary values — in Julia, both functions and plain values are first-class. Can you explain what you mean here?