Return type not matching input type

Today I tried something like this:

data = rand(UInt8, 100, 100)
data_mod = mod.(data, 5)

I noticed that my input type is UInt8 and the return type is Int64.
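The widening is easy to check directly (a minimal sketch; the result element type comes from promoting `UInt8` with the `Int64` literal `5`):

```julia
# mod.(data, 5) mixes UInt8 elements with the Int64 literal 5,
# so each element is promoted and the broadcast result is an Int64 matrix.
data = rand(UInt8, 100, 100)
data_mod = mod.(data, 5)

eltype(data)      # UInt8
eltype(data_mod)  # Int64
```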

Then I noticed this:

function a()::UInt8
    # (body elided)
end

function a()::UInt16
    # (body elided)
end

a (generic function with 1 method)
and verified here:


# 1 method for generic function a:
a() in Main at In[68]:4

I don’t understand why the input type determines the implementation that’s used (for multiple dispatch) but the output type doesn’t seem to matter. This seems like it could cause problems when chaining function outputs to inputs (i.e. a(b())). It also seems like this will cause a lot of needless calls to convert().

This seems so obvious it can’t be a bug, this has to be an intentional choice… but why??

What’s happening here is that 5 is an Int64, and mod(UInt8(1), -4) == -3, so the output type of mod(::Unsigned,::Signed) has to be a signed type to preserve type stability.
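A quick way to see why the signed divisor forces the widening (values chosen only for illustration):

```julia
# mod with a signed divisor can produce a negative result,
# which no unsigned type can represent, so the result must be signed:
mod(UInt8(1), -4)        # -3, an Int64

# with an unsigned divisor the result stays unsigned:
mod(UInt8(1), UInt8(4))  # 0x01, a UInt8
```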


In your example, when you write a() in your code, how is Julia supposed to know which return type you really want? The ::UInt16 type annotation only leads to a convert call at the end of the annotated function. Since the input types are the same, Julia treats the second a() as a replacement for the first one: the return type is not part of dispatch, so Julia could never decide later which version you want. (The vast majority of Julia code has quite complex inferred return types, which would lead to a lot of noise if you had to write them in calling code as well.)
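That the annotation is just a trailing convert can be seen in isolation (a small sketch; f and g are hypothetical names):

```julia
# The ::UInt8 annotation converts the returned value on the way out;
# it plays no role in method dispatch.
f()::UInt8 = 8      # f() returns 0x08, not the Int64 8
g()::UInt8 = 300    # calling g() throws InexactError: 300 does not fit in UInt8
```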


It could error for ambiguity, or it could infer which one you mean based on the context, e.g. x::UInt8; x = a() could pick the UInt8 one.

The former would be very annoying, considering the number of untyped local variables in regular code, and the latter would be very restrictive, since you’re effectively limiting what you accept from called code, which is not very extensible with new meanings of a. I don’t think this would be worth it.


An alternative to achieve that might be making the intended output a Ref input, i.e.:

julia> a!(x::Ref{UInt8}) = (x[] = 0x08)
a! (generic function with 1 method)

julia> a!(x::Ref{UInt16}) = (x[] = 0x0010)
a! (generic function with 2 methods)

julia> x = Ref(0x00)
Base.RefValue{UInt8}(0x00)

julia> a!(x)
0x08

julia> x[]
0x08

julia> y = Ref(0x0000)
Base.RefValue{UInt16}(0x0000)

julia> a!(y)
0x0010

julia> y[]
0x0010

(EDIT: There was a silly mistake in my first code: I wrote 0x0016 where 0x0010 was intended, but it does not change the behavior)


More idiomatic, accept a type argument:

a(::Type{UInt8}) = 0x08

julia> a(UInt8)
0x08

Dispatch on returned type would be a pretty intense change to julia’s semantics.
(But pretty cool)
And introducing a new error for that particular case would be breaking.

Always worth a read of “Julia is not at that stage of development anymore.”

The bottom line is that the general feel of how Julia works is done. Finito. Finished.

Maybe then it is not so weird that this is out of consideration for Julia 2.0.
Or maybe it is.

In the first case, the sought result could have been obtained with: data_mod = mod.(data, UInt8(5))
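This keeps everything in UInt8, since mod of two UInt8 values stays UInt8 (a quick check):

```julia
# With a UInt8 divisor there is no promotion, so the element type is preserved.
data = rand(UInt8, 100, 100)
data_mod = mod.(data, UInt8(5))
eltype(data_mod)  # UInt8
```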

Just for fun, I have written this module with a couple of macros to make it look as if we were dispatching on the type of the assigned variable, as the OP was looking for:

module DispatchOnAssignType

struct ReturnType{T} end

istyped(expr::Expr) = (expr.head == :(::))
istyped(::Any) = false

macro doat_function(expr)
    if expr.head == :function
        fdef = first(expr.args)
        if istyped(fdef)
            T = fdef.args[2]
            rt = :(::DispatchOnAssignType.ReturnType{$T})
            fdef = fdef.args[1]
            insert!(fdef.args, 2, rt)
        end
    end
    return esc(expr)
end

macro doat(expr)
    if expr.head == :(=)
        lhs, rhs = expr.args
        if rhs.head == :call
            if istyped(lhs)
                T = lhs.args[2]
                rt = :(DispatchOnAssignType.ReturnType{$T}())
            else
                rt = :(DispatchOnAssignType.ReturnType{typeof($lhs)}())
            end
            insert!(rhs.args, 2, rt)
            expr.args[2] = rhs
        end
    end
    return esc(expr)
end

end # module

This allows funny things like this:

DOAT = DispatchOnAssignType

DOAT.@doat_function function a()::UInt8
    return 0x08
end

DOAT.@doat_function function a()::UInt16
    return 0x0010
end

julia> methods(a)
# 2 methods for generic function "a":
[1] a(::Main.DispatchOnAssignType.ReturnType{UInt8}) in Main at REPL[2]:1
[2] a(::Main.DispatchOnAssignType.ReturnType{UInt16}) in Main at REPL[3]:1

julia> x = 0x00
0x00

julia> y = 0x0000
0x0000

julia> DOAT.@doat x = a()
0x08

julia> DOAT.@doat y = a()
0x0010

julia> x
0x08

julia> y
0x0010

and this too:

function foo()
    DOAT.@doat x::UInt8 = a()
    DOAT.@doat y::UInt16 = a()
    return x, y
end

julia> foo()
(0x08, 0x0010)

To boil this down to the simplest possible example, what should this do?

julia> a()

It’s in the REPL, so there’s no context to infer what the return type should be—it could evaluate to literally anything. If you have a case like this and you want a type to influence the behavior of a, you can explicitly pass the type as an argument:

a(::Type{UInt8}) = 8
a(::Type{UInt16}) = 16

Very early on we considered ways to let the context of an expression—in this case, its inferred type—influence its evaluation. But there’s just no way to make that work in a language like Julia that doesn’t have any notion of typed contexts. Instead we decided that the very simple and easy-to-understand rule is this: the context in which an expression is evaluated does not ever affect its evaluation—you evaluate expressions from the inside out, period. Calling the same function with the same arguments does the same thing regardless of where it occurs. That’s a limitation, yes, but it makes Julia code much easier to understand.

Even though dispatch on return type sounds kind of cool, I think it would make code very hard to understand and would introduce subtle and confusing “spooky action at a distance” problems. For example, you could change the type of a field in some data structure, and suddenly some code far away that assigns to that field changes its behavior and calls a different method. Or what if you refactor some code so that you split x.f = a() into t = a(); x.f = t. Are those guaranteed to do the same thing? (Right now they are.) If they are guaranteed to do the same thing and the type of x.f affects the dispatch of a(), then that implies that the compiler has to do backward data-flow type analysis, because it has to see that t is going to be assigned to x.f since the type of that field is supposed to affect which method of a gets called. How far back does it have to do that analysis? What if the assignment is factored into another function? This just opens up a really gnarly can of worms.


From what I remember, Haskell does this in some contexts. But then Haskell is statically typed.

That’s somewhat of an understatement. Yes, the meaning of many things depends on the context in which it is used in Haskell—in fact, that’s pretty much the core nature of Haskell’s type unification algorithm. For example, if you write x + 1 in a Haskell program, the type of 1 must match the type of x. That allows deciding the type that the literal 1 should have based on the type of x. And yes, that means that you cannot do mixed type arithmetic in Haskell: if you want to add an integer and a float, you have to explicitly convert the integer to float first.
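For contrast, Julia’s promotion machinery is what makes mixed-type arithmetic work without any context dependence (a small sketch):

```julia
# Julia promotes the arguments to a common type before calling +;
# the meaning of the literal 1 never depends on its context.
0x01 + 1                    # 2, an Int64 (UInt8 promotes to Int64)
0x01 + 1.0                  # 2.0, a Float64
promote_type(UInt8, Int64)  # Int64
```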

This point does suggest another objection to having the return type affect dispatch: what should a() + 1 do, given that there are + methods for all manner of combinations of argument types? There’s a naive argument that it should call a() “with a return type of Int” since 1 has type Int, but there are many other methods of +, so why is the one with matching argument types special? Haskell answers this question by rejecting this kind of polymorphism entirely: while + can do different things with different kinds of arguments, the types of the arguments must match or type inference fails (and unlike Julia, type inference must always succeed).