Should Julia raise errors for illegal return type annotations?

It’s strange that Julia gives you errors for illegal type annotations on function arguments, but not on return values (until runtime).

julia> f(x)::1 = x # illegal return type, no error
f (generic function with 1 method)

julia> f(2) # error at runtime
ERROR: MethodError: First argument to `convert` must be a Type, got 1
Stacktrace:
 [1] f(x::Int64)
   @ Main ./REPL[1]:1
 [2] top-level scope
   @ REPL[2]:1

julia> f(x::1) = x # illegal argument type, error at definition time
ERROR: ArgumentError: invalid type for argument x in method definition for f at REPL[3]:1
Stacktrace:
 [1] top-level scope
   @ REPL[3]:1

The example above looks contrived, but a real example is that people coming from C may mistakenly write Int[] instead of Vector{Int}. Now the return type is annotated with a zero-length array instead of a type!
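
For instance, a rough sketch of what that mistake looks like (using a hypothetical h; the exact error text may differ across Julia versions):

julia> h(x)::Int[] = x # return "type" is a freshly constructed empty Vector{Int64}
h (generic function with 1 method)

julia> h(1) # only fails at runtime
ERROR: MethodError: First argument to `convert` must be a Type, got Int64[]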

Does this point to an obvious potential improvement to catch more errors before runtime?

It’s a little tricky because the assertion can be arbitrary Julia code. For example:

julia> g(x) = x == 0 ? Int : Float64
g (generic function with 1 method)

julia> f(x)::g(x) = x
f (generic function with 1 method)

julia> f(1)
1.0

julia> f(0)
0

In contrast to method signatures (which need to know their argument types at definition time), return type assertions are literally just plopped into the function body and evaluated at runtime. See how it calls g(x) at runtime, then converts and asserts to whatever it returned:

julia> @code_lowered f(1)
CodeInfo(
1 ─ %1 = Main.g(x)
│   %2 = Base.convert(%1, x)
│   %3 = Core.typeassert(%2, %1)
└──      return %3
)

Today I Learned.

Is this in the docs?

I thought it was only permitted to be a type that was then inserted into a convert call.

Wow. Any use case for something like this?

And regarding this:

Does it mean that type assertions always come at a cost? Is there no possibility that the compiler optimizes them away?

It would be nice if this were discussed in the docs.

The compiler optimizes many things away, including basically anything that is only a function of the types involved.
It also covers several more cases, like converting a value to its own type, since all of that code optimizes away once the types are known, and a bunch more whenever it can constant fold.
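
To make that concrete, here is a rough sketch (using a hypothetical h; output is from a recent Julia and may differ in detail) where the return annotation matches the inferred type, so the convert and typeassert vanish from the typed code:

julia> h(x::Int)::Int = x + 1
h (generic function with 1 method)

julia> @code_typed h(1)
CodeInfo(
1 ─ %1 = Base.add_int(x, 1)::Int64
└──      return %1
) => Int64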

It would be nice if this were discussed in the docs.

The docs are intentionally a bit sparse on what the optimizer does or does not do.
I think they shouldn’t be, but I guess the argument goes that the optimizer is an implementation detail and is not covered by SemVer.
Still, I think we should have a page on “What can you expect the optimizer to do” that answers questions like this, with some disclaimers and instructions on how to check.

For now, the way to check: look at @code_typed and @code_llvm to see what is actually being run, or use Cthulhu.jl.

type assertions

Remember that return type annotations are not type assertions.

They are like type-annotating a variable, which is to say they trigger convert on assignment.


julia> @code_lowered (() -> x::Int = 10)()
CodeInfo(
1 ─ %1 = Base.convert(Main.Int, 10)
│        x = Core.typeassert(%1, Main.Int)
└──      return 10
)

not like annotating a value, which checks its type:

julia> @code_lowered (() -> x = 10::Int)()
CodeInfo(
1 ─ %1 = Core.typeassert(10, Main.Int)
│        x = %1
└──      return %1
)

Note, though, that interestingly both of these optimize away:

julia> @code_typed (() -> x::Int = 10)()
CodeInfo(
1 ─     return 10
) => Int64

julia> @code_typed (() -> x = 10::Int)()
CodeInfo(
1 ─     return 10
) => Int64

OTOH, I can’t think of one that would be helpful. That g(x) has to be type stable in order for f(x) to be inferrable (all those Unions are red, which Discourse sadly doesn’t show):

julia> @code_warntype f(0)                             
MethodInstance for f(::Int64)                          
  from f(x) in Main at REPL[2]:1                       
Arguments                                              
  #self#::Core.Const(f)                                
  x::Int64                                             
Body::Union{Float64, Int64}                            
1 ─ %1 = Main.g(x)::Union{Type{Float64}, Type{Int64}}  
│   %2 = Base.convert(%1, x)::Union{Float64, Int64}    
│   %3 = Core.typeassert(%2, %1)::Union{Float64, Int64}
└──      return %3                                     

Having a non-trivial g calculate a return type that could probably be inferred from f alone doesn’t seem that helpful. Presumably, whatever operation you want to do with x is either type stable (in which case the assertion is useless anyway, since the type was inferred from the start) or type unstable, which you wouldn’t want anyway (because it doesn’t help you write more type-stable and performant code).

So type assertions are mostly helpful with type unstable functions where the compiler fails to infer, but you (by some inspiration) already know the return type anyway and can hardcode it.
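
For example, a hypothetical sketch: wrap a call whose inferred type is a small Union, but whose value you happen to know is always integer-valued, and hardcode Int:

julia> unstable(x) = x > 0 ? 1 : 1.0 # inferred as Union{Float64, Int64}
unstable (generic function with 1 method)

julia> wrapped(x)::Int = unstable(x) # we know the value always fits an Int, so hardcode it
wrapped (generic function with 1 method)

julia> wrapped(-2) # the Float64 branch, converted back to Int on return
1

With @code_warntype, wrapped should now infer as Int64 even though unstable itself does not.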

You can encapsulate complex type calculation logic in g.

f(x)::g(x)

is just a one-liner for

T = g(x) # type stable, compiler-friendly, etc
# ... some lines later
f(x)::T

which is a very common pattern.
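
A concrete sketch of that pattern (with hypothetical result_type/combine names), computing the result type from the argument types:

julia> result_type(x, y) = promote_type(typeof(x), typeof(y))
result_type (generic function with 1 method)

julia> combine(x, y)::result_type(x, y) = x + y
combine (generic function with 1 method)

julia> combine(1, 2.5)
3.5

Since result_type depends only on the argument types, it should constant-fold and the whole thing stays inferrable.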
