On type annotations

Indeed, this is possible. But it violates everything that convert is supposed to do, and this code would certainly not be accepted in a pull request. Also, as written, the convert method is not type annotated, so it violates the coding standard proposal we are discussing. Once you add the return type annotation, Julia sees that convert does not return the specified type and therefore calls convert again, recursing until the stack overflows:

julia> struct S end

julia> Base.convert(::Type{S}, ::Any)::S = 3

julia> convert(S, 7)
ERROR: StackOverflowError:
Stacktrace:
 [1] convert(::Type{S}, ::Int64) (repeats 79984 times)
   @ Main ./REPL[2]:1

1 Like

There is Union for this use case if it is ever encountered.
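
For example, a minimal sketch (halve is a made-up name) of a Union return annotation that documents both possible return types without forcing them into one:

halve(n::Int)::Union{Int, Float64} = iseven(n) ? n ÷ 2 : n / 2

halve(4)  # 2
halve(5)  # 2.5

Since both Int and Float64 are already members of the Union, the implicit convert is a no-op here.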

Obviously, annotating with ::Any is useless and will not be allowed.

1 Like

But if there are things to convert, two broad things can happen:

  1. convert fails and an error is thrown at runtime, quite possibly in the middle of a very long program, throwing out a lot of work. It would be far preferable to catch this in advance, like AOT compilation does in statically typed languages. We can do something similar with reflection over call signatures, but it’s not nearly as convenient as whole-program type checks.
  2. convert works and fixes any type mismatches. But is the runtime cost really worth covering up what are likely to be mistakes? For example, assigning to vec::Vector{Float64} will convert the value on every assignment if necessary, but do you really want to allow someone to keep introducing non-Vector{Float64} inputs? It may even lead them to treat this like a more weakly typed language, until convert doesn’t work. This is why some suggested putting the annotation on the right-hand side: it doesn’t trigger repeated converts and just checks once whether the value is that type to begin with (see the sketch after this list).
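
A minimal sketch of the difference (requires Julia ≥ 1.8 for typed globals, or a local scope; the names are made up):

x::Vector{Float64} = [1, 2, 3]   # implicit convert: x becomes [1.0, 2.0, 3.0]
x = [4, 5, 6]                    # converted again on every reassignment

y = [1.0, 2.0, 3.0]::Vector{Float64}  # passes: already the right type
z = [1, 2, 3]::Vector{Float64}        # throws TypeError, no convert attempted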

When a method is type-stable for a call signature (ideally this holds over all call signatures, but you can only check one signature at a time), only one return type will be inferred, and the call signature can be tested for this. Adding a return type annotation may cover up wrong return types or instabilities with convert, fooling the test.
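
A sketch of how that can play out with Test.@inferred (the function names are made up for illustration):

using Test

unstable(x) = x > 0 ? 1 : 0.0             # branches return Int and Float64
patched(x)::Float64 = x > 0 ? 1 : 0.0     # annotation converts both branches

@inferred unstable(1)   # throws: inferred Union{Float64, Int64} does not match actual Int64
@inferred patched(1)    # passes, but only because convert papered over the mixed branches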

2 Likes

My understanding is that you have no intention of converting, so my suggestion is to keep all the type assertions to the right hand side rather than the left hand side. Right hand side assertions involve no implicit conversion.

Consider the following implementations of mydiv where my intention is to implement integer division.

mydiv1(a::Int, b::Int)::Int = a/b
mydiv2(a::Int, b::Int) = (a/b)::Int
mydiv3(a::Int, b::Int)::Int = a÷b
mydiv4(a::Int, b::Int) = (a÷b)::Int

I would get the following results.

julia> mydiv1(4,2)
2

julia> mydiv2(4,2)
ERROR: TypeError: in typeassert, expected Int64, got a value of type Float64

julia> mydiv3(4,2)
2

julia> mydiv4(4,2)
2

julia> mydiv1(5,2)
ERROR: InexactError: Int64(2.5)

julia> mydiv2(5,2)
ERROR: TypeError: in typeassert, expected Int64, got a value of type Float64

julia> mydiv3(5,2)
2

julia> mydiv4(5,2)
2

In my opinion, the only correct form to accomplish your goals here is mydiv4, while the similar assertion in mydiv2 at least let us see the TypeError in that code, even though conversion was possible. The bound return type in mydiv1 allows the code to sometimes succeed even though I used the wrong operator.

Performing type analysis on mydiv1 and mydiv2 produces distinct results.

julia> code_warntype(mydiv1, (Int, Int))
MethodInstance for mydiv1(::Int64, ::Int64)
  from mydiv1(a::Int64, b::Int64) @ Main REPL[43]:1
Arguments
  #self#::Core.Const(mydiv1)
  a::Int64
  b::Int64
Locals
  @_4::Union{Float64, Int64}
Body::Int64
1 ─ %1 = Main.Int::Core.Const(Int64)
│   %2 = (a / b)::Float64
│        (@_4 = %2)
│   %4 = (@_4::Float64 isa %1)::Core.Const(false)
└──      goto #3 if not %4
2 ─      Core.Const(:(goto %9))
3 ┄ %7 = Base.convert(%1, @_4::Float64)::Int64
│        (@_4 = Core.typeassert(%7, %1))
└──      return @_4::Int64


julia> code_warntype(mydiv2, (Int, Int))
MethodInstance for mydiv2(::Int64, ::Int64)
  from mydiv2(a::Int64, b::Int64) @ Main REPL[22]:1
Arguments
  #self#::Core.Const(mydiv2)
  a::Int64
  b::Int64
Body::Union{}
1 ─ %1 = (a / b)::Float64
│        Core.typeassert(%1, Main.Int)
└──      Core.Const(:(return %2))

From the analysis, it is immediately clear that mydiv2 will always result in an error and that my implementation is wrong: the body of mydiv2 infers as Union{}, the return type of a call that never returns normally.

If you intend no conversion, then invite none. Do not use the form f(...)::T = .... Use the form f(...) = (...)::T.

Let’s compare mydiv3 and mydiv4. The type analysis here is also distinct.

julia> code_warntype(mydiv3, (Int, Int))
MethodInstance for mydiv3(::Int64, ::Int64)
  from mydiv3(a::Int64, b::Int64) @ Main REPL[23]:1
Arguments
  #self#::Core.Const(mydiv3)
  a::Int64
  b::Int64
Locals
  @_4::Int64
Body::Int64
1 ─ %1 = Main.Int::Core.Const(Int64)
│   %2 = (a ÷ b)::Int64
│        (@_4 = %2)
│   %4 = (@_4 isa %1)::Core.Const(true)
└──      goto #3 if not %4
2 ─      goto #4
3 ─      Core.Const(:(Base.convert(%1, @_4)))
└──      Core.Const(:(@_4 = Core.typeassert(%7, %1)))
4 ┄      return @_4


julia> code_warntype(mydiv4, (Int, Int))
MethodInstance for mydiv4(::Int64, ::Int64)
  from mydiv4(a::Int64, b::Int64) @ Main REPL[24]:1
Arguments
  #self#::Core.Const(mydiv4)
  a::Int64
  b::Int64
Body::Int64
1 ─ %1 = (a ÷ b)::Int64
│   %2 = Core.typeassert(%1, Main.Int)::Int64
└──      return %2

mydiv3 lowers to more complicated code than mydiv4. Because we used a left-hand-side type annotation in mydiv3, Julia needs to work out via type inference what the conversion might do. What’s the point of this when you do not intend to convert in the first place? Just do a simple typeassert on the right-hand side, as in mydiv4.

While mydiv3 and mydiv4 do compile down to the same native code, this is only because the convert used in mydiv3 is well implemented.

The problem here is that conversion can get quite complicated. It depends on individual convert methods being implemented properly. If you do want to convert, then I think you should call convert explicitly rather than implicitly. You should also use a type assertion to ensure the conversion returns the expected type.

mydiv5(x::Int, y::Int) = convert(Int, x / y)::Int

I would not recommend this to most people since implicit conversion is probably what most people want to do, which is why Julia allowed it to be implicit in the first place. However, in your case, I worry that unintended implicit conversion could become a big problem if you insist on binding your local variables to types.

6 Likes

How about this as a compromise:

function f(value::Number, obj::MyStruct)::OtherStruct
    vec = [1.0, 2.0, 3.0]  # Vector{Float64} is implied, why mention it?
    # ...
end

This prevents the implied Any as the type of the parameter value: passing e.g. a String then gives a runtime MethodError, and so would all the alternatives up the supertype chain:

julia> supertype(Float64)
AbstractFloat

julia> supertype(AbstractFloat)
Real

julia> supertype(Real)
Number

E.g. one good type to use is Unsigned, if it applies, when negative numbers aren’t wanted as inputs, though it is limited to integers. Rust has NonZero integer types, which rule out division by zero, and I would like such a positive-Real type in Julia for measurements, at least as an abstract type.
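
A minimal sketch of the Unsigned case (isqrtu is a made-up name):

isqrtu(n::Unsigned) = isqrt(n)   # dispatch rejects anything that is not Unsigned

isqrtu(UInt(9))  # 3
isqrtu(-9)       # MethodError
isqrtu(9)        # also a MethodError: integer literals are Int, not Unsigned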


I like code to be generic (to leave functions untyped, or at least to keep them fully generic).

I agree on 2., except I’m not sure the assertion is useful on the RHS either.

I’m not sure I agree on 1. Why was the syntax for it added to the language, then? I still note that it doesn’t prevent runtime errors, only type instability (and it still adds code for an invisible cast):

julia> f(x)::Int64 = x
f (generic function with 1 method)

julia> f(3)
3

julia> f(3.0)
3

julia> typeof(ans)
Int64

julia> f(3.5)
ERROR: InexactError: Int64(3.5)

To expand a bit on what you will be missing out on if you declare your argument types to be Float64:

ForwardDiff.jl uses Dual numbers to compute derivatives. If you restrict your argument type to Float64 it will throw an exception:

julia> using ForwardDiff

julia> supertype(ForwardDiff.Dual)
Real

julia> f(x::Vector{Float64}) = sin.(x)
f (generic function with 1 method)

julia> f([1.0,2.0,3.0])
3-element Vector{Float64}:
 0.8414709848078965
 0.9092974268256817
 0.1411200080598672

julia> ForwardDiff.jacobian(f,[1.0,2.0,3.0])
ERROR: MethodError: no method matching f(::Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 3}})
The function `f` exists, but no method is defined for this combination of argument types.

If you want to use automatic differentiation then you should type your arguments at least Real.

julia> f2(x::Vector{T}) where {T<:Real} = sin.(x)
f2 (generic function with 2 methods)

julia> ForwardDiff.jacobian(f2,[1.0,2.0,3.0])
3×3 Matrix{Float64}:
  0.540302   0.0        0.0
 -0.0       -0.416147  -0.0
 -0.0       -0.0       -0.989992

If you want to use Unitful arguments (meters, seconds, kilograms, etc.) then Real is too restrictive:

julia> using Unitful, Unitful.DefaultSymbols

julia> supertype(typeof(1.0m))
Unitful.AbstractQuantity{Float64, 𝐋, Unitful.FreeUnits{(m,), 𝐋, nothing}}

julia> supertype(ans)
Number

In this case you have to use the Number type:

julia> f3(x::Vector{T}) where {T<:Number} = x .* x
f3 (generic function with 1 method)

julia> f3([1.0m,2.0m,3.0m])
3-element Vector{Quantity{Float64, 𝐋^2, Unitful.FreeUnits{(m^2,), 𝐋^2, nothing}}}:
 1.0 m^2
 4.0 m^2
 9.0 m^2

If you use interval arithmetic (IntervalArithmetic.jl) then you need to make the argument types at least Real:

julia> using IntervalArithmetic

julia> supertype(Interval)
Real

julia> x = [interval(0,3), interval(0,Inf)]
2-element Vector{Interval{Float64}}:
 [0.0, 3.0]_com
 [0.0, ∞)_dac

julia> f(x)
ERROR: MethodError: no method matching f(::Vector{Interval{Float64}})
The function `f` exists, but no method is defined for this combination of argument types.

julia> f2(x)
2-element Vector{Interval{Float64}}:
 [-0.841472, -0.84147]_com
 [-1.0, 1.0]_dac

julia> f3(x)
2-element Vector{Interval{Float64}}:
 [1.0, 1.0]_com
 [0.0, ∞)_dac

You may want to do error analysis using a package like Measurements.jl:

julia> using Measurements

julia> supertype(Measurement)
AbstractFloat

julia> h(x) = x^2
h (generic function with 1 method)

julia> ForwardDiff.derivative(h,1.0 ± .01)
2.0 ± 0.02

In this case you need to make the argument type at least AbstractFloat. Also notice that this example mixes the use of two special number types, Measurement and Dual. Magically, they work together seamlessly.

You can easily plot measurement values with error bars without any effort. Just pass Measurement type numbers to plot:

plot([1.0±.1,2.0±.3,3.0±.25])

which gives you a plot of the three values with error bars.

There are many more such special number types that provide, for me at least, incredibly useful functionality.

It is not practical to use a Union type to include all of them because A) you don’t know what they all are nor do you know which of them you might want to use in the future, and B) more of them are being created all the time.

Imagine you’ve decided on the set of special numbers you want to allow as arguments:

f(x::Union{Measurement, Dual, Interval, BigFloat, Float64, Float32}) = ...

Now imagine you decide you want to do symbolic analysis on your functions using Symbolics.jl. You have to change every declaration of every function to add Num, which is the type of variables in Symbolics.

Whereas if you had declared your argument types Number you could just do the following:

julia> using Symbolics

julia> @variables z
1-element Vector{Num}:
 z

julia> typeof(z)
Num

julia> f3([z,z])
2-element Vector{Num}:
 z^2
 z^2

If you are certain you will never want to use any of these special number types then what you are suggesting could be reasonable. But you will be giving up an incredible amount of power and flexibility.

6 Likes

This is undesirable. It is better to use Real or Number instead.

@Matthijs_1971, take a look at @brianguenter’s post to see why.

I have no comments on variable and return type annotations (they’ve already been extensively covered in this discussion), but I do have some comments on the annotation of function arguments.

It’s not 100% clear from what you’ve said so far, but it sounds like you want all function arguments to be annotated with a concrete type, or a union of concrete types. The core purpose of argument annotation is to control method dispatch. A perhaps underappreciated way of looking at argument annotations is that they define the interfaces that the input objects are expected to implement. For example, I might want to write the following function foo that expects its argument to implement the AbstractVector interface and have an element type <:Number:

function foo(x::AbstractVector{<:Number})
    if length(x) < 4
        0
    else
        x[2] + x[4]
    end
end

(Let’s forget for the moment about offset arrays. :grimacing:) This is a correct argument annotation, because the only interface required of the argument x is the AbstractVector{<:Number} interface.

Granted, the system is not perfect, because we do not have abstract multiple inheritance (issue #5). Some interfaces do not have an associated abstract type, like the iteration interface. So if you want to write a function that works on any iterator, you must leave the argument annotation as ::Any, like in this myfirst example:

function myfirst(itr)
    out = iterate(itr)
    isnothing(out) && error()
    out[1]
end

One idea that might shed light on the situation is to distinguish between two categories of methods:

  • Interface methods
    • Methods that must be implemented by a type in order to implement an interface.
    • These methods often have concretely typed arguments.
  • Generic functions
    • a.k.a. ad-hoc polymorphism / duck typing
    • These methods usually have abstractly typed arguments that describe the expected interface implemented by their arguments.

Here’s a toy example of this dichotomy:

# Interface: Foo
# Interface methods:
#      bar(x::T)
#      qux(x::T)
#    where T is a type implementing the Foo interface.
abstract type Foo end

# These structs implement the Foo interface:
struct A <: Foo end
struct B <: Foo end
bar(::A) = 1
qux(::A) = 2
bar(::B) = 3
qux(::B) = 4

# Generic functions `asdf(::Foo)` and `qwer(::Foo)`:
asdf(x::Foo) = bar(x) - qux(x)
qwer(x::Foo) = bar(x) / qux(x)

In this example, bar and qux are interface methods—they dispatch on the concrete types A and B. On the other hand, asdf and qwer are generic functions. They are defined for any type that implements the Foo interface. So, it is natural to use concretely-typed argument annotations for interface methods and natural to use abstractly-typed argument annotations for generic functions.

Taking a step back, it feels like you are swimming against the current of the language. Prohibiting polymorphism seems like a step back in time. If you want to encourage a rich ecosystem of composable packages at your company, I would encourage you to allow generic Julia code where it is appropriate to do so.

One final thing. This is how I would write your example function:

"""
    f(value::Real, obj::MyStruct)

Use `f(value, obj)` to calculate blah blah with the answer
returned as an `OtherStruct`.
"""
function f(value::Real, obj::MyStruct)
    vec = [1.0, 2.0, 3.0]
    calculate_other_struct(value, obj, vec)
end

In my opinion, docstrings (and unit tests) are a better way of documenting the expected behavior of a function than annotating the return type of a function. Also note that static analysis tools like JET.jl do not require return type annotations to work properly.
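
For example, a sketch of checking a call statically with JET.jl (assuming f and MyStruct as defined above):

julia> using JET

julia> @report_call f(1.0, MyStruct())

This analyzes the call via type inference and reports potential errors, with no return type annotations required.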

5 Likes

Probably already mentioned but worth pointing out that “variable will not change type as long as it lives” really only works like you want for concrete type annotations. With abstract annotations, the variable itself is designated that abstract type, but the values assigned to it can be an infinite number of concrete subtypes.
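
A minimal sketch (inside a function so the declaration is local; the names are made up):

function demo()
    local a::Real = 1   # a is declared with the abstract type Real
    a = 2.5             # allowed: Float64 <: Real, yet the concrete type changed
    a
end

The annotation is honored, but the concrete type of the value still changed from Int64 to Float64.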

People already mentioned how you’re losing genericity by annotating concrete types in arguments, but it’s not all about exotic types. It’s also about letting a method work on various mundane types. If you have the same algorithm to work on Float32, Float64, Int16, Int32, Int64, really all the numbers, you can and should make 1 method and let the callees handle the more concrete type-specific things. If you’re duplicating methods or doing some @eval loops over types just to do concrete type annotations, you’re sacrificing maintainability.
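
For instance, a single generic method (a sketch; sumsq is a made-up name) replaces per-type duplicates, and the compiler still specializes it for each concrete element type it encounters:

sumsq(v::AbstractVector{<:Number}) = sum(x -> x * x, v)

sumsq(Int16[1, 2, 3])     # 14
sumsq(Float32[1.5, 2.5])  # 8.5f0
sumsq([1.0, 2.0, 3.0])    # 14.0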

Bit of a tangent, but one less-than-ideal reason that composite types (classes) proliferate in OOP is to leverage single dispatch polymorphism (1 class determines which method) especially on runtime types in statically typed languages, even when the methods all do the same thing, often just extracting the specific fields before using a common callee e.g. def foo(self): return _foo(self.a, self.c). There’s no need for that indirection with multiple dispatch, so composite types (structs in Julia) can serve more important purposes.

2 Likes

One thing that’s also really delightful when using this pattern is that you can do

bar(f::Foo) = error("The bar method is not defined for $(typeof(f))")
qux(f::Foo) = error("The qux method is not defined for $(typeof(f))")

To help remind any users (or your later self) if you try to make a new type that’s supposed to have that interface. Since functions will dispatch to the most specific method that’s defined, having fallback methods that throw informative errors sort of acts like extra documentation of the interface.

1 Like

By default, it will already throw:

ERROR: MethodError: no method matching bar(::SOMETYPE)

Why is your error message better?

(And by defining such a fallback, other functions can no longer use hasmethod to do introspection on whether a bar method exists for a given type.)
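
A sketch reusing the Foo example from earlier in the thread: once the Foo-wide fallback is defined, hasmethod can no longer tell implementers from non-implementers.

struct C <: Foo end        # C does NOT implement the interface

hasmethod(bar, Tuple{C})   # true anyway, because the fallback bar(::Foo) exists
bar(C())                   # fails only at runtime, with the error() message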

4 Likes

Because after the annotation, the type of vec cannot change any more. That improves the maintainability and readability of the code.

People are saying it still only works if you always assign instances of the same type to that variable. And when you do, a comment like # ::Vector{Float64} is just as informative, without introducing the possibility of implicit converts covering up assignments or returns of unintended types, as in [1]. Fooling return type inference tests is not a maintainable situation at all.
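
A sketch of that comment style:

vec = [1.0, 2.0, 3.0]  # ::Vector{Float64} (documented, not enforced, no convert)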

This might not work the expected way

y::Union{Float64, Float32} = Float32(1)
y = 2*pi*y

y will change type and end up as a Float64, because 2*pi*y promotes to Float64, which the Union annotation still allows.

It is one of the most common patterns in idiomatic Julia. Is it really conceivable that it won’t?

4 Likes

The use case that you are referring to was about preventing duplicated code due to type annotations in function arguments, not in variable assignments. Something like this:

#!/usr/bin/env julia

import StaticArrays

function f(x::Union{Vector{Float64}, StaticArrays.SVector{3, Float64}})::Float64
    return x[1]
end

v::Vector{Float64} = [ 1.0, 2.0, 3.0]
println(f(v))

s::StaticArrays.SVector{3, Float64} = StaticArrays.SVector{3, Float64}(1.0, 2.0, 3.0)
println(f(s))

Yes, this discussion is drifting out of control with all this focus on numerical types and too little on the myriad of structs we have defined in our company.

But didn’t you want assignment assertions? How would that work together with union-typed input arguments?

I’m not sure I see the difference.

1 Like

The difference is that the structs defined in our company won’t be supported by the third-party packages mentioned earlier in the discussion.

Yes, as demonstrated in the example. What is your question, precisely? If code needs to be reusable for different argument types and the arguments need to be annotated, unions are the solution. From the annotations, the developer immediately sees which types the function is expected to work with.