Obtaining a function's output type



Given a function and some argument types, it should be possible to know the return type without actually executing the function. How can I do that? Is there something like a functiontype(f, argtypes)?

My goal is to be able to promote the type, given multiple functions and arguments as tuples, e.g. as

promote_type(functiontype.((cis,cos),(Float32,Float64))...) = Complex{Float64}
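For illustration, the internal helper Base.promote_op comes close to the hypothetical functiontype above. It is not public API, so its results are inference-dependent and may change between Julia versions:

```julia
# Base.promote_op is an internal, inference-based helper (not public API).
# It plays the role of the hypothetical `functiontype` above.
promote_type(Base.promote_op.((cis, cos), (Float32, Float64))...)
# On current versions this yields Complex{Float64}, but the result
# depends on what inference can prove.
```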



In 0.6, you might want

julia> Base.return_types(cos, Tuple{Float64})
1-element Array{Any,1}:
 Float64

edit: Note that this is afaik intended for use in @generated functions, and not terribly fast for runtime use.


No, it cannot be used in generated functions. It’s also not intended for what the OP wants; it’s there only for empty comprehensions.


What would be the recommended way of getting the type of a function’s output then?

I am trying to allocate an array, and since values from those functions will get added up in there, I need to allocate the correct type to begin with.


If you can, use something like map or a generator/comprehension or broadcast (or a dot call), since those functions will take care of this for you.
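For instance, a small sketch showing that the element type is chosen for you:

```julia
# map and broadcast choose the result's element type from the computed values,
# so there is no need to know the return type up front:
a = map(cos, [0.0, 1.0])        # Vector{Float64}
b = cis.(Float32[0.0f0, 1.0f0]) # Vector{ComplexF32}
```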

The way map works, if I recall, is:

  1. If the collection is non-empty, compute the first result x by applying the function to the first item, and speculatively allocate an array of T = typeof(x).

  2. Loop over the remaining elements. As each element y is computed, if it is of type T, then store it in the existing array. Otherwise, set S = typejoin(T, typeof(y)), allocate a new array of type S, and copy the old elements along with y to this new array. Discard the old array and continue with T = S.

  3. If the collection was empty (so that there is no first element), return an empty array whose element type is computed by Core.Inference.return_type.

Basically, this is optimized for type-stable, inferrable functions (the only case that will generally be fast), but still works for type-unstable functions.
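The widening strategy above can be sketched roughly like this. This is a simplified model, not Base’s actual implementation; the name widening_map is made up, and the use of Core.Compiler.return_type for the empty case is an illustrative assumption:

```julia
# Simplified model of map's type-widening behavior (hypothetical helper).
function widening_map(f, xs::AbstractVector)
    # 3. Empty collection: fall back to inference for the element type.
    isempty(xs) && return Vector{Core.Compiler.return_type(f, Tuple{eltype(xs)})}()
    # 1. Compute the first element and speculatively allocate for its type.
    x = f(xs[1])
    result = Vector{typeof(x)}(undef, length(xs))
    result[1] = x
    # 2. Fill in the rest, widening the array whenever a new type shows up.
    for i in 2:length(xs)
        y = f(xs[i])
        if y isa eltype(result)
            result[i] = y
        else
            # Widen: allocate a new array of the joined type and copy over.
            S = typejoin(eltype(result), typeof(y))
            widened = Vector{S}(undef, length(xs))
            copyto!(widened, 1, result, 1, i - 1)
            widened[i] = y
            result = widened
        end
    end
    return result
end
```

A type-stable function never triggers the widening branch, which is why that case stays fast.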


Hah, we have the same thread going on over in #dev: Promote_op and preallocating result of linear operators


I’m confused as to why this is complicated: isn’t the return type determined by inference at compile time? Assuming that the function is type-stable, of course…


The reason is that inference isn’t part of the language; it’s part of the implementation. Consequently, as the implementation improves, it may also improve inference, changing the results.

For example, suppose that you have a method which returns Union{Int64, Nothing}, but then version 1.1 of Julia adds a new compiler pass which is able to determine that nothing is never returned. This would change the output of return_types to be just Int64. Or suppose you have a Julia interpreter mode, which simply assumes that return_types is always Any (disclaimer: I have no idea if this is what the interpreter actually does).
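A concrete way to see this dependence (maybe_get is a made-up example; the reported type reflects whatever the running compiler can prove):

```julia
# Hypothetical example: inference currently reports the union of both branches.
maybe_get(d, k) = haskey(d, k) ? d[k] : nothing

Base.return_types(maybe_get, Tuple{Dict{Symbol,Int}, Symbol})
# A smarter future compiler pass (or a plain interpreter) could legitimately
# report something narrower or wider; the language doesn't pin this down.
```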


That’s precisely the rub. Not only could inference be disabled, but collecting things like:

f() = rand() > 1 ? 1 : 1.0
[f() for _ in 1:3]

used to return a Vector{Any} (a long, long time ago). We’ve made inference better, so now it’d be a Vector{Union{...}} if we still relied on inference, but that’s precisely the problem: ideally, inference improvements wouldn’t change behaviors. At the end of the day, it’s an optimization.