Feature request: type assertion for compiler-known type


Suppose I have an assignment statement like this:

    a = Complex{typeof(b)}(b + 1.0)

Suppose furthermore that I think that the compiler ought to be able to deduce the exact type of a (for type stability). But how can I confirm this? A type-assertion doesn’t really help:

    a = Complex{typeof(b)}(b + 1.0)::Complex{typeof(b)}

since the type-assertion type is also not necessarily known to the compiler.

One way is to pore over the output of code_warntype. But instead, I would like to ask for a new language feature:

    a = Complex{typeof(b)}(b + 1.0)::CompilerKnowsType

that will yield an error at compilation time if the compiler does not know the type.
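For comparison, the manual check alluded to above (poring over code_warntype output) looks something like this; a minimal sketch with a hypothetical function standing in for the assignment:

```julia
# Hypothetical function standing in for the assignment above.
f(b) = Complex{typeof(b)}(b + 1.0)

# Checking by hand in the REPL:
#     @code_warntype f(2.0)
# Concrete types everywhere mean the compiler knows the result type;
# abstract types (Any, Union{...}) are flagged in the output.
f(2.0)  # Complex{Float64}(3.0, 0.0)
```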




EDIT: Regarding my previous response below: I pushed the @inferred to the right of the assignment statement in the code below like this:

    b = @inferred Complex{typeof(a)}(a + 1.0)

and then it seemed to work. So apparently this solves my problem except for one question: is there a run-time performance hit for using @inferred, or does the compiler turn it into a no-op in the compiled code?

I was not previously familiar with @inferred. I just looked at the docs, and I’m not quite sure how it solves the problem. Do I precede the assignment statement with @inferred? That didn’t work:

    module test_inferred
    import Base.Test.@inferred
    function t(a)
        @inferred b = Complex{typeof(a)}(a + 1.0)
        println("b = ", b)
    end
    end

yields (0.6.2):

    julia> include("test_inferred.jl")
    ERROR: LoadError: @inferred requires a call expression
     [1] include_from_node1(::String) at .\loading.jl:576
     [2] include(::String) at .\sysimg.jl:14
    while loading C:\Users\vavasis\ownCloud\Documents\Katerina\cohesive\conic_jl\test_inferred.jl, in expression starting on line 4


You want b = @inferred Complex{typeof(a)}(a + 1.0). Note that @inferred is typically used in tests, not in production code; I’m not sure if it has any overhead; I would suggest benchmarking.


There is a big performance hit in 0.6.2 from using @inferred in this manner. The running time of invoking t repeatedly jumped from 0.000125 sec to 0.913 sec according to the benchmark I just ran.
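For reference, a sketch of the two variants presumably being timed (a guess at the setup; on 0.7+ @inferred lives in the Test stdlib rather than Base.Test). A slowdown like this is plausible because @inferred invokes the type-inference machinery on every call at run time rather than being compiled away:

```julia
using Test: @inferred  # Base.Test.@inferred on 0.6

t_plain(a)    = Complex{typeof(a)}(a + 1.0)
t_inferred(a) = @inferred Complex{typeof(a)}(a + 1.0)

# Both return the same value; timing them repeatedly (e.g. with
# BenchmarkTools' @btime) exposes the per-call cost of @inferred.
t_plain(2.0) == t_inferred(2.0)  # true
```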


I’m actually rather hazy on this but I think that in some cases in 0.7 constant propagation might help with this sort of thing. For example, I think

    f(a::Bool) = a ? 1 : 1.0

while not formally “type stable” would behave as such in 0.7, i.e. be equivalent to

    f(::Type{Val{true}}) = 1
    f(::Type{Val{false}}) = 1.0

though I haven’t rigorously checked this. It might be even more complicated than that and depend on how the input a is generated in the containing code.

I’d actually be very interested to know if someone can come along and tell me whether that is indeed the case, because I really don’t know what I’m talking about.

By the way, one (perhaps obvious) piece of advice regarding your original question is that you should avoid using typeof in favor of using parametric inputs wherever possible. The only cases I commonly run into where this routinely back-fires all have to do with IO and “generic” containers such as DataFrames.
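To illustrate the advice, the construction from the original post written both ways; a sketch with hypothetical names:

```julia
# Relying on typeof inside the body:
make_typeof(a) = Complex{typeof(a)}(a + 1.0)

# Capturing the type as a method parameter instead:
make_param(a::T) where {T<:Real} = Complex{T}(a + one(T))

make_typeof(2.0) == make_param(2.0)  # true
```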

    const DEBUG = false

    @static if DEBUG
        @eval begin
            macro debug_inferred(ex)
                return :(Base.Test.@inferred $(esc(ex)))
            end
        end
    else
        @eval begin
            macro debug_inferred(ex)
                return esc(ex)
            end
        end
    end

When DEBUG is false:

    julia> f(x) = x > 1 ? x : [x]
    f (generic function with 1 method)

    julia> x = @debug_inferred f(3.)

    julia> @macroexpand @debug_inferred f(3.)

When true:

    julia> x = @debug_inferred f(3.)
    ERROR: return type Float64 does not match inferred return type Union{Array{Float64,1}, Float64}
     [1] error(::String) at .\error.jl:21

Note: modified from a similar construct in JuAFEM.jl


Could somebody expand on the following piece of advice? In my sample code in the original post, the use of typeof did indeed lead the compiler to correct inference (at least according to the result of @inferred), so what is the danger/disadvantage of using typeof in this manner?


Thanks for posting the macro @debug_inferred. This does not completely solve my problem for the following reasons.

  1. I have to remember to turn DEBUG on and off.
  2. My code can run for hours, and some of the @inferred declarations might not be encountered during the first few minutes of the run. (So in other words, I have to turn @inferred on and off at a fine-grained level.)

I don’t know how @inferred works, but there is no obvious reason why it should have ANY run-time overhead. In principle, the compiler can check it and then discard it.


Regarding the two functions modulus1 and modulus2, I’m still not getting the point. Is there any difference in their functionality or performance? Or do you prefer modulus2 because it is more readable?

In my case, my code already has a plethora of where declarations, and only a few of these typeof invocations, so the readability tradeoff is not clear-cut.


Sorry, my example was too silly, so any inefficiency there gets elided by the compiler.

In general, though, there may be a difference between typeof and using parameters. For example, using a Vector{Any} and taking the typeof of its elements is typically much less efficient than having a Vector{T} and using T. Again, I don’t know the context your code appears in, but I usually find that you don’t need to resort to the type of thing you’re talking about.


IIUC, it is checking the actual (run-time) type of the expression, against what was inferred. If what was inferred is a concrete type, it probably has very little overhead, a single comparison of the two types, and a branch generally not taken.
However, if the inferred type is not concrete, then I imagine it can make things a lot slower, because it would have to check if the type of the expression is any of the possible types that are <: the inferred type.


Taking the type of the individual elements is quite different from taking the element type of the collection.
eltype(vec) will give you the same as foo(::Vector{T}) where {T} = T, and seems to generate exactly the same code.
If you have a Vector{Any} or a vector with some other non-concrete type (excepting the small unions which are now optimized on master), every element will be a pointer to a boxed object with the type, so getting the type of each element means following that pointer on each element.
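A small sketch of the distinction (names hypothetical):

```julia
# The element type of the collection, recoverable statically:
elparam(::Vector{T}) where {T} = T

xs = [1.0, 2.0]
elparam(xs) === Float64  # true
eltype(xs) === Float64   # true, same answer

# With Vector{Any} the static element type is Any; the concrete type
# of each element lives behind a pointer to a boxed object.
ys = Any[1.0, 2.0]
eltype(ys) === Any        # true
typeof(ys[1]) === Float64  # true, but only known at run time
```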


I’m aware, I was just having a really hard time coming up with an example that gets at the gist of what I was saying. Was watching this while doing something else, I really shouldn’t have jumped in today, sorry.


The Style Guide suggests the opposite. From the section Don’t use unnecessary static parameters:

Even if T is used, it can be replaced with typeof(x) if convenient. There is no performance difference.


In my experience, with 0.6.2 there have been cases where replacing typeof(x) with a function type parameter resulted in better inference results. I don’t have an example ready though.


Is this posting referring to the use of the @inferred macro? My benchmark indicates that even if the inferred type is concrete, the overhead for using the @inferred macro is huge, which I don’t quite understand.


That’s not what they are talking about. What they are saying is that you should not do e.g.

    f(x::T) where {T<:Real} = x + 1

but that’s a different use case from those in which you actually want to be able to utilize the type parameter e.g.

    f(x::T) where {T<:Real} = x + one(T)

In that case you need to have access to the type T, so it makes sense to include it in the function signature. As @tkoolen indicated, it’s usually safer to use type parameters rather than typeof. Granted, as the compiler gets better, there will be more and more cases in which there won’t actually be a difference in the compiled machine code (again, constant propagation in 0.7 potentially makes it much more difficult to know what exactly will happen). Still, I suspect using type parameters in cases where types need to be provided as arguments will always be the recommended approach.



Maybe there are cases where it makes a difference (in terms of speed, or simplicity of code).

I’m just pointing out what the style guide suggests, and that for simple cases, the function can be re-written without static type parameters and there is no performance difference. I personally do not have a preference.

Your example can be re-written, without loss of performance or generality:

    f(x::T) where {T<:Real} = x + one(T)
    g(x::Real) = x + one(typeof(x))

@benchmark confirms the functions have similar performance.
@code_warntype shows the functions are equivalent.