Is `Core.Compiler.return_type` foldable?

Let’s say I want to introduce the following utility function in my library so I have a way of doing a type-stable get! on generic dictionaries:

function stable_get!(f::F, dict, key) where {F}
    return get!(f, dict, key)::(_return_type(f))
end
Base.@assume_effects :foldable function _return_type(f::F) where {F}
    return Core.Compiler.return_type(f, Tuple{})
end

This is to address instances where I have some generic cache:

cache = IdDict{Any,Any}()
get!(cache, unique_key) do
    #= my type stable calculation =#
end

I don’t want to lose type stability because I am memoizing the result. At the same time, I know that a cached result will have the same type as the return of the function – so I want to help the compiler by telling it the return type.
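
For concreteness, the intended usage is something like this (a sketch; compute_value is a hypothetical stand-in for the real type stable calculation):

cache = IdDict{Any,Any}()
compute_value() = sum(abs2, 1:100)  # some type stable calculation

# Without the assert, inference only sees Any coming out of the
# IdDict{Any,Any}; the assert derived from _return_type narrows it to Int.
result = stable_get!(compute_value, cache, :unique_key)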

Now, my question: is anything about this pattern dangerous at a language level? For example, is the :foldable assumption about Core.Compiler.return_type wrong in some way?


In case using Core.Compiler.return_type directly is bad, we could also reproduce it using the Julia public API:

my_return_type(f::F) where {F} = eltype([f() for _ in 1:0])

Though from my understanding this should be equivalent.
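
A quick sanity check that the two agree (assuming a zero-argument callable):

julia> my_return_type(() -> 1.0)
Float64

julia> Core.Compiler.return_type(() -> 1.0, Tuple{})
Float64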

You shouldn’t need to mark this as :foldable since Core.Compiler.return_type already constant folds it on its own:

julia> code_typed(()) do
           Core.Compiler.return_type(sin, Tuple{Int})
       end
1-element Vector{Any}:
 CodeInfo(
1 ─     return Float64
) => Type{Float64}

Absolutely, as documented in the @assume_effects doc string. To be specific, foldable requires nortcall, which a method that calls Core.Compiler.return_type cannot satisfy.
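
One way to see the taint for yourself (a sketch; on v1.11+ the final r in the printed effects tuple is the nortcall bit, which I’d expect to come out tainted, i.e. !r, here):

rt(f) = Core.Compiler.return_type(f, Tuple{})
Base.infer_effects(rt, (typeof(sin),))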


More generally, it doesn’t make sense to use @assume_effects in this case: if the compiler developers wanted to provide foldable here, they would have. In fact, as shown above by @Mason, it already is foldable when one’s lucky.

It only makes sense to use @assume_effects in limited cases. Some examples:

  • @assume_effects :terminates_locally for a loop that’s known to always terminate, for any permissible argument types. The compiler currently makes no attempt to prove loop termination; even the simplest loops are treated pessimistically. (See the sketch after this list.)

  • for ccalls

  • Recursion can impede the effect analysis a great deal, so it might be necessary to add stronger effects in case of recursion. Of course, this is only valid in limited cases, such as if the only thing the recursion is doing is splatting or concatenating Tuple values.
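
For the first bullet, a sketch along the lines of the example in the @assume_effects doc string (fact is illustrative):

# The while loop would otherwise taint :terminates, blocking constant
# folding of calls like fact(5). The range check keeps the termination
# claim honest for all permissible inputs.
Base.@assume_effects :terminates_locally function fact(x::Int)
    0 <= x < 20 || error("argument out of supported range")
    res = 1
    while x > 1
        res *= x
        x -= 1
    end
    return res
end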


I’m actually not aware of any cases where it doesn’t fold if f is known. return_type is a tfunc.

Thanks, both. Super useful!

Regarding the claim that return_type already constant folds on its own: are there instances where this is not true?

Very interesting. What are some effects I might consider exploring?


In your case there are no arguments, but the callable (f) is still variable, and, I suppose, user-provided. As hinted at by Mason, when the type of f is not known exactly, return_type may fail to constant fold:

julia> struct F{X} <: Function
           x::X
       end

julia> function (f::F)(x = 7)
           x + f.x
       end

julia> code_typed((f -> Core.Compiler.return_type(f, Tuple{})), Tuple{F{Float32}})  # the run time call folds fine
1-element Vector{Any}:
 CodeInfo(
1 ─     return Float32
) => Type{Float32}

julia> code_typed((f -> Core.Compiler.return_type(f, Tuple{})), Tuple{F{<:Union{Float32, Float64}}}) # does not constant fold
1-element Vector{Any}:
 CodeInfo(
1 ─ %1 =   dynamic (Compiler.return_type)(f, Tuple{})::Type
└──      return %1
) => Type

There may be other examples; the documentation does not promise much.

It depends on your code. Try Base.infer_effects to see what effects are inferred (this may vary greatly with the Julia version; effect inference almost always improves with new releases). Now there’s also InteractiveUtils.@infer_effects. Also see the Compiler.Effects doc string. Usually what you want is either foldable, removable, or both, so run Base.infer_effects to see what’s missing, and then try to improve on that to reach, e.g., foldable.
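
For example, on v1.11 (a sketch; the exact bits vary by version):

julia> Base.infer_effects(sin, (Float64,))
(+c,+e,!n,+t,+s,+m,+u,+o,+r)

Reading order: c = consistent, e = effect_free, n = nothrow, t = terminates, s = notaskstate, m = inaccessiblememonly, u = noub, o = nonoverlayed, r = nortcall; + means the effect is proven, ! means it is tainted, and ? means it only holds conditionally.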


Note that a :foldable annotation wouldn’t (can’t) help here either because the information literally isn’t known at compile time. As a tfunc though, my understanding is that the return_type call will always be lifted to compile time, so in your example, it’s basically just going to wait till f is known, evaluate it at the type level, and then insert that known value into the resulting function body.

I think the purpose of nortcall is to prevent the compiler from trying to constant fold its own code. I’m not sure why it’s not safe for the compiler to abstractly interpret or concretely evaluate itself, but, considering that inference is stateful, it’s not exactly surprising either.

I think you’re only considering the case where the compiler is trying to run/compile a call of f directly. A more relevant example might be where the compiler sees that a caller of f is foldable, because someone erroneously marked the f method as nortcall/foldable, and then the compiler goes to constant fold that caller, which includes the run time call into type inference somewhere, as part of the f method. I’m out of my depth, though, just guessing.

This is very useful!! Would love to see this recommendation land in Performance Tips · The Julia Language


Do you know why this is? Also, what sorts of performance effects might result from applying this?

It’s really surprising to see this:

julia> function kernel!(x::Array, op::F) where {F}
           @inbounds @simd for i in eachindex(x)
               x[i] = op(x[i])
           end
           return nothing
       end
kernel! (generic function with 1 method)

julia> @infer_effects kernel!(randn(32), abs)
(!c,?e,!n,!t,+s,?m,!u,+o,+r)

So !t means it isn’t sure whether the loop terminates. I guess part of the issue is that Arrays are fully mutable from any task?

(Speaking of which, it would be nice if BorrowChecker.jl could exploit immutability of immutably borrowed arrays and enable more compiler effects…)

Ah, I see this is where Base.Experimental.Const comes in handy:

julia> function kernel!(x::Array, y::Array, op::F) where {F}
           @inbounds @simd for i in eachindex(x, y)
               x[i] = op(y[i])
           end
           return nothing
       end
kernel! (generic function with 1 method)

julia> @infer_effects kernel!(zeros(32), Base.Experimental.Const(randn(32)), abs)
(+c,+e,!n,+t,+s,+m,+u,+o,+r)

I think it’s just not implemented yet. Literally any appearance of for or while, unless perhaps the loop gets eliminated as dead code or constant folded, will taint terminates.

As documented in the @assume_effects doc string, foldable requires terminates, so without terminates your code will not constant fold.

No, you’re misinterpreting the effects. The effects are telling you that this code is total except for nothrow, and the reason is that the call just throws right away, I think because you restricted the second method argument to Array but passed a Const in the call.
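
Indeed, checking the call directly should confirm this (a sketch of what I’d expect):

julia> kernel!(zeros(32), Base.Experimental.Const(randn(32)), abs)
ERROR: MethodError: no method matching kernel!(::Vector{Float64}, ::Base.Experimental.Const{Float64, 1}, ::typeof(abs))

A call that is guaranteed to throw a MethodError is trivially consistent, effect-free, and terminating; only nothrow is tainted, which matches the printed effects.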

:rofl: oops


Absolutely agree; however, I don’t think the infer_effects interfaces are public yet. That said, they do seem pretty stable, so perhaps someone should make a PR.

Relevant issue:


Wow, indeed. I also find it really surprising that it’s basically never able to infer such a thing… Even for static loops!

julia> macro finite(ex)
           return esc(:(Base.@assume_effects :terminates_locally $ex))
       end
@finite (macro with 1 method)

julia> function foo()
           x = 0
           for i = 1:10
               x += i
           end
           x
       end;

julia> @infer_effects foo()
(+c,+e,+n,!t,+s,+m,+u,+o,+r)

julia> function foo2()
           x = 0
           @finite for i = 1:10
               x += i
           end
           x
       end;

julia> @infer_effects foo2()
(+c,+e,+n,+t,+s,+m,+u,+o,+r)

What sorts of performance consequences does this have? Is it mainly to prevent unrolling?

As noted, foldable and removable require terminates, so unannotated loops prevent these optimizations (constant folding and dead code elimination).

Pretty sure loop unrolling is unrelated.
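
One way to observe the consequence (a sketch; I’d expect only the annotated version to fold):

julia> code_typed(() -> foo())   # !t blocks concrete evaluation; the loop survives

julia> code_typed(() -> foo2())  # +t makes the call foldable; expect `return 55`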


Thanks.

It would be nice to have some effect generics in Julia so that I can easily declare a bunch of effects for known types, but let Julia figure things out for unknown types (maybe like the ones proposed here: blog).

Basically I’d love to have something like

function foo(ar, y)
    #= ... =#
end

Base.@assume_effects :foldable foo(::Array, ::Float64)

So that the types I use 99% of the time get maximum performance, but if someone passes a weird type, it will fall back to regular inference.

I suppose I could always do something like

function foo(ar, y)
    return _foo(ar, y)
end
Base.@assume_effects :foldable function foo(ar::Array, y::Float64)
    return _foo(ar, y)
end

function _foo(ar, y)
    #= ... =#
end

But it feels a bit unreadable. If I could write up a file src/effects.jl and add all the declarations for known types (similar to src/precompile.jl files), it seems like it would be quite nice.


Keno had explored some approaches at some point, see:

NB: with Julia v1.11 and up it’s already possible to apply @assume_effects to a code block, such as a loop, instead of to the entire method.
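
So instead of a helper macro like @finite above, one can annotate the loop directly (a sketch):

function sum_to(n::Int)
    x = 0
    # v1.11 and up: annotate just the loop instead of the whole method
    Base.@assume_effects :terminates_locally for i in 1:n
        x += i
    end
    return x
end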