Turning off type inference?

Is there a way to turn off type inference? I have a function that’s too keen on making union types that aren’t actually helpful, which causes compilation to take forever (in some cases seemingly literally forever, as I end up ^C-ing).

I had been under the wrong impression that adding ::Any in a function body or as a return type annotation would make the compiler discard the type information, but it turns out it just asserts the type while “remembering” the actual type:

julia> f(x)::Any = x
f (generic function with 1 method)

julia> @code_warntype(f(1))
Variables
  #self#::Core.Compiler.Const(f, false)
  x::Int64

Body::Int64
1 ─ %1 = Main.Any::Core.Compiler.Const(Any, false)
│   %2 = Base.convert(%1, x)::Int64
│   %3 = Core.typeassert(%2, %1)::Int64
└──      return %3

julia> f(x) = x::Any
f (generic function with 1 method)

julia> @code_warntype(f(1))
Variables
  #self#::Core.Compiler.Const(f, false)
  x::Int64

Body::Int64
1 ─ %1 = Core.typeassert(x, Main.Any)::Int64
└──      return %1

The old ::ANY annotation was changed to @nospecialize:

https://docs.julialang.org/en/v1/base/base/index.html#Base.@nospecialize
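A minimal sketch of the replacement syntax (the function name here is hypothetical):

```julia
# Pre-0.7 code wrote `g(x::ANY) = x`; the modern equivalent is:
g(@nospecialize(x)) = x

g(1)      # works for any argument type
g("abc")
```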


Hmm, @nospecialize seems to be ignored more often than not…

@nospecialize turns off specialization. It does not turn off inference.
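A quick check (function name hypothetical) showing that inference still sees through @nospecialize:

```julia
# @nospecialize stops Julia from compiling a separate specialization
# per argument type, but given a concrete call signature, inference
# still deduces a concrete return type:
h(@nospecialize(x)) = x + 1

Core.Compiler.return_type(h, Tuple{Int})  # Int, not Any
```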


This seems to do what I want, is it a good idea or is there something better?

struct Uninfer 
    x
end

uninfer(x) = Uninfer(x).x

f(x) = uninfer(x^2)
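A quick self-contained check (repeating the definitions above so it runs on its own) suggests inference is indeed blocked:

```julia
struct Uninfer
    x                 # untyped field: reads from it infer as Any
end

uninfer(x) = Uninfer(x).x
f(x) = uninfer(x^2)

Core.Compiler.return_type(f, Tuple{Int})  # Any
```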

It’s a compiler limitation.

I don’t suppose JuliaInterpreter has an @interpret macro…

I believe your code there is not always guaranteed to block type inference. Perhaps a better strategy would be to use Ref{Any}, as in

uninfer(x) = Ref{Any}(x)[]

f(x) = uninfer(x)

julia> Core.Compiler.return_type(f, Tuple{Int})
Any

This strategy essentially puts your data behind a pointer while deliberately telling Julia that the pointer could hold anything, and then immediately dereferences the pointer.

My code seems to work but I like your suggestion better because it doesn’t involve a new type.

It does indeed: Home · JuliaInterpreter.jl
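A minimal usage sketch, assuming JuliaInterpreter is installed (the function name is a stand-in, not from the original thread):

```julia
using JuliaInterpreter

expensive_to_compile(x) = x^2   # stand-in for the problematic function

# Runs the call in the interpreter instead of compiling it:
@interpret expensive_to_compile(3)  # 9
```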

Have you tried that?

It’s exactly the same, so it “guarantees” blocking inference just as much as the other one does.

Oh, I thought it interprets everything, that is, it doesn’t go back to compiled code for calls within a block unless they are whitelisted. The documentation also doesn’t mention usage in packages.

FWIW, I tried @interpret on one case that was narrow enough to involve only light computation. It did fix the compile-time issue: roughly 0.5s instead of 8s. Unfortunately it eviscerated the run time: 1s instead of 0.0004s. So it’s a no-go.