Eval cannot be used in a generated function

I would like to create an ad hoc new type inside a generated function by eval'ing some code like

@eval begin
  struct MyType end
end

but this gives the error

eval cannot be used in a generated function

It would really simplify my life if I could create such an ad hoc new type within a generated function.
Unfortunately, I cannot minimize the example further (the real context where this appears is far too complicated for this small question).

Does anyone know why this is so?
Does anyone know a workaround?

This is what the documentation says about this:

Generated functions must not mutate or observe any non-constant global state (including, for example, IO, locks, non-local dictionaries, or using hasmethod ). This means they can only read global constants, and cannot have any side effects. In other words, they must be completely pure.

In other words, generating the function must not have any side effect, and creating a new type is a side effect. I think (but am not sure at all) that this is because you don’t control exactly when the function will be generated, nor even whether it will be generated multiple times for the same argument types.

Specifically, a generated function is called within the Julia compiler (which is why Cassette, Zygote, CUDAnative, et al. use them), and as such it should be idempotent and must not modify global state. Defining a new type within a generated function would violate both constraints.

What I am confused about is why you need to eval within a generated function. @schlichtanders can you provide some extra details on what exactly you’re trying to achieve? Maybe we can find a better approach to solve whatever problem you’re working on.

@generated functions are functions. You can’t define types in functions anyway. This seems like a very strange misuse of Julia types, and it would be helpful to know what you’re trying to do.

It sounds like you might want to use a macro? What’s the use case?

I’m not sure why you would need @eval; in Julia it’s mostly used for defining a bunch of similar methods at top-level scope, just to keep things DRY.
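To make that typical top-level use of @eval concrete, here is a small sketch (the function names and factors are made up for illustration):

```julia
# the usual top-level role of @eval: generating a family of similar methods
for (name, factor) in ((:double, 2), (:triple, 3))
    @eval $name(x) = $factor * x
end

double(5)  # 10
triple(7)  # 21
```

Note that this runs at module top level, which is exactly the scope a generated function does not have.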

If you need any metaprogramming at all, you need a macro, not a generated function or eval.

Macros are general tools for generating basically any kind of Julia code in any scope.
Generated functions are only for defining functions.

The problem with that, and with what you are trying to do, is the scope of functions. Defining types, constants, or methods for the global method table has to happen at the top (module) level scope. This means that, as others have said, adding them in a generated function can’t be possible, because adding them in any function isn’t possible:

julia> f(x) = struct A end
ERROR: syntax: "struct" expression not at top level

julia> f(x) = const A = 2
ERROR: syntax: unsupported `const` declaration on local variable around REPL[4]:1

You can do all of these things in a macro, but you still have to think about the scope of the generated AST: defining a type inside a function won’t work using a macro either.
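A minimal sketch of that point (the macro name `@deftype` is made up here):

```julia
# a macro can expand into a struct definition, but the expansion
# must still land at top-level scope
macro deftype(name)
    esc(:(struct $name end))
end

@deftype Foo           # works: the struct definition lands at top level
Foo()                  # an instance of the freshly defined type
# f() = @deftype Bar   # still an error: "struct" expression not at top level
```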

This became impossible (otherwise known as safe) in Julia 0.5. For the ensuing discussion, see

https://github.com/julialang/julia/issues/16806

Since then, Julia has gained various features which make “generated types” seem less necessary. If you can describe your use case in more detail, people here might have some specific suggestions.

Julia 0.4 was the generated function wild west, when all was still possible… (and dangerous).

I am very happy that those days are over, and the importance of generated functions is gradually diminishing. They remind me of the attitude of some experienced Common Lisp hackers, i.e. that everything can be done with macros and/or the CLOS MOP, which are indeed insanely powerful, at the cost of suboptimal performance, which you then fix with more hacks, until the code becomes very hard to maintain. I think the ideal for Julia is that the language is kept powerful, but in ways that cooperate with the compiler.

I want to ground UnionAll types to concrete types in order to get better type inference on them using Core.Compiler.return_type.

Concretely I want to use the following equivalent:

  • if a function is defined for a UnionAll type MyType (having only one type parameter), then it should also be defined for the type MyType{ACompletelyNewType}
  • this also seems to hold in the other direction

Hence, if I am able to create a new type on the fly, I can guarantee type inference.
It turns out many types define constraints on their type parameters, which a possible new type also needs to inherit.
I am currently solving this by creating new types on the fly with the respective constraints, like

struct MyNewTypeNumber <: Number end

The alternative I can think of is to define all these new types beforehand and to guarantee by convention that no one uses them (i.e. they truly stay new types).
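To make the grounding idea concrete, here is a self-contained sketch (the function `firstel` is invented for illustration):

```julia
# a fresh marker type which, by convention, is used nowhere else
struct MyNewTypeNumber <: Number end

firstel(v::Vector{T}) where {T <: Number} = v[1]

# plugging the marker into the UnionAll gives inference a concrete type to work with:
Core.Compiler.return_type(firstel, Tuple{Vector{MyNewTypeNumber}})  # MyNewTypeNumber
```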

some extra notes:

I finally use this type inference in a function which checks whether some function is defined for given input parameters. Think of it as hasmethod2: like hasmethod, but able to deal with forwarding definitions such as f(args...; kwargs...) = g(args...; kwargs...).
This works like a charm with the created new types :smiley:
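As I understand the forwarding problem, a small sketch of what plain hasmethod cannot see through (definitions are illustrative):

```julia
g(x::Int) = x + 1
f(args...; kwargs...) = g(args...; kwargs...)

hasmethod(f, Tuple{Int})     # true
hasmethod(f, Tuple{String})  # also true — the varargs method matches, yet f("a") throws

# inference, by contrast, can look through the forwarding:
Core.Compiler.return_type(f, Tuple{Int})     # Int
Core.Compiler.return_type(f, Tuple{String})  # Union{} — no matching method for g
```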

Okay, why do I need to have this in a generated function?
Because I want to dispatch on whether a function is defined or not.

You cannot do this statically. Jameson said that isapplicable might become inferable sometime, then you could do it, even without generated functions.

If you’re fine with using Julia 0.4, then you should check out GitHub - mauro3/Traits.jl: Exploration of traits in Julia, which can indeed do this (and did not need to define types in generated functions, but did some other wild-west stuff).

Julia’s compiler has gotten so good that it’s almost never necessary to use generated functions for performance, besides working around compiler edge cases.

If you hit such a compiler edge case, you should first create a minimal working example (MWE) that clearly shows why you need a generated function to get the performance you want.

If you don’t have such a clear cut example, I’d strongly recommend avoiding generated functions.

I’m not sure you really want to create a new type like that; it sounds like a bad idea (Julia already suffers a lot from slow compilation, and this would just make it much worse).

But if you really must create a new type for type inference, you could do something like this:

struct Wrapper{Data, ID}
    data::Data
end
Base.getproperty(x::Wrapper, field::Symbol) = getfield(getfield(x, :data), field)
Base.setproperty!(x::Wrapper, field::Symbol, val) = setfield!(getfield(x, :data), field, val)
Wrapper{ID}(data::Data) where {ID, Data} = Wrapper{Data, ID}(data)

So you can just increase the ID in the generated function to force a new specialization.
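Assuming the methods above are meant to be defined on Wrapper, usage might look like this (the NamedTuple payload is just for illustration):

```julia
w1 = Wrapper{1}((a = 1, b = 2.0))   # ID = 1
w2 = Wrapper{2}((a = 1, b = 2.0))   # same payload, but a distinct concrete type
w1.a                                 # 1, forwarded to the wrapped NamedTuple
typeof(w1) === typeof(w2)            # false: the types differ only in ID
```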

Btw, if I were you, I’d put a considerable amount of time into benchmarking, designing, creating minimal examples, etc., to really make sure that you need to go down the route you’re on.

These kinds of patterns are almost guaranteed to break with new Julia versions and are very hard to maintain, debug, etc. So if there is an alternative way that doesn’t rely on Core.Compiler.return_type, eval, and generated functions, it should be greatly preferred, even if it ends up with a bit more code or a less magical API.
This has been proven true over and over again in Julia Base and other low level packages :wink:

You cannot do this statically. Jameson said that isapplicable might become inferable sometime, then you could do it, even without generated functions.

If you really need to specialize on the availability of functions and do stuff in the type domain that need generated functions, I recommend isolating those cases to some small basic functions.

E.g. do:

# @pure may be enough if you already do fishy stuff ;) If not, you could also use a generated function here.
# Of course this is not actually pure, since applicable can change with every new method definition,
# but I guess you will be fine with this being less dynamic, for increased performance.
Base.@pure static_isapplicable(f, args...) = applicable(f, args...)

function myfunc(f, args...)
    if static_isapplicable(f, args...)
         ...
    else
         ...
    end
end

Instead of:

@generated function myfunc(f, args...)
       if applicable(f, args...) # doesn't work for types, but let's keep the example simple
             return quote ... end
       else
             return quote ... end 
       end
end

This way it becomes more isolated and a bit easier to maintain, and if e.g. Base.applicable becomes inferable, you can just swap it in. You can also move stuff into the type domain by returning Val(SomeConstant), which you could create in a generated function; then you’re able to use those results in normal functions while fully specializing on them :wink:
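A sketch of that Val pattern (the names `static_fieldcount` and `describe` are made up; `fieldcount` is harmless enough to run at generation time):

```julia
# bake a compile-time constant into a Val in one small generated function,
# then dispatch on it from ordinary functions
@generated function static_fieldcount(::Type{T}) where {T}
    :(Val($(fieldcount(T))))
end

describe(x::T) where {T} = describe(x, static_fieldcount(T))
describe(x, ::Val{N}) where {N} = "a $(typeof(x)) with $N fields"  # fully specialized on N

describe(1 + 2im)  # "a Complex{Int64} with 2 fields"
```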

Eh, as someone who writes lots of generated functions for performance, I disagree.

To be fair, it has been a long time since I actually tried not using generated functions in those cases.
And you need to be doing something deliberate, e.g. applying some computational kernel that’s based on SIMD width. In those cases, 2x+ better performance is typical.

Did you just immediately nullify your disagreement? :smiley: I am, after all, talking about recent changes that make this possible.
And of course, there are still valid use cases. But, as someone who has also been writing lots of generated functions, I was able to cut down a lot of my generated functions and seldom write any new ones!

I think Mauro is spot on with the suggestion about traits.

I guess what you’re trying to do here is emulate a missing language feature: the ability to dispatch on which abstract interface a given type satisfies. I don’t have a solution for you, but you might find the following extended discussion interesting:

https://github.com/JuliaLang/julia/issues/6975
