Alternative to redefining a @generated function to observe (subsequently static) state

Brace yourselves - I am about to commit code to my package that calls eval to redefine a @generated function so that it can observe global state (that will normally not change after the redefinition).

I actually think this is the best way to proceed, but all of these techniques are usually signs of bad design, so I want to see if anyone has any other suggestions (see question at bottom).


The context is POMDPs.jl. This is a complex package since it has two user classes: problem writers, who define methods for functions in the interface, and solver writers, who use functions from the interface. In addition, there are really two interfaces: the explicit interface, which deals with probability distributions, and the generative interface, which deals only with simulated samples. See the POMDPs.jl documentation for more info.

I am trying to allow other packages that extend the interface to “hook into” the automatically synthesized functions of the generative interface. My example is constrained MDPs/POMDPs, in which, in addition to the primary reward, there is another cost (denoted with c) that must be constrained.

The gen function can sample a tuple of various random variables from the model, for example a reward, :r, an observation, :o, or a next state, :sp. There is an issue with more information, and docs that explain the new functionality.

Cartoon Version

Here is a cartoon version that has the main features of what I am implementing:

```julia
module POMDPs
    export gen

    # in reality genvar_registry contains a lot more than just Functions
    genvar_registry = Dict{Symbol, Function}()

    gendef = quote
        @generated function gen(::Val{symboltuple}, m, s, a) where symboltuple
            # in reality, genvar_registry and symboltuple are used to create
            # something more complicated.
            func = genvar_registry[first(symboltuple)](m)
            return quote
                return ($func(m, s, a),)
            end
        end
    end
    eval(gendef)

    function add_genvar(name, func)
        genvar_registry[name] = func
        eval(gendef) # redefine gen so its generator observes the new entry
    end
end
```

```julia
module ConstrainedPOMDPs
    import Main.POMDPs

    function cost end

    POMDPs.add_genvar(:c, M -> cost)
end
```

```julia
using Main.POMDPs
using Main.ConstrainedPOMDPs

struct MyPOMDP end
ConstrainedPOMDPs.cost(::MyPOMDP, s, a) = abs(s) + abs(a)

@show gen(Val((:c,)), MyPOMDP(), 1, 2) # (3,)
```

Real Version

The real version of gen is implemented in the POMDPs.jl repository (currently it does not have the eval part).


Is there a better design to accomplish this that avoids the eval?

I think the root of the issue here is the global genvar_registry. Is that really a good idea? It reminds me of old versions of JuMP that had macros like @variable x rather than @variable model x, where in the former case information would be stored in some opaque global variable. But that means you can’t have two completely independent models in the same session, and results in issues with thread safety. So I think that’s the origin of the code smell and the need for all these acrobatics in this case as well.


@tkoolen thanks for the response! Yeah, I would like to come up with an alternative to the genvar_registry, but I haven’t been able to.

Fortunately, it won’t have the same problems as you mention in JuMP because “genvars” represent a different concept from variables in an optimization problem. Instead, they represent nodes in the Dynamic Bayesian Network that defines the MDP/POMDP; they define the structure for a class of problems. Since they aren’t parts of a specific model, a problem writer will never use add_genvar. add_genvar will normally only be called from within modules that add additional concepts for expressing more exotic (PO)MDP-like problem structures. Thus, all of the evaled code will be executed at module-load time, and I don’t think there will be performance/threading issues.

I would do as much as possible to get rid of the genvar registry. Wouldn’t adding additional concepts be better done by adding new types that are passed in, and custom methods that dispatch on them?

Thanks @Raf - that is a good suggestion. It is getting me further in my thinking. But I would really like to be able to accommodate the following shorthand for problem writers to use:

```julia
gen(::MyPOMDP, s, a, rng) = (sp=s+a, r=s^2)
```

That means I have to translate the names from the returned NamedTuple into an object of one of the new types. Is there a way to do that without some sort of a registry?
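One registry-free possibility (a sketch only; StateNode, RewardNode, node_type, and translate are hypothetical names, not part of POMDPs.jl) is to map NamedTuple keys to node objects by dispatching on Val:

```julia
# Hypothetical sketch: translate NamedTuple keys into node-type objects
# via dispatch on Val, instead of looking them up in a global registry.
struct StateNode end
struct RewardNode end

node_type(::Val{:sp}) = StateNode()
node_type(::Val{:r})  = RewardNode()

# map each key of the returned NamedTuple to its node object
translate(nt::NamedTuple) = map(k -> node_type(Val(k)), keys(nt))

translate((sp = 3, r = 9))  # -> (StateNode(), RewardNode())
```

Downstream packages could then extend node_type with new methods, which plays the same role as add_genvar but stays in the method table instead of a global Dict.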

(see the discussion linked above for more context)

I guess maybe the POMDP type needs a DBNStructure trait. I think that could potentially replace the registry (I really wish there were a first-class trait concept in Julia).
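A minimal sketch of what such a trait could look like (DBNStructure here is hypothetical, as are the model types and node lists; this is not the actual POMDPs.jl API):

```julia
# Hypothetical trait sketch: each model type declares which DBN nodes it has,
# so no global registry is needed.
abstract type MyPOMDPType end

struct PlainPOMDP <: MyPOMDPType end
struct ConstrainedPOMDP <: MyPOMDPType end

# default structure: next state, observation, reward
DBNStructure(::Type{<:MyPOMDPType}) = (:sp, :o, :r)
# a constrained problem adds a cost node
DBNStructure(::Type{ConstrainedPOMDP}) = (:sp, :o, :r, :c)

DBNStructure(PlainPOMDP)        # -> (:sp, :o, :r)
DBNStructure(ConstrainedPOMDP)  # -> (:sp, :o, :r, :c)
```

The synthesized gen could then consult DBNStructure(typeof(m)) instead of a mutable global, so no redefinition via eval is needed.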

You can pretty much always do this kind of thing with types and methods. Types can have fields, POMDP types can have required interface methods, i.e. they must define a constructor that takes a NamedTuple, etc.

Why not lift it into the type domain right away, and do away with named tuples?

```julia
abstract type GenVar end

gen(tup::NTuple{N, GenVar}, m, s, a) where {N} = ntuple(i -> gen(tup[i], m, s, a), N)

# implementors must do stuff like
struct MyGenvar <: GenVar end
gen(::MyGenvar, x::SomeModel, s, a) = ...
```

Then users will need to call things like x, y, z = gen((X(), Y(), Z()), m, s, a).

That also avoids name-clashes if two downstream packages define genvars with the same name.
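A runnable version of the sketch above, with placeholder genvar and model types (SP, R, and LineModel are made up for illustration):

```julia
abstract type GenVar end

# dispatch a tuple request to the per-variable methods;
# tuple covariance makes (SP(), R()) match NTuple{2, GenVar}
gen(tup::NTuple{N, GenVar}, m, s, a) where {N} = ntuple(i -> gen(tup[i], m, s, a), N)

struct SP <: GenVar end  # next state
struct R  <: GenVar end  # reward

struct LineModel end  # hypothetical model

gen(::SP, ::LineModel, s, a) = s + a
gen(::R,  ::LineModel, s, a) = s^2

sp, r = gen((SP(), R()), LineModel(), 1, 2)  # -> (3, 1)
```

Because each genvar is a distinct type, two downstream packages can each define their own without any possibility of a name collision in a shared registry.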

@tkoolen @Raf @foobar_lv2, thanks for responding and advising me away from this! For anyone happening on this thread, we ended up encoding the information that we needed as a trait of the POMDP model type, so there is much more flexibility and no broken @generated rules.
