Constant propagation vs generated functions

Before getting started in Julia, I did most of my work in Haskell. It’s a great language, but a few things kept causing me trouble:

  1. The strong PL focus of the community was great, but it seemed to come at the cost of attention to numerical algorithms. I really need both.
  2. The type system is amazing, but PPL work tends to push it to its limits. Doing anything new seemed to turn into a type theory research effort, which isn’t really my thing.
  3. There’s Template Haskell for metaprogramming, but it’s not well integrated into the language itself, it’s not type-safe, and its use is often discouraged.

I feel like moving to Julia has mostly solved these problems. (1) has just been amazing. For (2), Julia’s type system isn’t quite as expressive as Haskell’s, but for the most part it seems helpful without being so constraining. And generated functions seemed to entirely solve (3).

From my experience using Haskell, I came to see the best way to get performance as “get as much information to the compiler as possible”. Types are the way to do this. The compiler is required to figure out the types before execution, so we can be sure the result is known statically. It’s just beautiful.

In Julia, my need for staged compilation led me to learn about generated functions. These are just amazing. The type system isn’t quite as powerful, but with generated functions you can pass information to the compiler much more directly: “when you see these types, generate this code”. Until relatively recently, I viewed this as a way to help the compiler. It still has the same job to do, but you get the chance to give it a push in the right direction. This is a huge amount of control, and I found it really elegant.

Recently – maybe in the last year or so – I’ve been hearing a lot of pushback against this idea. I hear a lot of talk about the optimizer and constant propagation, and recommendations to avoid using the type system too much.

This is kind of disheartening, in part because it breaks my mental model, but also because it doesn’t offer a replacement. The optimizer seems very magical and difficult to reason about. I’ve always understood staged compilation to require generated functions, which in turn require types. But now, I’m really not sure what to think. We have some information in the types and some in the optimizer, and it’s not at all clear what goes where. We can program on types, but I’ve never seen a way to query the optimizer. And the only way I see of enforcing constraints on the optimizer is @assume_effects. That seems interesting, but it’s very new, it’s not clear how robust it is, and it comes with this:

  │ Warning
  │
  │  Improper use of this macro causes undefined behavior (including crashes, incorrect answers, or
  │  other hard to track bugs). Use with care and only if absolutely required.
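For anyone who hasn’t run into it yet, here’s a minimal sketch of what such a declaration looks like (assuming Julia 1.8+; fact is just a made-up example):

# Promise the compiler that the loop terminates, so that a call like fact(5)
# can be evaluated during compilation when the argument is a constant.
Base.@assume_effects :terminates_locally function fact(x::Int)
    res = 1
    while x > 1
        res *= x
        x -= 1
    end
    return res
end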

Here’s an example of the kind of thing I’m doing, adapted from KeywordCalls.jl:

julia> @generated function keysort(nt::NamedTuple{K}) where {K}
           sorted_K = tuple(sort(collect(K))...)
           :(NamedTuple{$sorted_K}(nt))
       end
keysort (generic function with 1 method)

julia> keysort((b=1, a=2))
(a = 2, b = 1)

Because of the way this is set up, we can guarantee that the sort happens at compile time. Here’s how the compiler handles it:

julia> @code_typed keysort((b=1, a=2))
CodeInfo(
1 ─ %1 = Base.getfield(nt, 2)::Int64
│   %2 = Base.getfield(nt, 1)::Int64
│   %3 = %new(NamedTuple{(:a, :b), Tuple{Int64, Int64}}, %1, %2)::NamedTuple{(:a, :b), Tuple{Int64, Int64}}
└──      return %3
) => NamedTuple{(:a, :b), Tuple{Int64, Int64}}

Just beautiful. For Soss and Tilde, I do this sort of thing all over the place. Models are lifted to the type level, and “primitives” generate code specialized for a particular model.

This can sometimes get tricky, but luckily there are libraries to support this style. MLStyle.jl is great for manipulating ASTs, and GeneralizedGenerated.jl lets you extend generated functions to allow closures, along with some utilities for more easily lifting things to the type level.
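As a small illustration of the AST side, a pattern-match-and-rewrite with MLStyle might look like this (just a sketch; rewrite and add are made-up names):

using MLStyle

# Match a `+` call expression, capture its arguments, and rewrite it;
# anything else passes through unchanged.
rewrite(ex) = @match ex begin
    :($a + $b) => :(add($a, $b))
    _          => ex
end

rewrite(:(x + y))   # returns :(add(x, y))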

There’s lots more of this. In measure theory, we often define a measure in terms of a log-density over another measure. That second measure is the base measure. So for example,

julia> basemeasure(Normal())
0.3989 * Lebesgue(ℝ)

julia> typeof(ans)
WeightedMeasure{Static.StaticFloat64{-0.9189385332046728}, Lebesgue{MeasureBase.RealNumbers}}

Here we have a Float64 represented at the type level, using Static.jl. This has to be static so we can do things like

julia> m = For(1:10) do j
           Normal(j, 1)
           end
For{Normal{(:μ, :σ), Tuple{Int64, Int64}}}(j->Main.Normal(j, 1), (1:10,))

julia> basemeasure(m)
0.0001021 * For{Affine{(:σ,), Lebesgue{MeasureBase.RealNumbers}, Tuple{Int64}}}(#13, (1:10,))

MeasureTheory.jl recognized that the components of m have base measures of the same type. That happens to be a singleton type, so we know the base measures are exactly the same. Because of this, we can factor out the constant, making things much more efficient.

The point is, I’ve built up lots of infrastructure around this approach. It may be that somehow using the optimizer for this could work better, but it’s not at all clear to me what that would look like. In contrast, the current approach is very clear and concrete.

Hearing suggestions like “avoid types” or “avoid generated functions” leaves me a little worried about the future of this approach and Julia’s support for it. I’d appreciate any guidance on where things are going, the role of generated functions compared to that of the optimizer, and what it even means to get the optimizer to help in situations like the ones I’ve described.

33 Likes

While I think I can see the value of constant propagation, that does sound a bit worrisome to me. Out of curiosity, who exactly has been suggesting this sort of thing?

1 Like

I’ve heard this from a few different people. To be clear, I don’t hear anyone suggesting we avoid types and generated functions altogether. It’s more that I’ve seen enough “let the optimizer handle that” suggestions to see this as a general trend.

The optimizer and the type system can both be used to inform the compiler about invariants. My concern is that the optimizer seems very opaque, and it’s not at all clear how programming it really works. It’s great as a magical black box to make a given program better. But unlike the type system, I don’t see ways to use it for creating programs in the first place.

Anyway, I’m not going to give names, because I don’t want to put anyone on the spot or risk taking things out of context. But this topic seems very much like an “elephant in the room”. I’ve talked to a few others with similar concerns (again no names – I’ll let them speak for themselves). I really hope some core devs will help us out so we can better understand how to think about problems like this, and the direction the language is going.

2 Likes

For me the easiest-to-explain reasons to avoid generated functions:

  1. We don’t track backedges in the generator (nor in macros, etc.). So if you use generated functions, you can’t always Revise your code. That’s less of an issue if you use other approaches.
  2. Sometimes the compile latencies are longer with generated functions (but sometimes they are shorter; it just depends).
  3. We all want the compiler to be so awesome that you don’t need generated functions, but I don’t think that’s in our immediate future. Still, transitioning from generated to ordinary functions when the compiler can support them is not a bad thing.

Julia’s release process with PkgEval runs should :crossed_fingers: prevent serious regressions if you have tests that enforce the outcome you want.

That said, I don’t see a reason to be dogmatic about it.

With respect to avoiding types: I can’t imagine why, as the type system is one of the best things about the language. That said, one can take it too far: in a numerical algorithm, for example, one should adhere to the identity that 3 == 3.0 and ensure you’re not writing code that returns wildly different answers for different representations of the same thing. When you introduce new types, it’s also not uncommon to go through a period of “fleshing out” support for them. This is especially true of types that are not purely internal.

10 Likes

Is there a fundamental reason for that? I couldn’t even find an issue on JuliaLang.

Well, there’s a difference between “methods used by the generated code” and “methods used to generate the code.” We only add backedges for the former. This is not so different from the fact that we also don’t track backedges to types, which is why we can’t modify type definitions. Were we to add backedges to types, the amount of memory required for a Julia session would probably grow dramatically.

We also don’t add backedges to the compiler. Hence we don’t recompile the entire system image if one modifies a compiler method.

5 Likes

Thanks @tim.holy . Any thoughts on the tradeoffs of representing statically known invariants using the type system vs using the optimizer? I know how to use types, and I can dispatch on them to easily optimize special cases. With the optimizer, I’m not even sure what that looks like, or how I should be thinking about the design space.

3 Likes

My sense is that the regression would be a performance regression mostly, and those typically aren’t caught by PkgEval, right?

In general, @cscherrer’s point resonates a lot with me: relying on the optimizer feels very opaque. I have no idea what kinds of things it is able to do; in cases where I have tried to rely on it, I mostly end up staring at @code_lowered output and fiddling around until I have a sense that things are OK (and of course none of that is continually validated as Julia or my code updates). For me, generated functions are often much, much easier to reason about. I have many situations where I can generate code that I know will run fast because the structure is really trivial. Overall, the dev process involves much less guessing and poking around with generated functions, at least if one is not among the few folks who understand the compiler inside and out.

6 Likes

The keysort example is very difficult to imagine handing off to the optimizer; it’s a great use case for generated functions. Your WeightedMeasure is probably more doable. I’m not sure why you’d need StaticFloat64, since Float64s can be used as type parameters. Your construction of m looks like a lazy tuple, and checking whether all elements of a tuple have the same type is easy (is it an NTuple{N,T}?). But these are just rough impressions.
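For that last check specifically, plain dispatch already handles it (a tiny sketch):

# Homogeneous tuples match NTuple{N,T}; anything else falls through.
all_same_type(::NTuple{N,T}) where {N,T} = true
all_same_type(::Tuple) = false

all_same_type((1, 2, 3))     # true
all_same_type((1, 2.0, 3))   # false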

You can test things like @allocated(some_expr) == 0 and @inferred(some_expr). That might catch most issues. If you need to guarantee that all the internals are also inferrable (@inferred only checks the final return type), you could use JET.report_opt in your tests. I think that would probably catch most optimizer issues, although it wouldn’t check the “quality” of the generated code.
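Such a test might look something like this (a sketch; assumes JET.jl, and uses the keysort example from above):

using Test, JET

nt = (b = 1, a = 2)
@test @inferred(keysort(nt)) == (a = 2, b = 1)   # return type is inferred
@test @allocated(keysort(nt)) == 0               # no heap allocation
@test_opt keysort(nt)                            # JET: no inference failures in the call graph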

That would work if the weight were always static. But that’s really just a special case. WeightedMeasure looks like this:

struct WeightedMeasure{R,M} <: AbstractWeightedMeasure
    logweight::R
    base::M
end

We always need to be able to take the logweight field and do arithmetic with it. It would be possible to have a separate struct for the static case, but it’s much more convenient to handle them at once. For the case above we then have R == Static.StaticFloat64{-0.9189385332046728}, and MeasureTheory can recognize statically when a collection of measures have the same static weight. Everything works out great.
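As a small example of why the static representation helps here (a sketch using Static.jl’s static constructor):

using Static

# Two weights with the same value share a singleton type, so
# "same weight" is decidable from the types alone.
w1 = static(-0.9189385332046728)
w2 = static(-0.9189385332046728)
typeof(w1) === typeof(w2)   # true

w1 + 1.0   # arithmetic with the wrapped value still works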

Well, I guess the one thing that doesn’t work out is that Static v0.6 has lots of invalidations, which v0.7 tries to solve by just making every static type <: Number. This breaks a lot of dispatch. As I understand it, the best way to fix this would be to keep the v0.6 hierarchy and have Static absorbed into Base.

cc @Zach_Christensen

That would indeed have ruled out anything besides generated code in prior versions of Julia, but the optimizer is getting to be pretty amazing:

julia> struct MyType{T} end

julia> x = MyType{3.14}()
MyType{3.14}()

julia> y = MyType{2.71}()
MyType{2.71}()

julia> f(x::MyType{S}, y::MyType{T}) where {S, T} = MyType{S+T}()
f (generic function with 1 method)

julia> f(x, y)
MyType{5.85}()

julia> using Test

julia> @inferred f(x, y)
MyType{5.85}()

:sunglasses:

Even cooler:

julia> g(x::MyType{T}) where T = MyType{log(T)}()
g (generic function with 1 method)

julia> @inferred g(y)
MyType{0.9969486348916096}()

This is because Oscar and others have written so many math functions in Julia, and because of really phenomenal work by quite a few people (Keno, Shuhei, Jameson, Jeff, and others) on inference.

11 Likes

StagedFunctions.jl was one attempt to fix that.
It was made by NHDaly during the JuliaCon Hackathon in 2019. It’s based on the same techniques as Tricks.jl (also made there, at the same table), but it’s more complex because it needs to handle dynamic dispatch (for which we also don’t create backedges, because normally we don’t need them).

2 Likes

This is very nice, and would solve the problem if I had a small set of functions I knew I’d be calling. But MeasureTheory users might apply any function to the log-weight. Though the common Julia recommendation is to use type constraints only for dispatch, many common packages still use them to restrict the domain of a function. This forces numeric-like things to actually fit into the type hierarchy.
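A tiny illustration of the issue (hypothetical names):

# A numeric-like wrapper that is not <: Real.
struct LogWeight
    value::Float64
end

halve(x::Real) = x / 2

halve(1.0)              # works
halve(LogWeight(1.0))   # MethodError, even though the value is perfectly numeric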

1 Like

Constant propagation and generated functions each have problems for this specific use case. Constant propagation isn’t guaranteed to happen, which you’d want here, and generated functions require lifting things to the type domain, which may mean expensive recompilation here.

To me it sounds a little like you want something like comptime in Zig? That is, you want a guarantee that something is a compile-time constant that the compiler must propagate further, with a semantic way to require that something be known at compile time.

4 Likes

For the MeasureTheory example, the weight can only be pulled outside if the weights of the components are identical. If not, we just end up with another For product measure, which would have a different type than this. Unless we’re ok with type instability, I think this requires the weights to be represented at the type level.

I should emphasize that these are just a couple of examples, and there are plenty more. The big concern isn’t how I could rewrite these particular examples; it’s more about getting a better understanding of the design space and where the language is going. I’m also a little concerned about type-level representations losing favor, and I hope these examples help to show why this approach is important.

1 Like

So we can just do math in the type system now? Looks amazing!
:mindblown:

1 Like

It’s fine to keep using generated functions if they’re working well for you. After all, we can’t exactly remove them :slight_smile:
We use them in Base for cases just like your keysort function. But ideally, this function could be written as

keysort(nt::NamedTuple{K}) where {K} = NamedTuple{sort(K)}(nt)

plus a declaration that it is ok to call sort at compile time (we don’t have a sort method for tuples, but we could). The difference is that for this function you don’t really need to generate code, you just need the compiler to treat specific functions as pure. We only very recently gained the ability to make such declarations (@pure was not widely usable since it was not specific enough). I think this style has many advantages:

  • We can generate a generic implementation that works entirely at run time, e.g. in the interpreter, if we have to.
  • It can be inferred on abstract types (generators can only be called on concrete types).
  • There is no quoted code, which can get hard to read in more complex cases.

So I think you can keep using generated functions, plus related packages like GeneralizedGenerated, but gradually start experimenting with the pure-function style when it makes sense. As for whether we would deprecate @generated in a v2.0, I have no idea. It would make our lives easier in some places if the feature didn’t exist, but there is not yet a full-fledged conspiracy to eliminate it.
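A rough sketch of how that style might look with Base.@assume_effects (sorted_keys is a made-up helper, and whether the call actually folds depends on your Julia version and the compiler):

# Declare that this helper is safe to evaluate at compile time.
Base.@assume_effects :foldable function sorted_keys(K::Tuple{Vararg{Symbol}})
    Tuple(sort(collect(K)))
end

keysort2(nt::NamedTuple{K}) where {K} = NamedTuple{sorted_keys(K)}(nt)

keysort2((b = 1, a = 2))   # (a = 2, b = 1), ideally with the sort folded away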

19 Likes

Only if you include them in your standard tests. I think Symbolics.jl does that.

Thanks @jeff.bezanson . These are good arguments – you make it very easy to see the appeal of this approach.

Right, so currently we enforce static computation using the type system, but in 1.8 and later there will be other ways to transition toward leaning on the optimizer for this, at least in some cases. That makes sense.

I hadn’t even considered that @generated might ever disappear. I’m thinking more of type-based programming in general. Types are familiar, robust, and easy to reason about. To the extent the current applications of types become the job of the optimizer, I hope that change can be gradual, and that we don’t lose any functionality along the way.

But since you bring it up, for me generated functions are one of the main draws of Julia. I can do staged programming (thanks to GG), write compiler-like code without digging into the Julia compiler, easily get rid of type instabilities, and know that the code I write will be executed statically. It’s hard to imagine doing this work without them.

15 Likes