Marking more standard library methods as pure to enable LLVM optimizations?

LLVM can apply some amazing optimizations to arithmetic code (including most of the techniques in Hacker's Delight) if it knows that various operands are constant. However, many Julia functions are not marked as pure, which means that on top of their inlined bodies possibly being executed on each loop iteration, the further optimization passes don't fire.

Taking a simple example, x % 1024 gets optimized by LLVM to cheap bitwise operations (more impressively, it does the same for non-powers of 2, using some nontrivial bit twiddling with a magic number), but x % 2^10 does not and compiles to a long segment of code. On the other hand, if I define Base.@pure pure_exp(a,b) = a^b, then x % pure_exp(2,10) does get optimized.
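For reference, a minimal runnable sketch of the experiment described above (the macro is spelled Base.@pure; it is internal and unexported, so this is for experimentation only, as discussed later in the thread, and the names f_slow/f_fast are hypothetical):

```julia
# Base.@pure is an internal, unexported macro -- experimentation only.
Base.@pure pure_exp(a, b) = a^b

f_slow(x) = x % 2^10            # 2^10 may be computed at run time here
f_fast(x) = x % pure_exp(2, 10) # pure_exp result gets constant-folded

# Both compute the same value; compare the generated code with @code_llvm.
f_fast(12345)  # == 12345 % 1024
```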

So this tells me that there should be some low-hanging fruit in marking more methods of core arithmetic operators as pure. Is it possible to declare a particular set of methods of a function pure without duplicating generic code?


I have a related question about Pure in Julia. Does it apply to whole functions, or only individual methods? That is, can I still implement non-pure methods for pure functions in Base?


In this particular case of ^, you don’t actually need @pure. It’s just because Base.literal_pow is defined only up to x^3:

It’s actually pretty easy to extend Base.literal_pow to higher order:

julia> f(x) = x % 2^10
f (generic function with 1 method)

julia> @code_llvm f(1)

;  @ REPL[1]:1 within `f'
define i64 @julia_f_17050(i64) {
; ┌ @ none within `literal_pow'
; │┌ @ none within `macro expansion'
; ││┌ @ intfuncs.jl:221 within `^'
     %1 = call i64 @julia_power_by_squaring_7072(i64 2, i64 10)
; └└└
; ┌ @ int.jl:229 within `rem'
   switch i64 %1, label %oksrem [
    i64 0, label %fail
    i64 -1, label %after_srem
   ]

fail:                                             ; preds = %top
   call void @jl_throw(%jl_value_t addrspace(12)* addrspacecast (%jl_value_t* inttoptr (i64 139997646836320 to %jl_value_t*) to %jl_value_t addrspace(12)*))
   unreachable

oksrem:                                           ; preds = %top
   %2 = srem i64 %0, %1
   br label %after_srem

after_srem:                                       ; preds = %top, %oksrem
   %3 = phi i64 [ %2, %oksrem ], [ 0, %top ]
; └
  ret i64 %3
}

julia> @inline Base.literal_pow(::typeof(^), x::Base.HWNumber, ::Val{p}) where p =
           if !(p isa Int)
               x ^ p
           elseif p < 0
               Base.literal_pow(^, inv(x), Val(-p))
           elseif p == 0
               one(x)
           else
               *(ntuple(_ -> x, p)...)
           end

julia> @code_llvm f(1)

;  @ REPL[1]:1 within `f'
define i64 @julia_f_17057(i64) {
; ┌ @ int.jl:229 within `rem'
   %1 = srem i64 %0, 1024
; └
  ret i64 %1
}

You can implement non-pure methods as well, as long as you do so before the function is actually called in dispatch. This is what I have done extensively in Grassmann.jl. Please don't dwell on my personal usage of it here; I don't want to have to explain and justify it constantly. The reason I use it is similar to what the OP is asking about.

Whether to apply it in the OP's LLVM situation I can't say, since I don't know all the relevant compiler internals.


A trick I’ve seen used sometimes is:

@eval f(x) = x % $(2^10)

This will ensure that 2^10 is evaluated before the function f is defined.
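A quick sketch of why this works: the $(...) interpolation runs when @eval evaluates the definition, so the compiled method body contains the literal 1024 rather than a call to ^:

```julia
# The $-interpolation is evaluated when @eval runs, so the method body
# contains the literal 1024, not a power computation.
@eval f(x) = x % $(2^10)

f(3000)  # == 3000 % 1024 == 952
```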

It would be neat if one could write

f(x) = x % $(2^10)

and that would do the same thing as the above. (Currently, it throws a syntax error.)

Another option is

@generated f(x) = :(x % $(2^10))
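For completeness, a runnable sketch of the generated-function variant (fg is a hypothetical name to avoid clashing with the f defined earlier): the body expression is built when the function is generated, so 2^10 is interpolated as a literal.

```julia
# The returned expression is constructed at generation time, so 2^10
# is already a literal in the method body.
@generated fg(x) = :(x % $(2^10))

fg(3000)  # == 952
```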

Wouldn't you have to ensure that f doesn't specialize on x and generate a new method for every type of x?

To elaborate, what I’ve done in Grassmann.jl is to define some of my own dispatch such as

module Grassmann
@pure binomial(n::Int,k::Int) = Base.binomial(n,k)

This shadows Base.binomial with a module-local method that is pure; the Base method itself is untouched.

It is not recommended to make the base arithmetic pure, but you can easily make pure versions of base methods locally by simply redirecting dispatch. This does not affect the global behavior, only local.
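A self-contained sketch of that redirection pattern (PureShim is a hypothetical module name; Grassmann.jl does the equivalent with binomial):

```julia
module PureShim
# Shadow Base.binomial with a module-local method marked pure.
# Base.binomial itself is untouched; only code that resolves to
# PureShim.binomial sees the pure version.
Base.@pure binomial(n::Int, k::Int) = Base.binomial(n, k)
end

PureShim.binomial(5, 2)  # == 10
Base.binomial(5, 2)      # == 10, global behavior unchanged
```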


Why don’t you want new methods created for every type x could take? That’s how regular Julia functions work too, not just generated functions.

If you see that marking more Base functions as pure improves speed, then create pull requests in the Julia repository. I am sure people will review and merge them if the performance benefit is real.


Isn't that why @nospecialize exists? There are times you might want 2^10 to be evaluated already but don't want this to change for every possible type of x. @generated f(x) = :(x % $(2^10)) is admittedly a sufficiently simple example that this would likely not be a problem, but I'm thinking of instances where the one line you want precomputed is nested 10 lines into a function.
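One way to get both behaviors, sketched with hypothetical names MASK and g: hoist the constant to top level (so it is evaluated exactly once) and use @nospecialize on the argument:

```julia
const MASK = 2^10  # evaluated once, at top level

# @nospecialize asks Julia not to compile a fresh specialization of g
# for every concrete type of x; the precomputed constant is unaffected.
g(@nospecialize(x)) = x % MASK

g(5000)  # == 5000 % 1024 == 904
```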

Feel encouraged to make PRs adding @pure to things in Base.
It is one way to quickly learn how it can and can’t be used.
Since people will tell you.

Outside of that context, leave @pure alone.
E.g. don't go using it in your package.

Unless you really know what you are doing.
It isn’t the same as what you may think of as pure.
It is far stricter than LLVM's notion of purity.

It requires:

  • the method must not have side effects
  • the method must not depend on global state
  • the method must not call any Julia generic functions, including (but not limited to) == and iterate (which means no splatting and no for loops)

You can only call builtins and intrinsics.
There is an escape clause: you can sometimes call generic-function methods if you know exactly which one will be called (so all your types are constrained) and you know that if it ever gets overwritten, the language itself will break. (This includes things like !(::Bool) and iterate(::Tuple).)

If you violate this and the method for anything you call changes, because, say, a new more specific overload is added, it can give silently incorrect results. If you are lucky, it might segfault instead.
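A sketch of that failure mode, with hypothetical names my_len, bad_pure, and h (do not do this in real code):

```julia
# A @pure method may only call builtins/intrinsics; calling a
# user-extendable generic function like my_len violates the contract.
my_len(x) = 1
Base.@pure bad_pure(x) = my_len(x)

h() = bad_pure((1, 2, 3))  # the result may be constant-folded into h

# If someone later defines my_len(x::Tuple) = length(x), already-compiled
# callers of bad_pure may silently keep returning the old answer.
h()  # == 1
```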

It is not exported for a reason.
It is not for general use and is basically an implementation detail of the language. Not part of the public API.

The LLVM purity is a different concern.
Not the same.
Handled by a different part of the compiler stack.

We need ways to better communicate things down to LLVM, that is true.