Discussion on Base.@pure usage

[edit 21/03/07: the discussion in this thread is about correct @pure use, triggered by the @pure annotation in my initial question about NamedTuple type processing “with zero runtime costs”, which follows]

I have a custom Base.getproperty implementation for a quite generic type PStruct, which uses a reflection technique on the type parameters given in a NamedTuple type.
It works, but it is slow (more than a factor of 1000 slower than hand-coding its methods without runtime reflection).

I am asking for help in coding the function in a way that runtime reflection “compiles away”.
My topic is somewhat similar to this discussion: efficient-reflection-on-structs; however, I could not successfully adapt the solution hints given there.

Code snippet with the function in question (full code is available as a Julia package, see below):

primitive type PStruct{T<:NamedTuple} 64 end

Base.@pure function Base.getproperty(x::PStruct{T},s::Symbol) where T<:NamedTuple
    @inbounds begin
        shift = 0
        types = T.parameters[2].parameters
        syms = T.parameters[1]
        idx = 1
        while idx <= length(syms)
            type = types[idx]
            bits = bitsizeof(type)
            if syms[idx]===s
                v = _get(reinterpret(UInt64,x),Val(shift),Val(bits))
                return _convert(type,v)
            end
            shift += bits
            idx += 1
        end
    end
end

# used helper methods look like:

@inline _get(pstruct::UInt64, shift, bits) = (pstruct>>>shift) & _mask(bits)
@inline _get(pstruct::UInt64, ::Val{shift},::Val{bits}) where {shift,bits} = (pstruct>>>shift) & _mask(bits)

Base.@pure bitsizeof(::Type{T}) where T = sizeof(T)*8
bitsizeof(::Type{Bool}) = 1
# more methods of bitsizeof exist, very similar structure

_convert(::Type{type},v::UInt64)            where type          = convert(type,v)
_convert(::Type{UInt64},v::UInt64)                              = v                 # to avoid ambiguity
_convert(::Type{type},v::UInt64)            where type<:Signed  = (v%Int64)<<(64-bitsizeof(type))>>(64-bitsizeof(type))
# more methods of _convert exist, very similar structure
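As a self-contained illustration of the bit-extraction technique used by these helpers: `_mask` is not shown in the snippet above, so its definition here is an assumption about its behavior, and `signext` just restates the shift trick used in the `Signed` method of `_convert`.

```julia
# Assumed definition of _mask: the low `bits` bits set to 1.
_mask(bits) = bits >= 64 ? ~UInt64(0) : (UInt64(1) << bits) - UInt64(1)

# Extract `bits` bits starting at `shift` from a packed 64-bit word.
_get(pstruct::UInt64, shift, bits) = (pstruct >>> shift) & _mask(bits)

# Sign extension via a shift pair: move the field's sign bit up to bit 63,
# then arithmetic-shift it back down.
signext(v::UInt64, bits) = (v % Int64) << (64 - bits) >> (64 - bits)
```

For example, `_get(UInt64(0xf0), 4, 4)` extracts the second nibble, and `signext(UInt64(0xf), 4)` reads the 4-bit pattern `1111` as the signed value -1.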

To improve this, I have put all reflection work in a @pure function _fielddescr having only type parameters.
In theory, the compiler could recognize that any method of _fielddescr returns a constant value, and replace its body by that constant.

However, runtime performance improves only a little. Apparently, my logic is too complex for the compiler to replace _fielddescr calls by constants. Code:

Base.@pure function _fielddescr(::Type{PStruct{T}},::Val{s}) where {T<:NamedTuple,s} # s isa Symbol
    shift = 0
    types = T.parameters[2].parameters
    syms = T.parameters[1]
    idx = 1
    while idx <= length(syms)
        type = types[idx]
        bits = bitsizeof(type)
        if syms[idx]===s
            return type, shift, bits
        end
        shift += bits
        idx += 1
    end
end

@inline Base.@pure function getpropertyV2(x::PStruct{T},s::Symbol) where T<:NamedTuple
    type,shift,bits = _fielddescr(PStruct{T},Val(s))
    return _convert(type,_get(reinterpret(UInt64,x),shift,bits))
end

I wrote some simple benchmarks, calling both variants in a loop on a Vector{PStruct}. For comparison, I ran the same loop on a mostly equivalent struct, and I wrote getproperty variants for the concrete types and symbols, replacing the _fielddescr call by its return value.


@btime bench(sv): some work on an ordinary struct, in a loop on a Vector to get stable timings
  80.144 ns (0 allocations: 0 bytes)
@btime bench(psv): same work on PStruct having same fields as struct in preceding benchmark
  2.457 ms (412 allocations: 6.44 KiB)
@btime benchV2(psv): same work, but using getpropertyV2 instead of getproperty for PStruct field access
  304.299 µs (400 allocations: 12.50 KiB)
@btime benchV3(psv): same work, but handcoded getpropertyV3 replacing _fielddescr call by its result (simulated constant propagation)
  115.026 ns (0 allocations: 0 bytes)
@btime benchV4(psv): same work, but handcoded getpropertyV4 with resulting SHIFT and AND operation
  113.758 ns (0 allocations: 0 bytes)

Ideal solution: a trick to code _fielddescr in a way that the compiler replaces calls by their result, keeping clear, maintainable Julia source code.

Acceptable alternative: reformulation of getpropertyV2 (or _fielddescr) as a @generated function, where _fielddescr(PStruct{T},Val(s)) is called at compile time and its result is pasted into the generated code. However, writing that is beyond my current level of macro/Expr knowledge - maybe someone can help?
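A hedged sketch of that @generated approach follows; the names and helpers are illustrative stand-ins, not the package's actual API. The point is that the field-lookup loop runs once, at code-generation time, and only literal constants end up in the compiled method body.

```julia
# Illustrative helpers (assumptions, mirroring the snippets in this thread).
bitsizeof(::Type{T}) where T = sizeof(T) * 8
bitsizeof(::Type{Bool}) = 1

# Runs only at code-generation time, so its loop cost never appears at runtime.
function fielddescr(::Type{T}, s::Symbol) where T <: NamedTuple
    shift = 0
    syms = T.parameters[1]
    types = T.parameters[2].parameters
    for (sym, typ) in zip(syms, types)
        sym === s && return (typ, shift, bitsizeof(typ))
        shift += bitsizeof(typ)
    end
    throw(ArgumentError("no field $s"))
end

@generated function getdescr(::Type{T}, ::Val{s}) where {T <: NamedTuple, s}
    typ, shift, bits = fielddescr(T, s)
    return :( ($typ, $shift, $bits) )   # constants spliced into the generated body
end
```

With `NT = NamedTuple{(:a, :b), Tuple{Bool, UInt16}}`, the call `getdescr(NT, Val(:b))` returns `(UInt16, 1, 16)` without any runtime loop.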

A surprising observation in the benchmark results is the allocations: their count mostly matches the getproperty calls (400 per benchmark run). I do not see any allocation in the code (it does not use heap objects) - did I overlook something?

All code is available here: PackedStructs

PStruct and its functions are defined in src/PackedStructs.jl, test/basics.jl contains examples and benchmarks.


I am almost sure you cannot use @pure in such cases.

A pure function can only depend on immutable information. This also means a @pure function cannot use any global mutable state, including generic functions.

length is a generic function, as are getindex (which you use via [] indexing) and reinterpret.


Thanks for the hint. I will remove @pure annotations - it was a fruitless attempt to persuade compiler to do constant propagation, leaving them in the code could cause trouble in the long run.

@pure doc states “Double check for calls to generic functions.” - their use is discouraged but not strictly forbidden in a @pure function. I am pretty sure getindex, length and reinterpret are safe to use in my code sample, because I call them only with parameters of very basic system types.

My understanding of a pure function is “no side effects, result depends only on function parameters”. Though I expect it holds for my @pure annotated functions in normal use, I cannot guarantee it. If someone defines a type MyTypeChangingGlobalState and increments a global counter in every call of bitsizeof(MyTypeChangingGlobalState), it causes side effects, and if its return value is not constant, it breaks my code.

I found this discussion clarifying dangers and benefits of @pure.


Where did you get this information? The documentation of @pure (as of version 1.5.3) is very explicit against using any generic functions. The problem is not the parameters with which you call the functions, or what the functions themselves do, but the fact that all generic functions live in a global state table, and are therefore forbidden by the no-global-state rule.

The definition of what is @pure in Julia is much stricter than in any other language I know. In fact, at some point in the forums we entertained the joke idea of putting in the @pure documentation a list of one to three people (from the Julia committers) who were the only ones allowed to use it, because they are the only ones who really understood what @pure meant.

Why were you not able to follow the advice in the first post you linked?

“Double check for calls to generic functions.” is cited from the @pure doc here. Maybe it's a misunderstanding of mine: I interpreted it as “double check any use of generic functions in @pure annotated code, and be really really sure that their use is pure, i.e. does not alter global state and always returns the same for the same parameters”.

What exactly is “use of a generic function”, and how strict is @pure in julia? Consider

@pure f() = 1+1

Do you agree this is an incorrect use of @pure? The Julia doc states: “Every function in Julia is a generic function”. Base.:(+) is a Julia function.

Or is the use correct, because “1” is guaranteed to be an expression of type Int, and “1+1” compiles to a call of Base.:(+)(x::Int, y::Int) which is known to be a pure function?

In the linked post, a copy loop is to be optimized. For all keys of a NamedTuple, a property is copied. Optimized result is still a sequence of copy operations. In my case, I need to optimize away the whole loop (or recursion).

I adopted the tip of Tim Holy “You could try doing the recursion in the type domain, Tuple{:a, :b, :c} using Base.tuple_type_head and Base.tuple_type_tail.” to my case.

_fielddescr then becomes:

@inline function _fielddescr5(::Type{PStruct{T}},s::Symbol) where T <: NamedTuple
    _fielddescr5(Tuple{T.parameters[1]...}, T.parameters[2],s,0)
end

@inline function _fielddescr5(::Type{syms}, ::Type{types},s::Symbol,shift::Int) where {syms <: Tuple, types<:Tuple}
    syms===Tuple{} && throw(ArgumentError(string(s)))
    type = Base.tuple_type_head(types)
    if s===Base.tuple_type_head(syms)
        return type, shift, bitsizeof(type)
    end
    _fielddescr5(Base.tuple_type_tail(syms), Base.tuple_type_tail(types), s, shift+bitsizeof(type))
end

@inline function getpropertyV5(x::PStruct{T},s::Symbol) where T<:NamedTuple
    type,shift,bits = _fielddescr5(PStruct{T},s)
    return _convert(type,_get(reinterpret(UInt64,x),shift,bits))
end

Benchmark results show the recursion variant is quite good, but still about a factor of 500 slower than full constant propagation. I also tried a variant V6 with the symbol parameter wrapped in Val and @inline annotations; it did not matter.

@btime bench(sv): some work on an ordinary struct, in a loop on a Vector to get stable timings
  80.143 ns (0 allocations: 0 bytes)
@btime bench(psv): same work on PStruct having same fields as struct in preceding benchmark
  2.487 ms (408 allocations: 6.38 KiB)
@btime benchV2(psv): same work, but using getpropertyV2 instead of getproperty for PStruct field access
  428.500 μs (808 allocations: 18.88 KiB)
@btime benchV3(psv): same work, but handcoded getpropertyV3 replacing _fielddescr call by its result (simulated constant propagation)
  116.484 ns (0 allocations: 0 bytes)
@btime benchV4(psv): same work, but handcoded getpropertyV4 with resulting SHIFT and AND operation
  113.245 ns (0 allocations: 0 bytes)
@btime benchV5(psv): like V2, but recursive _fielddescr using Base.tuple_type_head and Base.tuple_type_tail in getpropertyV5
  69.999 μs (96 allocations: 1.50 KiB)
@btime benchV6(psv): like V5, but symbol wrapped in Val like in V3 and V4 and @inline annotations
  69.999 μs (96 allocations: 1.50 KiB)

Code including these variants is now in Github.

The same documentation says:

This also means a @pure function cannot use any global mutable state, including generic functions. Calls to generic functions depend on method tables which are mutable global state.

In other words, the problem is not that generic functions may access global state (e.g., if they access global variables), they are global state.
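The point that generic functions *are* global state can be made concrete: defining a new method mutates the function's global method table. A small sketch, using a throwaway type `MyNum` introduced only for illustration:

```julia
# A throwaway type used only to extend + without piracy.
struct MyNum
    x::Int
end

n_before = length(methods(+))            # size of +'s global method table
Base.:+(a::MyNum, b::MyNum) = MyNum(a.x + b.x)   # this mutates the table
n_after = length(methods(+))             # table grew by exactly one entry
```

Since any loaded package can grow (or, with piracy, replace entries in) this table, the result of dispatching through `+` depends on mutable global state.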

Yes. This is clearly incorrect use of @pure. What you are missing is that anyone can redefine Base.:(+)(x::Int, y::Int). A package can just change what the addition of Ints does in Julia when loaded, and Julia will smartly recompile all functions using Base.:(+)(x::Int, y::Int) to consider its new meaning. If you used @pure, Julia would not bother keeping a pointer to that function to remember to recompile it if + changes, and when the function is called after such a package is loaded, it will probably give the old and wrong result.

Hmmm, the docs fail at clarity on this point. The definition of generic used there seems to be that functions have methods and specializations and are, therefore, generic (or it is referring only to the methods defined by users). However, the distinction used by @pure is the distinction between functions that are printed as generic function or built-in function in the REPL. See:

julia> (+)
+ (generic function with 276 methods)

julia> isa
isa (built-in function)

Different from generic functions, built-in functions do not have the concept of methods (so you cannot extend the function with new methods or replace existing ones), so they are truly pure (in the @pure sense): they do not live in a global table that may be updated.
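The REPL printing above can also be checked programmatically: built-in functions are instances of `Core.Builtin`, while generic functions are not. A minimal sketch (`is_builtin` is an illustrative helper, not a Base API):

```julia
# Built-ins are instances of Core.Builtin and carry no method table to mutate;
# generic functions like + and length are not.
is_builtin(f) = f isa Core.Builtin
```

For example, `is_builtin(isa)` and `is_builtin(getfield)` are true, while `is_builtin(+)` and `is_builtin(length)` are false.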

Base.:(+)(x::Int, y::Int) is a method, not a function; the function is Base.:(+), and the body for a specific set of types is a method. Only a function can be pure, not a method, and as we have already gone through, Base.:(+) is not pure, as it is a generic function and can have its methods extended/replaced.


Thanks for all your elaborated explanations.

I have learned a lot about @pure. I understand using @pure induces potential risks far away from its own code, if any generic function is used. Your sketch of impacts on method tables is really interesting.

On the other hand, you cannot do much without any (generic) function. I did a quick scan in \julia-1.5.3\share\julia\base for @pure use:

Not surprisingly, all occurrences are incorrect uses: in every listed function, I found at least one use of a generic function (one of max, abs, setprecision, big, Base._nt_names, unsafe_string). Do you know of any correct use in Julia's standard library?

I am not sure what special case these functions in Base fall under that lets them use @pure; the documentation is clear that generic functions should not be used, and I have already queried people more involved in Julia development before, and they said more or less what I replicated here. I notice that many such functions work over types (or bits types like Int and Symbol encoded in type space), different from the f() you proposed (which had hardcoded literals), but I am not sure if this alone should allow for the use of @pure; or if the Julia developers thought “if someone ever redefines this, the whole thing will be broken anyway, so the benefit outweighs the costs”, or yet, “if someone ever changes the meaning of this function, we will surely want the old meaning, so the code is not wrong by not getting updated as it normally would”.

It pains me, but it seems my knowledge is insufficient here, so I will bother people more knowledgeable than myself: @JeffreySarnoff sorry to bother you, but can you have a look here? @rryi makes a good point that many uses of @pure in Base seem to be at odds with what the documentation advises (i.e., to avoid the use of generic functions).

I have removed the end of my last post, it was not constructive.

Type piracy is too serious for jokes like that (and do not execute my 1-liner overloading + - it is very good in crashing the julia instance and might harm your julia installation).

I really love julia for its openness - yes, you can redefine every function. Julia trusts its users and allows very dangerous things, in contrast e.g. to java (I have abandoned java when they barricaded access to sun.misc.unsafe). Users are considered mature and responsible by the language.

Type piracy introducing new methods for existing types of other packages is dangerous. Type piracy redefining methods in other packages breaks code.

Concerning safe use of @pure, I assumed calling methods in @pure functions is safe, as long as these methods are never redefined. My example was +(Int,Int) - a very basic method on which almost all packages rely, e.g. by use of iterators over Arrays. max and abs come next - also in extremely widespread use, with redefinition for primitive types very, very unlikely. I would agree that the other method calls found in @pure annotated Julia Base functions are also very unlikely to be redefined. If my assumption is correct, it justifies the use of @pure in Julia Base.

But it is very likely that method tables of these widespread functions like +, max, abs are changed by user packages. Does @pure induce risks only in type piracy cases, or also on “legal” enhancements? Some pseudocode to clarify:

module Base
    abstract type AT1 end
    const CT1 = someConcreteSubtypeOfAT1()
    f(x::T) where T <: AT1 = EXPR1
end

module MyModule
    @pure function g()
        local x::Base.CT1
        # ... uses Base.f(x) ...
    end
end

module ThirdParty
    abstract type AT2 end
    Base.f(x::T) where T<:AT2 = EXPR2
end

module TypePiracy
    Base.f(x::Base.CT1) = EXPR3
end

According to your argumentation, f gets (probably) ill-defined if module TypePiracy is loaded after MyModule: some code portions use the redefined method from TypePiracy, some the original Base version.

Loading module ThirdParty changes the method table of f: it adds new entries. The compiler's walk through the method table when looking for f(::CT1) might change, but the result of the lookup is unchanged. If @pure in MyModule caused keeping pointers to f(::CT1), they are still valid. No risk induced.

Loading module TypePiracy will replace an entry of f's method table. If @pure in MyModule caused keeping unsafe pointers to the old method from Base, they become invalid. If the Julia runtime decides to garbage collect the old method code, using such a pointer might crash the machine. High risk induced.

Is this the risk scenario of @pure? If all agree on it, we could refine the @pure doc and get rid of the current unsatisfactory situation, where the doc classifies almost any practical use of @pure as incorrect.

The confusion is not new.

Longer version: the sense of “purity” which is being used in Julia’s internals is much stronger than the intuitive notion of purity because Julia’s method tables are stateful, so almost any computation you can do is technically impure because it depends on the state of various method tables – e.g. for the + or * functions. These can be changed and indeed often are. Since there’s no way to seal these method tables (yet), there are very few situations in which no potential future method definition could change the result of a computation – you can redefine something as basic as integer addition, after all. (I don’t recommend it because your program will crash almost immediately, but you can do it.) I’ve proposed that we start referring to this extreme sense of purity as “hyperpure” or something like that, to avoid some of the confusion this terminology has been causing.

Just as a counterbalance, improper @pure annotations can introduce bugs. The optimizations it enables rely on an extremely strict definition of pure. It really should be named something like @hyperpure . Some of the restrictions include:

  • It must always return exactly (===) the same result for a given input. Watch out for mutable types. I think constant globals are okay, though.
  • The function it’s used on cannot be further extended by other methods after it gets called.
  • It cannot recurse.
  • It’s undocumented and not exported (for good reason), but this means the complete list of preconditions is really only in a few people’s heads.

@pure macro - #3 by mbauman
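The === requirement in the first bullet of that quoted list is stricter than ==, which is easy to miss. A tiny sketch: a function that allocates a fresh mutable object on every call gives ==-equal but never ===-identical results, so it cannot satisfy the @pure contract even though it looks "pure" informally.

```julia
# Returns a fresh mutable Vector on every call - informally "pure",
# but the results are distinct objects, so not ===-stable.
fresh() = [1, 2]

a = fresh()
b = fresh()
# a == b holds (same contents), a === b does not (different objects)
```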

Until the unexported @pure is renamed @hyperpure and a new, more well-groomed @pure is made … things are as they are.

At risk of being a bother:

  1. The @pure documentation focuses on the problem of having generic functions used inside the pure function (it does not mention that the pure function itself cannot be extended, as you pointed out, for example). As generic functions are often used inside @pure functions in Base, what is the rationale used?
    1. If the type of the arguments to the inner generic function is always the same, is it ok? (Additionally, the inner generic function has to follow the same === rule you presented for the @pure function?)
    2. The inner generic function must not be extended at all (after the use in @pure) risking undefined behaviour otherwise, or it can be extended for anything that will not change which is the selected method in the context of the @pure function?
    3. The problem is just that the inferred selected method will be fixed in the context of the outer @pure function (i.e., dynamic dispatch will not be dynamic anymore) and, if such staticity is desired, the inner generic function may be extended without causing any crashes? (i.e., the @pure function will just display what would be the “wrong” result for the dynamic dispatch paradigm but otherwise no crashes?)
  2. Can I just take these list items you pointed here and make a PR to Julia documentation? So next time this question appears we can just point to the docs?

re (2) of course (and that is the best way to get more expert input)


Until we have a reworked @pure doc, this thread might help people with questions around @pure.

I would like to rename the original post to “discussion on Base.@pure usage” and state that my initial question led to that discussion. Henrique_Becker, do you agree?


I will create the PR later today. I have some compromises that need my attention first.

Maybe I should try to resurrect my PR where I proposed to rename @pure to @unsafe_pure.

Should I?


fwiw since that thread, @hyperpure seems to have caught the imagination of the community in place of @unsafe_pure.

Maybe wait to see what comments the doc PR gets first, just so your PR would come back with the most current shared thought on the topic.

I don’t consider myself a really good coder so I’ve come up with some personal rules in an effort to not misuse it in public code.

  1. The first rule of @pure functions is there are no @pure functions
  2. Only use stuff you can be sure no one will add methods to, so just use built-in functions (e.g. Core.getfield).
  3. Only define one method of the function you make, and prefix it with “_” so people don’t decide to extend it.
  4. No recursion.

I’m not entirely sure, but the one exception to 2 might be iterating over tuples, because I’ve seen some methods in Base that use @pure to loop through all the elements of a tuple.

I created the PR (39954). If anyone involved in this discussion has a closer relationship with Core people who know how @pure works (like Jameson), and thinks they would like to contribute, it would be interesting to ping them.

Jameson is looking at your PR, and suggested improvements to the wording. I cannot edit your PR, however, I could paste a revision here, and you could update the PR. – If you prefer.

Yes, if it is convenient for you. I am not sure if I can give you permission to edit my PR. If I give you permission to edit my fork, will the changes appear in the PR? Do you have a workflow to suggest?