What does @pure do to a function? Is it faster in some cases?

# @pure macro

I got rid of the "First Steps" tag. This is not something that should be discussed as a first step into Julia. This is very much digging deeper.

An `@pure` function is a function whose output is completely determined by its input. This implies several things. For one, no globals are involved. Secondly, no pointers, since they are different in each session (this rules out arrays). So a function is essentially `@pure` if it maps symbols, numbers, and booleans (and other immutables, bitstypes, etc. which avoid pointers) to some output which is, or only contains, these things.
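As a minimal illustration of that definition (hypothetical function names, not from the thread): a function of plain numbers can satisfy it, while anything that allocates an array cannot, because each call returns a distinct object:

```
# Pure in this strict sense: the output is a bits value fully
# determined by the input.
pow2(n::Int) = 2^n

# Not pure in this sense: each call allocates a fresh Array, so two
# calls with the same input return equal but distinct (!==) objects.
make_pair(x::Int) = [x, x]

pow2(3)                        # always the same bits value
make_pair(1) == make_pair(1)   # equal values...
make_pair(1) === make_pair(1)  # ...but not the same object
```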

It can make things faster. More correctly, it can help with inference. For example:

```
immutable Discrete{apply_map,scale_by_time} end
Discrete(;apply_map=false,scale_by_time=false) = Discrete{apply_map,scale_by_time}()
```

In this case, since the booleans are runtime variables, the output type will not actually be inferrable without `@pure`, because the output type depends on the type parameters, and the type parameters depend on the runtime values of the variables `apply_map` and `scale_by_time`.

```
julia> @code_warntype Discrete()
Variables:
  #self#::Type{Discrete}
Body:
  begin
      return ((Core.apply_type)(Main.Discrete,false,false)::Type{_<:Discrete})()::Discrete{apply_map,scale_by_time}
  end::Discrete{apply_map,scale_by_time}
```

However, with `@pure` the compiler does something like compile a separate version for each of the input types, and it's then able to properly infer the output type:

```
immutable Discrete{apply_map,scale_by_time} end
Base.@pure Discrete(;apply_map=false,scale_by_time=false) = Discrete{apply_map,scale_by_time}()
julia> @code_warntype Discrete()
Variables:
  #self#::Type{Discrete}
Body:
  begin
      return $(QuoteNode(Discrete{false,false}()))
  end::Discrete{false,false}
```

So in some very specific cases, `Base.@pure` will help inference.

**mbauman**#3

Just as a counterbalance, improper `@pure` annotations can introduce bugs. The optimizations it enables rely on an *extremely* strict definition of pure. It really should be named something like `@hyperpure`. Some of the restrictions include:

- It must always return exactly (`===`) the same result for a given input. Watch out for mutable types. I think constant globals are okay, though.
- The function it's used on cannot be further extended by other methods after it gets called.
- It cannot recurse.
- It's undocumented and not exported (for good reason), which means the complete list of preconditions lives only in a few people's heads.
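A minimal sketch of how violating the first rule can bite (hypothetical names; `scale` is deliberately a non-const global, so the annotation below is a lie):

```
# `bad_scale` reads a non-const global, so its result is NOT determined
# by its argument alone. Marking it `@pure` anyway invites the compiler
# to constant-fold calls, so a later change to `scale` may be silently
# ignored by already-compiled callers.
scale = 2
Base.@pure bad_scale(x::Int) = scale * x

bad_scale(3)  # 6 in a fresh session; after `scale = 10`, some callers may still see 6
```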

**ExpandingMan**#4

This really should appear in the documentation (along with the appropriate warnings about use). I have seen it around, and never known what the hell it did.

@mbauman, about mutable types: it's okay to use mutable types in the intermediate computation if the output is immutable, right? The case I'm thinking of is building an `SVector` for the result, but using a `Vector` internally to build it. Since

```
using StaticArrays
a = @SArray [1,2,3]
b = @SArray [1,2,3]
a === b # true
```

it seems like it would work, but I just wanted to double check. If so, that makes it much easier to build type-inferrable `SArray`s in functions.

If that's the case, then I believe a function like this should be `Base.@pure`?

https://github.com/shivin9/PDEOperators.jl/blob/master/src/fornberg.jl#L34
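The pattern being asked about can be sketched without StaticArrays (hypothetical `squares` function, in current syntax; an `NTuple` stands in for the `SVector`): mutate a temporary `Vector`, return an immutable result, and equal results come out `===`, which is what strict purity demands of the output:

```
# Mutable scratch internally, immutable output: two calls with equal
# inputs return results that are `===`, since NTuple is a bits type
# compared by value (the `a === b` REPL check above shows the same
# holds for SVector).
function squares(::Val{N}) where {N}
    tmp = Vector{Int}(undef, N)         # mutable intermediate
    for i in 1:N
        tmp[i] = i^2
    end
    return ntuple(i -> tmp[i], Val(N))  # immutable result
end

squares(Val(3)) === squares(Val(3))  # true
```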

**iamed2**#6

I'm on a quest to understand `@pure`. I saw this, but then, looking through `AxisArrays`, I saw:

```
@pure samesym{n1,n2}(::Type{Axis{n1}}, ::Type{Axis{n2}}) = Val{n1==n2}()
samesym{n1,n2,T1,T2}(::Type{Axis{n1,T1}}, ::Type{Axis{n2,T2}}) = samesym(Axis{n1},Axis{n2})
samesym{n1,n2}(::Type{Axis{n1}}, ::Axis{n2}) = samesym(Axis{n1}, Axis{n2})
samesym{n1,n2}(::Axis{n1}, ::Type{Axis{n2}}) = samesym(Axis{n1}, Axis{n2})
samesym{n1,n2}(::Axis{n1}, ::Axis{n2}) = samesym(Axis{n1}, Axis{n2})
```

Here the function is extended, but none of the subsequent methods conflict with the `@pure` one. Is it more correct to say that the *method* it's used on cannot be overloaded?

**Elrod**#8

Okay – seeing this, I wanted to create a type-stable function for PCA using SizedArrays.

But it hasn't worked in a simple test case so far:

```
julia> using StaticArrays
julia> p = 30; n = 40;
julia> S = randn(n, p) |> x -> x' * x;
julia> eigS = eigfact(S);
julia> λs = SVector{p}(cumsum(eigS.values));
julia> λs /= λs[end];
julia> E = SizedArray{Tuple{p,p}}(eigS.vectors);
julia> Base.@pure deduce_rank(x::SVector{p,<:Real}, ::Type{Val{g}}) where {p,g} = Val{p-searchsortedlast(x, 1-g)}
deduce_rank (generic function with 1 method)
julia> function f(x::SizedArray{Tuple{p,p}}, ::Type{Val{q}}) where {p,q}
           SizedArray{Tuple{p,q}}(x[:,p-q+1:end])
       end
f (generic function with 1 method)
julia> function h(X::SizedArray, x::SVector,::Type{Val{g}}) where g
           v = deduce_rank(x, Val{g})
           f(X, v)
       end
h (generic function with 1 method)
julia> h(E, λs, Val{0.9});
julia> typeof(ans)
StaticArrays.SizedArray{Tuple{30,18},Float64,2,2}
julia> @code_warntype h(E, λs, Val{0.9})
Variables:
  #self#::#h
  X::StaticArrays.SizedArray{Tuple{30,30},Float64,2,2}
  x::SVector{30,Float64}
  #unused#::Any
  v::Type{Val{_}} where _
Body:
  begin
      $(Expr(:inbounds, false))
      # meta: location REPL[8] deduce_rank 1
      SSAValue(0) = $(Expr(:invoke, MethodInstance for searchsortedlast(::SVector{30,Float64}, ::Float64, ::Base.Order.ForwardOrdering), :(Base.Sort.searchsortedlast), :(x), :((Base.sub_float)((Base.sitofp)(Float64, 1)::Float64, 0.9)::Float64), :(Base.Sort.Forward)))
      # meta: pop location
      $(Expr(:inbounds, :pop))
      v::Type{Val{_}} where _ = (Core.apply_type)(Main.Val, (Base.sub_int)(30, SSAValue(0))::Int64)::Type{Val{_}} where _ # line 3:
      return (Main.f)(X::StaticArrays.SizedArray{Tuple{30,30},Float64,2,2}, v::Type{Val{_}} where _)::Any
  end::Any
```

Inference on `deduce_rank` failed.

Any suggestions?

I have a function that does something like this before doing >1,000 matrix operations with the result.

The cost of a dynamic dispatch is small in comparison to the benefit of using a sized array.

But it would be great to find a way to dodge the dynamic dispatch too.

StaticArrays may not be sufficiently pure?

EDIT: You (and dextorious) showed me how to do those matrix operations much more rapidly with BLAS for anything but smallish dimensions (for which there's no need for LDR).

So I don't need this, but it's still interesting.