Computing tuple replacements

I have some code to compute a tuple replacement

function set(tuple::Tuple{T1, T2}, t, i::I) where {T1, T2, I}
    if i == 1
        (t, tuple[2])
    elseif i == 2
        (tuple[1], t)
    end
end

function set(tuple::Tuple{T1, T2, T3}, t, i::I) where {T1, T2, T3, I}
    if i == 1
        (t, tuple[2], tuple[3])
    elseif i == 2
        (tuple[1], t, tuple[3])
    elseif i == 3
        (tuple[1], tuple[2], t)
    end
end

Is this as good as it gets (I’m considering performance, not beauty)?
As always: thanks!

What about:

julia> set(t::Tuple,i,val) = ntuple(j -> j == i ? val : t[j], length(t))
set (generic function with 1 method)

julia> tup = (1,2,3);

julia> set(tup,2,0)
(1, 0, 3)

julia> @btime set($tup,2,0)
  0.022 ns (0 allocations: 0 bytes)
(1, 0, 3)

I don’t think you need all those type annotations, because the function is specialized for each tuple type anyway.

edit: you can also use:

julia> using Setfield

julia> tup = (1,2,3)
(1, 2, 3)

julia> @set! tup[2] = 0
(1, 0, 3)

(I would not bet on any performance difference between any of these options, including yours.)


First test: I dropped your proposal into my test bed

function set(tuple::Tuple, t, i::I) where I
    ntuple(j -> j == i ? t : tuple[j], length(tuple))
end

Performance is approximately identical, and it is easier to extend: +1


The critical point for performance is having the length of the output tuple to be known at compile time. That is why ntuple is a good choice, and it works for input tuples or static arrays because the function will specialize to those, and their lengths are part of their types.
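As a sketch of that point (set2 is a hypothetical name, not from the thread): passing the length as Val(N) hands it to the compiler as a constant, so ntuple can fully unroll the construction.

```julia
# set2 is an illustrative name. Val(N) makes the output length a compile-time
# constant, so ntuple produces fully unrolled, allocation-free code.
set2(t::NTuple{N, Any}, v, i) where {N} = ntuple(j -> j == i ? v : t[j], Val(N))

set2((1, 2, 3), 0, 2)       # (1, 0, 3)
set2((1.0, "a", :b), 99, 3) # (1.0, "a", 99)
```

Since Tuple is covariant, NTuple{N, Any} matches heterogeneous tuples as well, so this works for mixed element types.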


Second test:

using Setfield  # for @set!

function _set(tuple::Tuple{T1, T2}, t, i::I) where {T1, T2, I}
    @set! tuple[i] = t
end

function _set(tuple::Tuple{T1, T2, T3}, t, i::I) where {T1, T2, T3, I}
    @set! tuple[i] = t
end

Performance is slightly worse. When I manually inline the functions to improve the effect of the macro, I run into some trouble. I will have to read up on Setfield.jl and investigate…

Thanks a lot!

You can also manually code a recursive solution:

function rset((x, ys...)::Tuple, t, i::Integer)
    if i == 1
        return (t, ys...)
    else
        return (x, rset(ys, t, i - one(i))...)
    end
end

In terms of performance, it looks like it’s the same as yours and const-folds if the arguments are known in advance.


This already exists in Base.

help?> Base.setindex
  setindex(c::Tuple, v, i::Integer)

  Creates a new tuple similar to x with the value at index i set to
  v. Throws a BoundsError when out of bounds.


  julia> Base.setindex((1, 2, 6), 2, 3) == (1, 2, 2)
  true


  setindex(nt::NamedTuple, val, key::Symbol)

  Constructs a new NamedTuple with the key key set to val. If key
  is already in the keys of nt, val replaces the old value.

  julia> nt = (a = 3,)
  (a = 3,)
  julia> Base.setindex(nt, 33, :b)
  (a = 3, b = 33)
  julia> Base.setindex(nt, 4, :a)
  (a = 4,)
  julia> Base.setindex(nt, "a", :a)
  (a = "a",)

And we can, of course, see how it was implemented:

It uses the ntuple approach, but with ifelse instead of a branch, probably to allow SIMD.
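Roughly (a sketch only, not the exact Base source):

```julia
# Sketch of the ifelse-based construction. ifelse evaluates both arguments and
# selects one, avoiding a per-element branch in the unrolled code, which keeps
# it SIMD-friendly.
setindex_sketch(t::Tuple, v, i::Integer) =
    ntuple(j -> ifelse(j == i, v, t[j]), length(t))

setindex_sketch((1, 2, 3), 0, 2)  # (1, 0, 3)
```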


Probably because the apparent “mutation” is somewhat misleading. The tuple is immutable, thus you are always building a new tuple, e.g.

new_tup = (new_val, old_tup[2], old_tup[3])

When using

tup = @set tup[1] = 0

or even

@set! tup[1] = 0

This last one is confusing if used inside a function: it does not mutate the original tuple.
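A plain-Julia sketch of why (try_set_first is just an illustrative name): rebinding a local variable never affects the caller’s binding.

```julia
# Tuples are immutable: "setting" an element always builds a new tuple.
# Rebinding the argument inside a function only changes the local binding,
# never the caller's.
function try_set_first(t)
    t = (0, Base.tail(t)...)  # builds a new tuple and rebinds the local t
    return t
end

tup = (1, 2, 3)
new_tup = try_set_first(tup)
# tup is still (1, 2, 3); new_tup is (0, 2, 3)
```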


For reliable high runtime performance with large-ish tuples, though, your best option is likely a generated function.

julia> @generated function set(t::Tuple{Vararg{Any, N}}, x, i) where {N}
           Expr(:tuple, (:(ifelse($j == i, x, t[$j])) for j in 1:N)...)
       end
set (generic function with 1 method)
julia> foreach(4:8:36) do n
           t = Ref(ntuple(_ -> rand(), n))
           @show n
           x = Ref(rand())
           i = n - 2
           ir = Ref(i)
           println("literal index")
           @btime set($t[], $x[], $i)
           @btime Base.setindex($t[], $x[], $i)
           println("Variable index")
           @btime set($t[], $x[], $ir[])
           @btime Base.setindex($t[], $x[], $ir[])
           println()
       end
n = 4
literal index
  3.099 ns (0 allocations: 0 bytes)
  4.379 ns (0 allocations: 0 bytes)
Variable index
  2.839 ns (0 allocations: 0 bytes)
  4.379 ns (0 allocations: 0 bytes)

n = 12
literal index
  4.123 ns (0 allocations: 0 bytes)
  6.435 ns (0 allocations: 0 bytes)
Variable index
  3.864 ns (0 allocations: 0 bytes)
  6.435 ns (0 allocations: 0 bytes)

n = 20
literal index
  11.039 ns (0 allocations: 0 bytes)
  13.342 ns (0 allocations: 0 bytes)
Variable index
  11.039 ns (0 allocations: 0 bytes)
  13.342 ns (0 allocations: 0 bytes)

n = 28
literal index
  13.092 ns (0 allocations: 0 bytes)
  15.394 ns (0 allocations: 0 bytes)
Variable index
  13.092 ns (0 allocations: 0 bytes)
  15.394 ns (0 allocations: 0 bytes)

n = 36
literal index
  15.906 ns (0 allocations: 0 bytes)
  54.089 μs (1510 allocations: 34.28 KiB)
Variable index
  15.906 ns (0 allocations: 0 bytes)
  55.389 μs (1510 allocations: 34.28 KiB)

Once a tuple hits the 32-element recursion limit, your only real hope is a generated function. It’ll save orders of magnitude of performance (though generated functions have other tradeoffs).


I briefly looked into Setfield.jl and realized that it does some funky Haskell-ish stuff. I imagine it is better suited to nested immutable data, which I currently do not have.


Thanks a lot, Mason.

For now I have settled on Base.setindex, which is ‘good enough’ and easier for me to manage.

On a related note: I noticed

  • tuple destructuring vs. tuple indexing has a subtle impact on performance (probably depending on the number of elements you are working with)
  • measuring small performance optimizations with @btime is difficult, due to around 5% variability between runs even if I call GC.gc() in between.

It is, though I would be surprised by any performance difference among these alternatives (except, of course, the generated function for large tuples; although in that case one should probably consider preallocated vectors instead). Note that benchmarking these functions can be tricky, because you have to prevent the compiler from constant-propagating or caching the values. That is why Mason used all those Ref values in the benchmark above.

julia> using BenchmarkTools, Setfield

julia> x = (1,2,3); xr = Ref(x);

julia> v = 0; vr = Ref(v);

julia> i = 2; ir = Ref(i);

julia> @btime Base.setindex($xr[],$vr[],$ir[])
  4.389 ns (0 allocations: 0 bytes)
(1, 0, 3)

julia> set(x,v,i) = @set! x[i] = v
set (generic function with 1 method)

julia> @btime set($xr[],$vr[],$ir[])
  4.352 ns (0 allocations: 0 bytes)
(1, 0, 3)


Addendum: unfortunately it seems that Base.setindex() is only type stable for homogeneous tuples. A counterexample would be

Base.setindex((1, Int8(2)), 3, 1)
@code_warntype Base.setindex((1, Int8(2)), 3, 1)


(3, 2)
  x::Tuple{Int64, Int8}

Body::Tuple{Int64, Union{Int64, Int8}}
1 ─       goto #8 if not $(Expr(:boundscheck))
2 ─ %2  = (1 <= i)::Bool
└──       goto #4 if not %2
3 ─ %4  = Base.length(x)::Core.Const(2)
│         (@_5 = i <= %4)
└──       goto #5
4 ─       (@_5 = false)
5 ┄ %8  = @_5::Bool
└──       goto #7 if not %8
6 ─       goto #8
7 ─ %11 = Base.BoundsError(x, i)::Any
└──       Base.throw(%11)
8 ┄       nothing
│   %14 = Core.tuple(v, i)::Tuple{Int64, Int64}
│   %15 = Core._apply_iterate(Base.iterate, Base._setindex, %14, x)::Tuple{Int64, Union{Int64, Int8}}
└──       return %15

I can’t assess whether this is expected behaviour or an issue.

This can’t possibly be type stable for a non-homogeneous tuple if the index isn’t known at compile time.

In your example, the index could be any number, so it can’t be known which type will end up where.
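One common workaround, sketched here with a hypothetical setval helper, is to lift the index into the type domain with Val, so inference sees it as a constant (mirroring the literal-index wrapper shown later in the thread):

```julia
# setval is a hypothetical helper: encoding the index as the type parameter of
# Val{i} makes it a compile-time constant, so Base.setindex can infer a
# concrete return type even for a heterogeneous tuple.
setval(t::Tuple, v, ::Val{i}) where {i} = Base.setindex(t, v, i)

setval((1, Int8(2)), 3, Val(1))  # (3, 2)
```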

But getproperty on structs works, AFAIU; isn’t that the same situation?

It works because the property is usually statically known.


julia> struct Foo
           x::Int64
           y::String
       end

julia> @code_warntype getproperty(Foo(1, "hi"), :y)
MethodInstance for getproperty(::Foo, ::Symbol)
  from getproperty(x, f::Symbol) in Base at Base.jl:42
Body::Union{Int64, String}
1 ─      nothing
│   %2 = Base.getfield(x, f)::Union{Int64, String}
└──      return %2


julia> get_y(f::Foo) = getproperty(f, :y)
get_y (generic function with 1 method)

julia> @code_warntype get_y(Foo(1, "Hi"))
MethodInstance for get_y(::Foo)
  from get_y(f::Foo) in Main at REPL[3]:1
Body::String
1 ─ %1 = Main.getproperty(f, :y)::String
└──      return %1

We can do the same thing with your setindex example.

julia> set1(t, x) = Base.setindex(t, x, 1)
set1 (generic function with 1 method)

julia> @code_warntype set1((1, Int8(2)), 3)
MethodInstance for set1(::Tuple{Int64, Int8}, ::Int64)
  from set1(t, x) in Main at REPL[7]:1
  t::Tuple{Int64, Int8}
Body::Tuple{Int64, Int8}
1 ─ %1 = Base.setindex::Core.Const(Base.setindex)
│   %2 = (%1)(t, x, 1)::Tuple{Int64, Int8}
└──      return %2

An important thing to understand here is that when you do

@code_warntype f(x, y)

that is just a nice syntax for actually writing

code_warntype(f, Tuple{typeof(x), typeof(y)})

That is, it only knows about the types in the function call, not the values. This is why I don’t really like @code_warntype and think it can be misleading (though it’s often useful).


Related discussion of @code_warntype here:
