Is there a way to "forward" getindex for Tuples in a type-stable way?


I have some data structure which has a field which is a Tuple, and I want to forward getindex called on my data structure to this tuple, while maintaining type stability. From what I can tell, there is some magic happening somewhere that makes tuple indexing type-stable in the first place, which isn’t happening in my case. Here’s what I mean:

immutable TupleWrapper{A,B}
    data :: Tuple{A,B}
end

import Base: getindex
getindex(t::TupleWrapper, i) = getindex(t.data, i)

using Base.Test
foo(x) = x[1]
@inferred foo((1.,2)) # is OK
@inferred foo(TupleWrapper((1.,2)))
> ERROR: return type Float64 does not match inferred return type Union{Float64,Int64}
 in error(::String) at ./error.jl:21

Is there any way to do this?


Don’t use getindex with runtime integers i (x[i]) if you want it to be type-stable. Instead, you need all of those integers to be known at compile time. So

for i = 1:3
    x[i]
end

are not type-stable calls for an inhomogeneous tuple, while

x[1]; x[2]; x[3]

is. There are two ways to handle this. One is metaprogramming: generated functions work nicely. But another way is recursion. I’m thinking of writing a blog post describing it in full, but an example of how to do this is found here:

In this example, apply_discrete_callback!(integrator,callback) needs to be called on every callback in a tuple (and each callback is differently typed). So the basic call is:


This then gets “caught” by function headers like


where callback is the first callback in the list and args is the rest. You can then recursively do something on callback and pass args into apply_discrete_callback!, and it’ll get smaller and smaller. Handle the base cases and you’re good: no getindex was ever needed.

Then, for this code, the compiler will inline this into one giant call. It essentially makes it amount to:


knowing what the index is at compile time, and thus able to infer each call and be type-stable. The only cost here is compile time, but if you’re keeping the tuples around and repeatedly calling this, it’ll only have to compile this full recursive routine once for the tuple.
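A minimal sketch of that recursive pattern (function names here are hypothetical stand-ins, not the actual DiffEq code):

```julia
# Hypothetical stand-ins for differently-typed callbacks.
process_one(x::Int) = Float64(x)
process_one(x::Float64) = 2x

# Base case: empty argument list, recursion stops.
process_all() = 0.0

# Recursive case: peel off the first element, recurse on the rest.
# Each call sees concrete argument types, so the whole chain is inferable.
process_all(x, rest...) = process_one(x) + process_all(rest...)

t = (1, 2.0, 3)
process_all(t...)  # the compiler unrolls this into three concrete calls
```

Because the recursion depth equals the tuple length, the compiler can fully unroll it, which is the “one giant call” effect described above.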


Thanks for the reply. Yea, I am definitely not trying to do this with run-time integers, where I’d expect it to be hopeless. But as you see above, while x[1]; x[2]; x[3] is type-stable for an inhomogeneous Tuple, it’s not type-stable for my TupleWrapper, even though I’m just forwarding to the underlying Tuple function. This is surely because by the time i makes it to my getindex function, it’s a run-time integer, but I’m wondering if there’s a way to preserve the functionality normal Tuples have with explicit constant indexing like [1], [2], etc.?

Your recursive example is nice for the case where you want to loop over everything (here I just want to index certain things by hand). A blog post about it could be nice; I think it’s similar to the way broadcast is kept type-stable, and definitely reading that code was a big “aha” moment for me.


i can’t try now, but maybe it works:

getindex{N}(t::TupleWrapper, ::Val{N}) = t.data[N]
@inline getindex(t::TupleWrapper, n) = getindex(t, Val{n}())


Oh I see. My bad! I read the problem wrong. I saw a runtime value used for tuples (in the getindex function) and a type-stability issue and took off on that :slight_smile:. But yeah…

Looks like that works by inlining a getfield call? I’m not sure why your getindex wouldn’t work if theirs does, unless it needs another compiler pass or something. I tried adding @inline to yours to see if it would do anything… nope.

That infers Any.


If I understand correctly, inlining can never help with type stability, because it’s done after the type inference step. It’s a significant issue for my code, too… I don’t think there’s a solution to your problem, except to use Val{i} for the index. In general you could use a macro to avoid the problem, but here it doesn’t work because you need dispatch.


Yep, this one comes up a fair bit in my work. The current solution is a macro or a Val or your own defined indexing type. Recursion and generated functions work well with map and reduce-style functions.

To me, it seems a bit of a pity that inlining happens strictly after type inference. My understanding is that it wouldn’t be trivial to implement inlining as a part of the inference loop and have it be efficient (compilation would become slower). OTOH I’m not aware of any other language that implements interprocedural constant propagation, so perhaps that is fair.

But some solution to this problem would open up APIs currently not possible for these kind of containers.



I am having exactly the same problem. I want to index a container of a tuple in a type-stable way. There have been many suggestions here on how to do this, but I am a beginner.

Is there a way I can get some info on how to do this "with a macro or with Val" ?

I have tried the

getindex{N}(t::TupleWrapper, ::Val{N}) = t.data[N]
@inline getindex(t::TupleWrapper, n) = getindex(t, Val{n}())

approach but it doesn’t work. So now I am looking at doing it with a generated function @generated function getmyfield. Is there any tutorial on that?

I tried the extremely naive:

@generated function getobstacle(BT::BilliardTable{T, S}, i::Int) where {T,S}
    return :(BT.bt[i])
end
but it didn’t work! I would appreciate some hints in the right direction.

getindex{N}(t::TupleWrapper, ::Val{N}) = t.data[N]
@inline getindex(t::TupleWrapper, n) = getindex(t, Val{n}())

The first definition is right, but the second isn’t type-stable. You have to use mywrapper[Val{N}] in every call. See
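For concreteness, here is a hedged sketch of that Val-based forwarding in current struct/where syntax (TupleWrapper stands in for your own container type):

```julia
struct TupleWrapper{T<:Tuple}
    data::T
end

# The index lives in the type domain, so each call site with a literal
# index compiles down to a direct, concretely-typed tuple access.
Base.getindex(t::TupleWrapper, ::Val{N}) where {N} = t.data[N]

w = TupleWrapper((1.0, 2))
w[Val(1)]  # 1.0, inferred as Float64
w[Val(2)]  # 2, inferred as Int
```

The crucial point is that `Val(1)` is written literally at the call site; wrapping a runtime `i` in `Val(i)` first would just move the instability one call earlier.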


Thanks for the answer. I have done the following, which works so far, after many different combinations of tests:

immutable BilliardTable{T, BT<:Tuple}
    bt::BT
end

ObstInd(i::Int) = SVector{1, Val{i}}(Val(i))

getobstacle(bt::BilliardTable{T,S}, ::Val{N}) where {T,S,N} = bt.bt[N]
getobstacle(bt::BilliardTable{T,S}, ::SVector{1, Val{N}}) where {T,S,N} = bt.bt[N]

Both of the above getobstacle functions pass the @inferred test. In the second case I have to explicitly first construct an SVector by calling ObstInd(3) and then call getobstacle.

Now, when I do:
getobstacle(bt::BilliardTable{T,S}, N::Int) where {T,S} = getobstacle(bt, ObstInd(N))
it doesn’t infer anymore.

I am guessing that this is what you meant by passing mywrapper in every call. Is there any super-efficient wrapper I can create myself? Thus far I am using an SVector{1}, but maybe it can be done better?

Thanks for the help though!


Also, how can this be done with generated functions? Is there any code existing that already does it and I can look at? By myself I didn’t have much progress.
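A hedged sketch of the generated-function route (names here are hypothetical): the key is that the index must already be in the type domain, e.g. via Val, for the generated body to see it as a constant:

```julia
struct Wrap{T<:Tuple}
    data::T
end

@generated function getelem(w::Wrap, ::Val{N}) where {N}
    # Inside a generated function, N is a compile-time constant,
    # so we can splice the literal index into the returned expression.
    return :(w.data[$N])
end

w = Wrap((1.0, "two", 3))
getelem(w, Val(2))  # "two", with a concrete inferred type
```

Note that a generated function taking a plain i::Int cannot help here: the generated body only sees the type Int, never the value of i, which is why the naive attempt above fails.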


A type-stable function has, for each set of argument types, a single concrete return type. For example, sin(::Int) always returns a Float64, and so it is type-stable. Your ObstInd(i::Int) function cannot be type-stable, because depending on the value of i, it can return an object of type SVector{1, Val{1}}, SVector{1, Val{2}}, etc. You could fix this by rewriting ObstInd as ObstInd{N}(::Val{N}) = ...
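A small illustration of that distinction (with hypothetical function names):

```julia
# Return type depends on the *value* of i: inference can only conclude Val.
unstable(i::Int) = Val(i)

# Return type is fully determined by the *type* of the argument: stable.
stable(::Val{N}) where {N} = Val(N)

typeof(unstable(1))  # Val{1}
typeof(unstable(2))  # Val{2}: a different concrete type for each input value
```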


The function: ObstInd(::Val{N}) where {N} = SVector{1, Val{N}}(Val(N)) passes the @inferred test.
However, using:

getobstacle(bt::BilliardTable{T,S}, N::Int) where {T,S} = 
getobstacle(bt, ObstInd(Val(N)))

fails. The inferred type is Any.

EDIT: I can see why this last function getobstacle(bt, N::Int) is type unstable!
Is there any way to fix it / make it work?


You have to pass Val{N} and ObstInd{Val{N}} objects everywhere (*) instead of Ints.

(*) everywhere that’s performance-sensitive.


Great. I have done it, and now my huge function that indexes that Billiard thing is type stable.

However, it is 1000 times slower than simply using a Vector (instead of a wrapper around a Tuple), even though the vector case has major inference issues.

Is this because of using Val{i} in all of the innermost loops of a big (and expensive) function?


even though the vector case has major inference issues.

What are those? What’s in your vector/tuple?


My basic type is Obstacle{T} where {T<:AbstractFloat}. This is abstract.

My vector is Vector{Obstacle{T}} and it contains numerous concrete objects of types Wall{T} and Disk{T}. I am constantly calling functions like:

collisiontime(bt[i])

etc. for all i in eachindex(bt), with bt being this Vector{Obstacle{T}}. All of these functions operate on the concrete types Wall and Disk; each one is strictly typed.

Of course bt[i] is not type stable and so I have the inference issues here. I get around most of that, by declaring the local variables in my functions. E.g. in my code I write:
tmin::T = collisiontime(bt[i]) and similarly everywhere else.

For the change to “tuples” I created a wrapper

immutable BilliardTable{T, BT<:Tuple}
    bt::BT
end

getobstacle(bt::BilliardTable{T,S}, ::Val{N}) where {T,S,N} = bt.bt[N]

I create a tuple out of all the elements of the previous vector bt and I call that BT. Now,
instead of having the non-inferable bt[i], I replace this everywhere with getobstacle(BT, Val(i)). Pretty much everything else is left untouched.

When I benchmark the big function that encloses all these smaller functions, it is 1000 times slower when using BT instead of bt.


Are you only making i constants? This is about constants, not loops or anything like that. Of course when it’s not constant it won’t be type-inferrable.


Oh damn it! No, it’s not a constant; it is the variable of a for loop.

So is what I am trying to do not possible? To access a container that has more than one concrete type in a type-stable way, while iterating through the container in a loop?


It’s not directly possible, since there’s no way for the compiler to know the type of each element if you’re just iterating through them in a loop. But there are a lot of other options to make this work:

(this isn’t specific to having a wrapper around a tuple; you’ll have the same issues with regular Julia tuples too). All of these make it possible to call a function on a mixture of types in an inferable way, without necessarily implementing a type-stable getindex().
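For example, map over a tuple is implemented recursively in Base, so applying a function across a mixed-type tuple is inferable without any integer getindex (the elements below are stand-ins for Wall/Disk objects):

```julia
obstacles = (1, 2.0, 3//1)           # heterogeneous tuple
results = map(x -> x + 1, obstacles)
# results is itself a concretely-typed tuple: (2, 3.0, 4//1)
```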

Another option would be to store FunctionWrappers instead of the objects themselves in order to create a concretely-typed collection.
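A hedged sketch of the FunctionWrappers.jl idea (assuming the package is installed): every wrapper with the same signature has the same concrete type regardless of which function it wraps, so you get a homogeneous, type-stable collection:

```julia
using FunctionWrappers: FunctionWrapper

# All elements share the concrete type FunctionWrapper{Float64, Tuple{Float64}},
# so indexing this Vector is type-stable even with a runtime i.
fs = [FunctionWrapper{Float64, Tuple{Float64}}(f) for f in (sin, cos, sqrt)]

[f(1.0) for f in fs]  # each call is known to return a Float64
```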

Type instability if I map over array of functions