Getindex, to_indices, and CartesianIndex type inference failure

I’m defining a multi-dimensional Hankel array, as follows:

struct Hankel{T,N,V<:AbstractArray{T},D<:Tuple{Vararg{Int}}} <: AbstractArray{T,N}
    v::V
    d::D
    function Hankel(v::AbstractArray, d::Tuple{Vararg{Int}})
        Δ = length(d)
        @assert Δ ≤ ndims(v)
        @assert all(0 .≤ d .≤ size(v)[1:Δ])
        return new{eltype(v), Δ + ndims(v), typeof(v), typeof(d)}(v, d)
    end
end

Hankel(v::AbstractArray, d::Int...) = Hankel(v, d)

Base.IndexStyle(::Type{<:Hankel}) = IndexCartesian()

function Base.size(A::Hankel)
    Δ = length(A.d)
    return (A.d..., (size(A.v)[1:Δ] .- A.d .+ 1)..., size(A.v)[(Δ + 1):end]...)
end

@inline function Base.getindex(A::Hankel{T,N}, I::Vararg{Int,N}) where {T,N}
    @boundscheck checkbounds(A, I...)
    Δ = length(A.d)
    j, k, b = I[1:Δ], I[(Δ + 1):(2Δ)], I[(2Δ + 1):end]
    return A.v[(j .+ k .- 1)..., b...]
end
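
For reference, a quick sanity check of the indexing rule with some made-up sizes (my own toy example): A[j..., k..., b...] should equal v[(j .+ k .- 1)..., b...].

v = randn(5, 8, 7);   # two "Hankel" dimensions plus one trailing batch dimension
A = Hankel(v, 3, 4);  # size(A) == (3, 4, 3, 5, 7)
A[2, 3, 1, 2, 6] == v[2 + 1 - 1, 3 + 2 - 1, 6]  # true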

This seems to work fine. However, I'm hitting a weird type-instability issue when I try to index it with CartesianIndexes.

using Test
N = (5,8,4); d = (3,4,2); B = (7,9)
v = randn(N..., B...);
A = @inferred Hankel(v, d) # this is fine
j = rand(CartesianIndices(d));
k = rand(CartesianIndices(N .- d .+ 1));
b = rand(CartesianIndices(B));
@inferred A[j,k,b] # fails

Can someone help me diagnose this?

Actually the issue seems to originate in Base.to_indices.

julia> @code_warntype A[j,k,b]
MethodInstance for getindex(::Main.Hankel{Float64, 8, Array{Float64, 5}, Tuple{Int64, Int64, Int64}}, ::CartesianIndex{3}, ::CartesianIndex{3}, ::CartesianIndex{2})
  from getindex(A::AbstractArray, I...) in Base at abstractarray.jl:1215
Arguments
  #self#::Core.Const(getindex)
  A::Main.Hankel{Float64, 8, Array{Float64, 5}, Tuple{Int64, Int64, Int64}}
  I::Tuple{CartesianIndex{3}, CartesianIndex{3}, CartesianIndex{2}}
Body::Any
1 ─      nothing
│   %2 = Base.IndexStyle(A)::Core.Const(IndexCartesian())
│   %3 = Core.tuple(%2, A)::Tuple{IndexCartesian, Main.Hankel{Float64, 8, Array{Float64, 5}, Tuple{Int64, Int64, Int64}}}
│        Core._apply_iterate(Base.iterate, Base.error_if_canonical_getindex, %3, I)
│   %5 = Base.IndexStyle(A)::Core.Const(IndexCartesian())
│   %6 = Core.tuple(%5, A)::Tuple{IndexCartesian, Main.Hankel{Float64, 8, Array{Float64, 5}, Tuple{Int64, Int64, Int64}}}
│   %7 = Base.to_indices(A, I)::Tuple{Int64, Int64, Int64, Int64, Vararg{Any}}
│   %8 = Core._apply_iterate(Base.iterate, Base._getindex, %6, %7)::Any
└──      return %8

But there is no mention of implementing to_indices for custom arrays in Interfaces · The Julia Language.

In this case it seems type inference is just giving up.
In fact, Base.to_indices(A, (j, k, b)) just calls Base.to_indices(A, (), (j, k, b)).

Yet Base.to_indices(A, (j, k, b)) does not infer, while Base.to_indices(A, (), (j, k, b)) does.
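
To make that concrete (same A, j, k, b as above):

@inferred Base.to_indices(A, (j, k, b))      # fails to infer
@inferred Base.to_indices(A, (), (j, k, b))  # infers fine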

I think I just need to add some method here to short-circuit this chain of calls and make type inference's job easier?
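
Something like the following is what I have in mind — just a sketch of a possible workaround, not a vetted fix: handle CartesianIndex arguments in a dedicated getindex method that goes through the three-argument to_indices (which does infer) and then forwards plain Ints to the method defined above.

@inline function Base.getindex(A::Hankel, I::CartesianIndex...)
    # Expand the CartesianIndexes into a tuple of Ints via the 3-arg
    # to_indices, then re-index with plain integers.
    return A[Base.to_indices(A, (), I)...]
end

(That would only help for my own type, of course.)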

Here is a simplified version that reproduces the problem with a plain Array, without my custom type:

sz1 = (3,2)
sz2 = (5,5)
sz3 = (3,2)
sz4 = (2,)
A = randn(sz1..., sz2..., sz3..., sz4...);
i1 = rand(CartesianIndices(sz1))
i2 = rand(CartesianIndices(sz2))
i3 = rand(CartesianIndices(sz3))
i4 = rand(CartesianIndices(sz4))
using Test
@inferred A[i1,i2,i3,i4] # fails
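
For comparison, splatting the CartesianIndexes into plain integers by hand, e.g.

@inferred A[Tuple(i1)..., Tuple(i2)..., Tuple(i3)..., Tuple(i4)...]

should go straight to integer indexing and sidestep the to_indices chain entirely.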

I opened an issue: https://github.com/JuliaLang/julia/issues/44059