Inferable unpacking of Tuple type to tuple of types

In one of my packages, I used the following paradigm to unpack a Tuple{T1,T2,...} into a tuple of its element types:

unpack(::Type{Tuple{}}) = ()
unpack(::Type{T}) where {T<:Tuple} = (Base.tuple_type_head(T), unpack(Base.tuple_type_tail(T))...)

which I could then use to compute some property thereof; the end result was completely inferred up to Julia 1.6.x. As an example (not the actual code that I want to run):

function test(x::Tuple)
    types = unpack(typeof(x))
    return promote_type(types...)
end

@code_warntype test((1,2.,3//4))
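A more mechanical way to check this than eyeballing @code_warntype output is Test.@inferred, which throws if the call's return type is not fully inferred. A minimal sketch, assuming the unpack definitions from above (with the tuple_type_tail typo fixed); whether the @inferred call succeeds is exactly the version-dependent behaviour being discussed, so no outcome is asserted for it here:

```julia
using Test

# Recursive unpacking of a Tuple type into a tuple of its element types
unpack(::Type{Tuple{}}) = ()
unpack(::Type{T}) where {T<:Tuple} =
    (Base.tuple_type_head(T), unpack(Base.tuple_type_tail(T))...)

# The value-level result is the same on every version:
unpack(Tuple{Int,Float64,Rational{Int}})

# @inferred returns the result unchanged when the return type is fully
# inferred, and throws an ErrorException otherwise:
Test.@inferred unpack(Tuple{Int,Float64,Rational{Int}})
```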

This worked fine on all recent Julia versions, but with Julia 1.7.DEV the result is no longer inferred. Adding @inline to the unpack definition does not seem to help. The only thing that does help is adding Base.@pure in front of the unpack definition. However, given all the warnings regarding the use of @pure, I had actually removed all my uses of it and was happy to see that things worked without it.

Is this a valid use case for @pure? Or a bug that it is no longer inferred? Or is there a better paradigm for doing something like this altogether?

You can perhaps use a generated function?
Edit: Apparently the function below constant folds even without the @generated.

julia> @generated unpack(::Type{T}) where {T <: Tuple} = Tuple(T.parameters)
unpack (generic function with 1 method)

julia> @code_native unpack(typeof((1, 1.0, 3//4)))
	.section	__TEXT,__text,regular,pure_instructions
; ┌ @ REPL[9]:1 within `unpack`
	movq	%rdi, %rax
	movabsq	$4542686688, %rcx               ## imm = 0x10EC3E5E0
	movq	(%rcx), %rcx
	movq	%rcx, 16(%rdi)
	movabsq	$4542686672, %rcx               ## imm = 0x10EC3E5D0
	vmovups	(%rcx), %xmm0
	vmovups	%xmm0, (%rdi)
	nopw	(%rax,%rax)
; └
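For reference, the value-level behaviour of this definition (independent of whether it constant folds) is just to return the tuple of element types:

```julia
# Non-recursive unpacking: T.parameters is the svec of a Tuple type's
# element types, and Tuple(...) converts it to an ordinary tuple.
unpack(::Type{T}) where {T <: Tuple} = Tuple(T.parameters)

unpack(typeof((1, 1.0, 3//4)))  # (Int64, Float64, Rational{Int64})
```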

Yes, I guess @generated would be another solution. I used to be a big fan of @generated functions, but because of the warnings about their limitations I started to shy away from them in favour of recursive definitions and tuple manipulations like in my unpack implementation. So indeed, I would like to know what the “recommended” approach is.
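For completeness: Base.fieldtypes (exported since Julia 1.1) already performs exactly this unpacking for Tuple types, so it may be the most idiomatic spelling; I have not checked whether it constant folds on 1.7-DEV, so treat this as a sketch:

```julia
# For a Tuple type, the declared field types are exactly its element types
# (Int prints as Int64 on a 64-bit machine).
fieldtypes(Tuple{Int,Float64,Rational{Int}})  # (Int64, Float64, Rational{Int64})
```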