Value type dispatch on constants

This is best explained by an example.

type Test
  x::Vector{Int}
end

import Base: getindex
getindex(s::Test,sym::Symbol) = getindex(s,Val{sym})
getindex(s::Test,::Type{Val{:test_sym}}) = reshape(view(s.x,1:9),3,3)

function test_dispatch(s)
  s[:test_sym]
end

function test_dispatch2(s)
  s[Val{:test_sym}]
end

s = Test(collect(1:10))
@code_llvm test_dispatch(s)
@code_llvm test_dispatch2(s)

The first gives

; Function Attrs: uwtable
define %jl_value_t* @julia_test_dispatch_63361(%jl_value_t*) #0 {
  %1 = call %jl_value_t*** @jl_get_ptls_states() #4
  %2 = alloca [5 x %jl_value_t*], align 8
  %.sub = getelementptr inbounds [5 x %jl_value_t*], [5 x %jl_value_t*]* %2, i64 0, i64 0
  %3 = getelementptr [5 x %jl_value_t*], [5 x %jl_value_t*]* %2, i64 0, i64 2
  %4 = bitcast %jl_value_t** %3 to i8*
  call void @llvm.memset.p0i8.i32(i8* %4, i8 0, i32 24, i32 8, i1 false)
  %5 = bitcast [5 x %jl_value_t*]* %2 to i64*
  store i64 6, i64* %5, align 8
  %6 = getelementptr [5 x %jl_value_t*], [5 x %jl_value_t*]* %2, i64 0, i64 1
  %7 = bitcast %jl_value_t*** %1 to i64*
  %8 = load i64, i64* %7, align 8
  %9 = bitcast %jl_value_t** %6 to i64*
  store i64 %8, i64* %9, align 8
  store %jl_value_t** %.sub, %jl_value_t*** %1, align 8
  %10 = getelementptr [5 x %jl_value_t*], [5 x %jl_value_t*]* %2, i64 0, i64 4
  %11 = getelementptr [5 x %jl_value_t*], [5 x %jl_value_t*]* %2, i64 0, i64 3
  store %jl_value_t* inttoptr (i64 2148817472 to %jl_value_t*), %jl_value_t** %3, align 8
  store %jl_value_t* %0, %jl_value_t** %11, align 8
  store %jl_value_t* inttoptr (i64 2240320304 to %jl_value_t*), %jl_value_t** %10, align 8
  %12 = call %jl_value_t* @jl_apply_generic(%jl_value_t** %3, i32 3)
  %13 = load i64, i64* %9, align 8
  store i64 %13, i64* %7, align 8
  ret %jl_value_t* %12
}
while the second gives

; Function Attrs: uwtable
define %jl_value_t* @julia_test_dispatch2_63423(%jl_value_t*) #0 {
  %1 = call %jl_value_t* @julia_getindex_63362(%jl_value_t* %0, %jl_value_t* inttoptr (i64 2240320304 to %jl_value_t*)) #1
  ret %jl_value_t* %1
}
I would’ve thought the compiler could infer the value type, since the value is a constant, but from this output it’s clear that the call results in dynamic dispatch unless you write the Val type with the constant explicitly.

Is there any fundamental reason why the compiler cannot infer this? It seems it should be able to when the value is a constant. A quick macro would fix this, but it would be nice to have it “just work”.
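For concreteness, the “quick macro” workaround could look something like the following sketch (the macro name @val_getindex is made up for illustration; it simply rewrites a literal symbol into its Val type at parse time):

```julia
# Hypothetical macro: rewrites s[:sym] into s[Val{:sym}], so the value
# type is fixed at macro-expansion time and no dynamic dispatch is needed.
macro val_getindex(s, sym)
    sym isa QuoteNode || error("expected a literal symbol, e.g. :test_sym")
    :( $(esc(s))[Val{$(sym)}] )
end

# Usage: @val_getindex(s, :test_sym) expands to s[Val{:test_sym}],
# i.e. exactly what test_dispatch2 writes by hand.
```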

No fundamental reason IMO. Currently, type inference and constant propagation run to convergence, and then inlining is performed as a post-step.

The consequence is that constants don’t propagate into other functions (mostly). So that’s a simple rule to learn as a user, at least. But with more compiler engineering effort, and potentially slower compilation, we could have inlining recurse with type inference and constant propagation to convergence. Keep in mind that these kinds of interprocedural optimisations are rare in other programming languages.
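The boundary described above can be seen in a smaller example (a sketch, assuming the Julia version discussed in this thread; f and g are illustrative names):

```julia
# The return type of f depends on the *value* of its argument:
f(x::Bool) = x ? 1 : 2.0

# The literal `true` is a constant inside g, but since constants do not
# propagate across the call boundary before inlining, f is inferred for
# a generic Bool and g's return type is the union Union{Int64,Float64}
# rather than Int64.
g() = f(true)
```

This is the same situation as test_dispatch above: the Symbol is a constant at the call site, but inference doesn’t carry that fact into getindex.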

The “mostly” caveat is that the “codegen” step of translating to LLVM might make use of constants, and LLVM optimisations themselves can kick in. For example, true/false can filter into inlined functions via ifelse(). Or LLVM will unroll certain loops in particular circumstances. None of this is possible without type stability.
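The ifelse() point can be illustrated like this (a sketch; pick and h are made-up names):

```julia
# ifelse evaluates both branches and involves no control flow, so once
# pick is inlined into h, LLVM can constant-fold the condition away and
# (in favourable cases) reduce the whole body to returning 1 directly.
pick(c::Bool) = ifelse(c, 1, 2)
h() = pick(true)
```

Note this relies on pick being type stable: both branches return Int, so the constant folding happens purely at the LLVM level.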

You have good reason to have expected this to work: in similarly structured code, the right thing does happen without any special attention. If it’s an issue for you, post it; maybe others see it the same way.

Yeah, it sounds like it would be good to open an issue for this. I wanted to check if it was reasonable first since I’m not too familiar with compiler details and was just going on intuition that it “should work”.