The second one is a function call with 10^7 arguments, while the first one is a function call with a single argument. I don’t know exactly how the compiler deals with the second case, but it’s intuitively clear that it’s much harder to optimize, no?
The general rule of thumb for splatting is: don’t use it if the length of the thing you’re splatting isn’t known at compile time, i.e. the compiler can’t see it (as is the case for Vector). That’s the reason we have both min and minimum, max and maximum, and so on.
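A minimal sketch of the difference, assuming a plain `Vector` `v`:

```julia
v = rand(1000)

# One call with a single argument: the call signature is the same
# no matter what length(v) is.
a = minimum(v)

# Splatting: same result, but the call now has length(v) arguments,
# and that number is unknown until runtime.
b = min(v...)

a == b  # true
```

For small containers the splatted form works fine; it's only when the argument count grows (or isn't inferable) that it becomes a problem.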
I assume the compiler can still kind of deal with all those arguments because it knows they’re all of the same type, as in Vararg{Int}. But I assume that if that pattern were broken (say, Float64 and Int interleaved for 10^7 arguments), it would be worse.
What if you do know at compile time that the length is 10^7? Suppose it’s a static range. Wouldn’t it still be a bad idea to try to compile a method with that many arguments?
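For contrast, here is a small sketch of the case where the length *is* known to the compiler, because it is encoded in the type (an NTuple):

```julia
t = (4, 2, 7)  # NTuple{3, Int}: the length 3 is part of the type

# Splatting here is fine: the compiler specializes min on exactly
# 3 Int arguments, and the result is fully inferred.
min(t...) == minimum(t)  # true
```

The question is whether that same mechanism scales: specializing a method on 10^7 known arguments still means compiling a 10^7-argument signature, which is presumably no cheaper just because the count is static.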