Efficiency and limits of varargs and "..."

I am getting into the habit of writing functions that take varargs, e.g.

foo(xs...) = reduce(some_op, map(some_function, xs))

and then either calling them as foo(1,2) for a small number of arguments, or as foo(some_vector...) for iterables. I am wondering about the efficiency of this: in the second case, is some intermediate form of the arguments constructed, or does some_vector go directly to the body? What about an intermediate case, e.g. foo(x, ys...)? Are there rules of thumb for when intermediate allocation happens?
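To make the cases concrete, here is the same function with hypothetical placeholder definitions filled in (some_op and some_function are just stand-ins, not real code):

some_op(a, b) = a + b    # hypothetical placeholder
some_function(x) = x^2   # hypothetical placeholder

foo(xs...) = reduce(some_op, map(some_function, xs))

some_vector = [1, 2, 3]
foo(1, 2)               # fixed arity: xs is the tuple (1, 2)
foo(some_vector...)     # splatted iterable: xs becomes (1, 2, 3) at call time
foo(0, some_vector...)  # intermediate case: xs becomes (0, 1, 2, 3)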

I am not only asking about current optimizations, but about what should be considered idiomatic in the long run, even if it is not fully optimized in 0.5.0. Historically, some languages have imposed limits on varargs (e.g. no more than a few thousand arguments), but I could not find any such limit in the Julia manual, and did not run into one in practice, so I thought I would ask.

Splatting vectors is definitely not as efficient or idiomatic as passing them directly. At a minimum, Julia will need to look up the number of elements in the vector and then dynamically dispatch to the proper signature. And naively, it'll need to compile a new specialization for each new length. I'm not sure what the current state of the optimizations is, but others will be able to say more.
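A rough illustration of where the intermediate form comes from (bar is just a hypothetical example function):

bar(xs...) = sum(xs)   # hypothetical varargs function

v = [1.0, 2.0, 3.0]

bar(v...)  # the tuple (1.0, 2.0, 3.0) is materialized from v first, and a
           # specialization of bar is compiled for this particular length
sum(v)     # passing the vector directly needs no intermediate tuple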

Moved to usage since this is about how to use the language.

I could write two versions: one for iterables, and one for multiple arguments that just calls the first. The problem is that it is not possible to dispatch on something being iterable, so the compiler could not pick the one I want.
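For concreteness, a sketch of that two-method pattern (my_op and my_f are hypothetical stand-ins for some_op and some_function):

my_op(a, b) = a + b   # hypothetical stand-in
my_f(x) = 2x          # hypothetical stand-in

foo(xs) = reduce(my_op, map(my_f, xs))  # version for any iterable
foo(xs...) = foo(xs)                    # varargs version forwards the argument tuple

foo([1, 2, 3])  # the vector goes straight to the iterable version
foo(1, 2, 3)    # the arguments are collected into a tuple, then forwarded

The catch is the one just described: a call like foo(1) with a single non-iterable argument also lands on the first method, since the one-argument method is more specific than the varargs one.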

(sorry for posting in the wrong place, I thought that questions about internals belong there; thanks for moving it)

It seems strange to want to reduce over an arbitrary number of arguments. Is this just because it is more convenient to write?

Yes, mainly. But I realize this may not be idiomatic Julia code.