The macro needs the comprehension’s bounds at parse time so it can infer the size parameter up front for better performance; a local run-time variable like n simply isn’t available then. That behavior is documented, so it isn’t going to change. Use the non-macro constructors and supply the size yourself, preferably as a compile-time constant, though that isn’t possible in this case.
Since @SVector is a macro and macros are evaluated at parse time (and n is not available at parse time), you get this error. It would only work if you used a parse-time constant range like 1:3.
But there are numerous alternatives available, including:
julia> @SVector [i for i in 1:3] # works with the literal 3, but not a variable
3-element SVector{3, Int64} with indices SOneTo(3):
1
2
3
julia> SVector{3}(i for i in 1:3)
3-element SVector{3, Int64} with indices SOneTo(3):
1
2
3
julia> SVector{3}(1:3)
3-element SVector{3, Int64} with indices SOneTo(3):
1
2
3
Note that the function f(n) = SVector{n}(1:n) is not type-stable, because the length n is a run-time value rather than a compile-time constant, so you are likely to see very poor performance if you use this function. You want the length to be brought into the type domain so that it is a compile-time constant. You might want to re-think exactly how you’re using this.
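If it helps, you can see the instability for yourself with @code_warntype; a minimal sketch, assuming StaticArrays is already loaded (inference output omitted here):
julia> f(n) = SVector{n}(1:n)
f (generic function with 1 method)
julia> @code_warntype f(3) # the inferred return type is abstract (printed in red), since n is only known at run time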
Thank you both for your replies. But just to make sure: the length n being a run-time value would make it type-unstable for any kind of stack-allocated object, right? Because initially I was trying to just use NTuples but wasn’t able to get rid of the Anys and other red entries in @code_warntype. So I guess I can’t avoid heap allocations for run-time values.
SVectors are no different from NTuples in this regard: yes, the length and element type need to be supplied to, or inferable by, the compiler to avoid boxing the value (sometimes that’s not a big deal, other times it costs considerable performance).
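To illustrate (a small sketch with hypothetical names), the same distinction shows up with plain tuples built by ntuple:
julia> unstable_tuple(n) = ntuple(i -> i^2, n) # length is a run-time value, so the tuple’s element count can’t be inferred
unstable_tuple (generic function with 1 method)
julia> stable_tuple(::Val{n}) where n = ntuple(i -> i^2, Val(n)) # length lives in the type domain, so inference succeeds
stable_tuple (generic function with 1 method)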
Often, I find that the size information is lying around and I just need to get it from an appropriate place. Like if I’m simulating the path of an object, I know whether it’s in 1D, 2D, or 3D space based on the length of its state (also an SVector).
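As a rough sketch of what I mean (the names here are made up): if the state is already an SVector, its length is part of its type, so you never need to pass a run-time integer around:
julia> state = SA[1.0, 2.0, 3.0] # 3D state; the length 3 lives in the type SVector{3, Float64}
3-element SVector{3, Float64} with indices SOneTo(3):
 1.0
 2.0
 3.0
julia> zero_velocity(s::SVector{N,T}) where {N,T} = zero(SVector{N,T}) # N comes from the type, not from a run-time value
zero_velocity (generic function with 1 method)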
There are ways to resolve this. Sometimes you can use a function barrier to mitigate a type instability. Another option (although sometimes tricky to do correctly) is to pass the dimension in the type domain. If you read those resources and still struggle with how to do them, post a small example and people can help you work it out.
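For the function-barrier approach, here is a minimal sketch with hypothetical names (and assuming StaticArrays is loaded):
julia> kernel(v::SVector) = sum(abs2, v) # once called, v has a concrete type, so this compiles to type-stable code
kernel (generic function with 1 method)
julia> function simulate(n) # n is a run-time value here
           v = SVector{n}(1:n) # type-unstable construction (length not in the type domain)
           return kernel(v) # the barrier: dispatch resolves the concrete type once, then runs fast code
       end
simulate (generic function with 1 method)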
We don’t have direct control over what is stack-allocated and what isn’t. I suppose high-level languages don’t usually control directly where memory goes, but some languages have semantics that give you so much indirect manual control that you practically know exactly where memory ends up. Julia has some indirect control too, but memory allocation in a garbage-collected language leaves more decisions to the compiler.
When the size of the SVector comes from the value of an argument, it’s usually only known during the method’s run time, and a benchmark shows the resulting heap allocations:
julia> f(n) = SVector{n, typeof(n)}(1:n)
f (generic function with 1 method)
julia> @btime f($3);
1.600 μs (11 allocations: 512 bytes)
However, the call itself could be passing a compile-time value, including a constant, so if the method is small enough to be inlined (or you suggest it with @inline), some of its work can be done at the caller method’s compile time. That can elide heap allocations, especially for immutables, which we can benchmark:
julia> g(::Val{n}) where n = f(n) # input static parameter
g (generic function with 1 method)
julia> @btime g($(Val(3)));
1.100 ns (0 allocations: 0 bytes)
julia> g() = f(3) # input constant
g (generic function with 2 methods)
julia> @btime g();
1.100 ns (0 allocations: 0 bytes)
Optimizing some work to compile time doesn’t mean the called method as a whole runs at compile time; for example, if it contains a print statement, that will still execute on every call.
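For instance (a small sketch building on the definitions above):
julia> h() = (println("still prints"); f(3)) # f(3) can be folded at compile time, but the println runs on every call
h (generic function with 1 method)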