There’s map(x->x+2, arr), which works nicely with both Array and static arrays. Is there a map_indices(ind->f(ind), arr) that constructs an array similar to arr?
You’re asking to create a fixed-size array similar to a given array (but distinct from it) at runtime, if I understand correctly? Without preallocating, or having already created an SArray and passing it into the function, I don’t think it’s possible.
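For ordinary Arrays, here is a minimal sketch of what such a helper could look like (the names map_indices and map_indices_tuple are hypothetical, not existing Base functions):

```julia
# Hypothetical map_indices: apply f to every index of arr and collect
# the results into a new array with the same shape as arr.
map_indices(f, arr::AbstractArray) = map(f, CartesianIndices(arr))

A = zeros(2, 3)
B = map_indices(I -> I[1] + I[2], A)   # 2×3 Matrix built from indices

# A tuple-based analogue for fixed-size immutable data: ntuple builds
# an immutable result without any mutation, which is the same idea
# StaticArrays relies on internally.
map_indices_tuple(f, n) = ntuple(f, Val(n))
t = map_indices_tuple(i -> i^2, 3)
```

The Val(n) form lets the tuple length be known at compile time, which is what makes the immutable path cheap.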
There isn’t a direct solution. The approach is to treat static arrays and scalars alike (the immutables), and other AbstractArrays alike (the mutables). If you then have one generic code path for immutables and one for mutables, you can cover everything I’ve run into so far. This is how DiffEq supports both, BTW: it duplicates each internal time-stepping algorithm for these two forms.
I don’t clearly understand: if scalars are immutable, why can I reassign one with another value? And if a (fixed-size) array is immutable, why can’t I reassign its elements with other values? Is there some confusion between immutability in terms of fixed size and in terms of array elements? And is there some link between heap/stack and mutable/immutable?
You can reassign it but not mutate it. With scalars, you cannot write a function change!(a) and expect it to change the value of a, since the value is passed. Instead, you’d need to do a = change(a) for immutables. For mutables, change!(a) is in many cases more performant, so if you want to optimize both cases you need both forms. You could ensure your mutating functions always return the value, so the calling code can do a = change!(a) either way, but change! itself still needs at least two code paths to handle the very different behaviors of mutable and immutable arrays.
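A minimal sketch of the two code paths (the names add_one and add_one! are illustrative):

```julia
# Out-of-place: works for scalars, tuples, and arrays alike;
# the caller rebinds the result, a = add_one(a).
add_one(a) = a .+ 1

# In-place: only valid for mutable containers. Returning a by
# convention lets calling code uniformly write a = add_one!(a).
function add_one!(a)
    a .= a .+ 1
    return a
end

s = 1.0
s = add_one(s)       # scalars (immutable) must use the rebinding form
v = [1.0, 2.0]
add_one!(v)          # Arrays (mutable) can be updated in place
```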
In Julia, yes, at least for now. That’s not always true in general: C and Fortran allow stack-allocated mutable buffers, for example. It’s just not available in Julia quite yet. And it’s not always true in Julia either, since some escape analysis can optimize some cases IIRC. As the compiler continues to get smarter, this connection between mutability and heap allocation will only lessen.
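One way to see the current connection in practice (exact allocation counts depend on the Julia version and compiler heuristics, so this is a sketch, not a guarantee):

```julia
const tup = (1.0, 2.0, 3.0, 4.0)   # immutable, typically kept on the stack
const vec = [1.0, 2.0, 3.0, 4.0]   # mutable Array, heap-allocated

bump(x) = x .+ 1                   # broadcasting preserves the container type

bump(tup); bump(vec)               # compile before measuring
@allocated(bump(tup))              # usually 0 bytes: the result is a tuple
@allocated(bump(vec))              # nonzero: a fresh Array on the heap
```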
No, it’s that the <4 x double> operations are SIMD operations computing four doubles at a time. If you wanted to hand-write the assembly and knew to use SIMD, you’d write pretty much that, yet Julia does it automatically. That doesn’t mean it’s the most optimized LLVM IR possible, but it does mean Julia generates better LLVM IR than I could write myself, which makes me bow down to the compiler gods.
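To see this yourself, a small example (the function name is illustrative); inspecting it with @code_llvm should show the <4 x double> vector instructions on most x86-64 machines with AVX:

```julia
# A simple reduction the compiler can vectorize: @simd tells Julia the
# loop iterations may be reordered, enabling SIMD code generation.
function mysum(v::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(v)
        s += v[i]
    end
    return s
end

# At the REPL:
# julia> @code_llvm mysum(rand(1000))
# and look for `fadd <4 x double>` in the printed IR.
```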