Appending to an array takes O(1) amortized time up to a certain size, as far as I know, and O(1) worst-case time if you preallocate using sizehint!(). The amortization works out because the capacity is doubled on each regrow, so n appends cost O(n) total, i.e. O(1) per append… But past a certain size, around 1% of total RAM, they stop doubling.
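For illustration, a minimal sketch of the sizehint! pattern (the sizes here are arbitrary, just for demonstration):

```julia
# Preallocating capacity with sizehint! avoids the intermediate
# regrow-and-copy steps, so each push! is plain O(1) rather than
# amortized O(1).
v = Int[]
sizehint!(v, 10_000)   # reserve capacity for 10_000 elements up front
for i in 1:10_000
    push!(v, i)        # no reallocation until the hint is exceeded
end
```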
I think “appending” here actually means vcat, because Zygote doesn’t support push! or append! unless you use Zygote.Buffer (which few people know about, and which isn’t advertised much because its performance is pretty poor).
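In case it’s useful, here is the basic Buffer pattern: mutate a Buffer inside the differentiated function, then copy it back to an ordinary array (the function f here is just a toy example):

```julia
using Zygote

function f(x)
    buf = Zygote.Buffer(x)   # mutable, AD-aware scratch space shaped like x
    for i in eachindex(x)
        buf[i] = 2 * x[i]    # writing to a Buffer is allowed under Zygote
    end
    return sum(copy(buf))    # copy(::Buffer) yields an ordinary array
end

gradient(f, [1.0, 2.0, 3.0])
```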
That said, the original question still doesn’t make sense to me, because array[i] = ... and vcat(array1, array2) don’t have much overlap, so the latter wouldn’t usually be a workaround for the former. It would help to have more detail about what it is that requires mutation, to see whether there are actual workarounds (e.g. map or array comprehensions). Failing that, ReverseDiff is another, non-experimental option that supports array mutation natively, and ForwardDiff may even be a contender depending on problem size.
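To make the “functional workaround” concrete, here is a sketch of replacing out[i] = g(x[i]) with map, using a made-up g:

```julia
using Zygote

g(x) = x^2

# Instead of allocating `out` and writing out[i] = g(x[i])
# (mutation, unsupported by Zygote), build the array functionally:
f(x) = sum(map(g, x))            # or: sum(g(xi) for xi in x)

gradient(f, [1.0, 2.0, 3.0])
```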
I think it keeps being O(1) amortized, since in the limit (as I see from a code comment):

> we end by adding about 10% of memory each time
[I.e. 10% of the array size, not of RAM size, so it no longer ever adds just a constant amount… That said, if you use more than physical RAM and start thrashing in virtual memory, then at least with old hard disks, which are not truly “random-access”, it’s no longer O(1); with SSDs, O(1) may still hold…]
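A quick back-of-the-envelope check that any fixed growth factor > 1 keeps appends amortized O(1) (this simulation is my own sketch, not the actual allocator code):

```julia
# Simulate capacity growth by a fixed factor and count how many
# elements get copied in total while pushing n elements.
function copied_elements(n; factor = 1.1)
    cap, copied = 1, 0
    while cap < n
        copied += cap                         # old contents copied on each regrow
        cap = max(cap + 1, ceil(Int, cap * factor))
    end
    return copied
end

# Total copies grow linearly in n (roughly n / (factor - 1)),
# so the per-push cost stays O(1) amortized even at a 10% growth rate.
copied_elements(10^6) / 10^6
```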
That’s at least with 1.8; I do NOT see that this was backported to the 1.6 LTS, so the O(n²) behavior is still there: