Because the possibility of throwing an error is an observable side effect of the computation. It’s pretty difficult to “roll back” other vectorized state that may have been computed (possibly out of order) as a result of the vectorization once any one of the elements turns out to produce an error.
Not necessarily. Rust for example prevents these kinds of data races by design, because they can lead to pretty gnarly vulnerabilities. Any time you encounter a segfault in a program, you’re more or less encountering an opportunity for a security exploit, typically through an out-of-bounds write that can often be leveraged for arbitrary code execution, if an attacker is motivated enough.
Memory-safe languages like Julia generally try to prevent this by having more high-level memory management in the form of a GC. This provides temporal safety (a live object can’t be freed out from under you), but it is not enough to give full memory safety in the presence of data races. Rust goes one step further and uses its borrow checker to ensure there are no data races in your code (barring use of `unsafe` blocks).
That’s why this is so insidious: it won’t manifest until some other task has access to the input vector and resizes it while you are loading data from the old memory location. `pop!`/`push!` themselves try to detect this (they won’t catch everything), but they can’t do anything about existing tasks still reading from the old object. That’s why I’m saying the optimizer really shouldn’t vectorize here: it has no knowledge of whether the memory location of the vector has changed since the last iteration.