For example, take the addition of matrix A and matrix B, yielding a matrix C. Let’s say A is 0-based and B is 1-based. This isn’t one of those offset indices applications where aligning A[1,1] and B[1,1] makes any sense, so we basically do `A.parent + B.parent`. But what should C’s indices be? 0-based, 1-based?
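For concreteness, a minimal sketch of the setup being described (assuming OffsetArrays.jl; the sizes and data are invented for illustration):

```julia
using OffsetArrays

# B: an ordinary 1-based 3×4 matrix
B = collect(reshape(1.0:12.0, 3, 4))

# A: a 3×4 matrix whose indices run over 0:2 and 0:3 instead of 1:3 and 1:4
A = OffsetArray(ones(3, 4), 0:2, 0:3)

axes(A)    # axes of A are 0:2 and 0:3; axes of B are 1:3 and 1:4
A.parent   # the underlying 1-based storage used by the `A.parent + B.parent` workaround
```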
That’s an extremely dangerous way to think about the problem. Why does aligning indices not make sense? I expect `(A + B)[i,j] = A[i,j] + B[i,j]` (“location matters”). Indexing needs to be guided by a few axioms, the other key one being `a[indxs][j] = a[indxs[j]]`, which implies that “the indices of the indices become the indices of the subset” (i.e., the indices of `indxs` become the indices of `a[indxs]`). We have good support for such axioms, except of course when people don’t implement them “properly.” Of course you can write a method `sum_without_worrying_about_indices` yourself, but that’s not what `+` should do.
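To make both axioms concrete, here is a small runnable sketch (OffsetArrays.jl assumed; the vectors are invented for illustration):

```julia
using OffsetArrays

# "Location matters": addition pairs up equal indices, so
# (x + y)[i] == x[i] + y[i], and it is only defined when the axes agree.
x = OffsetArray([1, 2, 3], 0:2)
y = OffsetArray([10, 20, 30], 0:2)
(x + y)[0] == x[0] + y[0]          # true; x + y is itself indexed by 0:2

# The second axiom: the indices of `indxs` become the indices of `a[indxs]`.
a = collect(10:10:50)              # ordinary 1-based vector [10, 20, 30, 40, 50]
indxs = OffsetArray([2, 4], 0:1)   # an index vector whose own indices are 0:1
b = a[indxs]                       # the subset inherits its indices from indxs
axes(b, 1) == axes(indxs, 1)       # true: b is indexed by 0:1, just like indxs
b[0] == a[indxs[0]]                # true: a[indxs][j] == a[indxs[j]]
```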
The right answer to your question is: throw an error. And we do:
julia> A + B
ERROR: DimensionMismatch: dimensions must match: a has dims (Base.OneTo(3), Base.OneTo(4)), b has dims (OffsetArrays.IdOffsetRange(values=0:2, indices=0:2), OffsetArrays.IdOffsetRange(values=2:5, indices=2:5)), mismatch at 1
Stacktrace:
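And if element-by-position addition really is what is wanted (the `sum_without_worrying_about_indices` mentioned above), one explicit way to spell it is with `OffsetArrays.no_offset_view`; a sketch using arrays shaped like those in the error message:

```julia
using OffsetArrays

A = collect(reshape(1:12, 3, 4))                          # plain 3×4 matrix, axes (1:3, 1:4)
B = OffsetArray(collect(reshape(1:12, 3, 4)), 0:2, 2:5)   # axes (0:2, 2:5), as in the error above

# A + B throws the DimensionMismatch shown above; aligning by position
# instead of by index is still easy, but the caller has to ask for it:
C = OffsetArrays.no_offset_view(A) + OffsetArrays.no_offset_view(B)

axes(C)   # (Base.OneTo(3), Base.OneTo(4)): ordinary 1-based axes, chosen by the caller, not by +
```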