This will return a 1-element array. You could get a scalar out by using an `Adjoint` vector instead via `[-1,1]' * [extrema(itr)...]` (see e.g. @jiahao’s talk on the meaning of transposition). But it’s a bit silly to allocate two arrays just to subtract two numbers.
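To illustrate the difference, a quick sketch (assuming `itr` is some iterable of numbers; the example input is made up):

```julia
itr = [3, 7, 1]          # hypothetical example input
lo, hi = extrema(itr)    # (1, 7)

# 1-row matrix times vector → 1-element Vector
v = [-1 1] * [lo, hi]    # [6]

# adjoint (covector) times vector → plain scalar
s = [-1, 1]' * [lo, hi]  # 6
```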
I still have some doubts: why does it return a vector with only one element and not a matrix with only one element? For what (certainly excellent) reasons was it decided to differentiate the result in the two cases, given that `[1;1]' == [1 1]` evaluates to `true`?
`[1 2]` in Julia is a 1×2 matrix (a 2d array). `[3, 4]` is a 1d array, interpreted as a 2-component column vector. Any $m \times n$ matrix times an $n$-component column vector yields an $m$-component column vector, so `[1 2] * [3, 4]` must therefore be a 1-component column vector (a 1d array with 1 element).
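A minimal check of the shapes involved:

```julia
A = [1 2]     # 1×2 matrix: size (1, 2)
x = [3, 4]    # 1d column vector: size (2,)

size(A)       # (1, 2)
size(x)       # (2,)
A * x         # 1-element Vector: [11]
```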
See the talk I linked and the issue it is based on (Taking matrix transposes seriously · Issue #20978 · JuliaLang/julia · GitHub).
TL;DR: the adjoint $x^*$ (= transpose $x^T$ for real vectors) of a column vector should “really” be a covector such that $x^* y$ is an inner product, yielding a scalar. (The whole reason why transposition is an important operation in linear algebra is its relationship to inner products.) Correspondingly, you really want `x' * y` to yield a scalar, not a 1-component vector, and all sorts of conventional linear-algebra expressions break if it doesn’t. On the other hand, it’s also extremely common in linear algebra to identify covectors with “1-row matrices”, which is “really” just a very convenient isomorphism. So, as a compromise, after much discussion, it was eventually decided to make `x'` a special type (initially `RowVector` IIRC, now `Adjoint{eltype(x), typeof(x)}`) that acts like a covector in inner products (so that `x'y` returns a scalar) but acts like a 1-row matrix for other operations (e.g. comparisons, stacking, iteration, …).
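Both behaviors can be checked directly (`using LinearAlgebra` is only needed to refer to the `Adjoint` type by name; the `'` syntax works without it):

```julia
using LinearAlgebra   # brings the Adjoint type name into scope

x = [1, 2]
y = [3, 4]

x' * y          # inner product → scalar 11, not [11]
x' isa Adjoint  # true: the wrapper type
x' == [1 2]     # true: compares like a 1-row matrix
size(x')        # (1, 2)
```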
(Matlab sidesteps this whole issue because it equates scalars with $1 \times 1$ matrices and column vectors with $n \times 1$ matrices … they are literally the same type under the hood. Introductory linear-algebra classes usually gloss over these distinctions too — when doing math by hand, you can usually tell what is intended from context, implicitly — but every semester there is inevitably some student question on whether $1 \times 1$ matrices are equivalent to scalars.)
Another consideration here is probably type stability. It would be quite bad if `*(::Matrix{Float64}, ::Vector{Float64})` could return either `Float64` or `Vector{Float64}`. With the `Adjoint` wrapper this is no longer ambiguous: `*(::Adjoint{Float64, Vector{Float64}}, ::Vector{Float64})` always gives `Float64`, while the matrix variants always give `Vector{Float64}`.
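Concretely, a small check that each method signature maps to exactly one return type:

```julia
x = [1.0, 2.0]
y = [3.0, 4.0]

typeof(x' * y)         # Float64: covector * vector → scalar
typeof([1.0 2.0] * y)  # Vector{Float64}: matrix * vector → vector
```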