Vector*matrix in 0.6

This caught me by surprise:

julia> ones(10)*ones(1,10)
ERROR: DimensionMismatch("Cannot left-multiply a matrix by a vector")
 [1] *(::Array{Float64,1}, ::Array{Float64,2}) at ./linalg/rowvector.jl:157

Is this a bug, or intentional? (I don’t see why this wouldn’t make sense…)

Maybe because of
and related to

Yes, I believe that is the cause; I'm more wondering about the why. It seems to me that the multiplication above should still work, as the dimensions are consistent, unless Vector no longer means column vector (in which case I would expect a ColumnVector type).

I think it was discussed in one of the comments, for instance here.

That’s a very long discussion; I didn’t see this issue addressed there.

Note that this issue is completely independent of whether a RowVector exists. It’s just that a column vector times a 1×N matrix is mathematically well-defined.

Agreed. It seems that this is such a fundamental operation that it should be in the test suite.

OK I’ve created an issue:

Maybe this is just a broadcast operation?

julia> ones(3).*ones(1,3)
3×3 Array{Float64,2}:
 1.0  1.0  1.0
 1.0  1.0  1.0
 1.0  1.0  1.0

To me, the question is in reverse - why did this method exist in v0.5?

The (only) reason is that it was a way of obtaining the outer product of two vectors. You can now do vector * rowvector for this. In fact, this is still a (specialized) method of *(::AbstractVector, ::AbstractMatrix).

Otherwise vector * matrix doesn’t make much sense (outside of MATLAB/Householder notation). You wouldn’t impress a math lecturer by writing B = v·A in an assignment/exam, and it would be a misuse of Dirac notation to write |psi> H and expect an operator back. The correct forms are B = v·wᵀ and O = |psi><phi|.

Believe me, I agonized over this one, but it’s really not difficult to write ones(10) * ones(10)' for the outer product (in fact, it’s one character shorter :slight_smile: ). Since the *(A::AbstractVector, B::AbstractMatrix) method was an error unless size(B, 1) == 1, it implied that the programmer knew for sure that B was a 1×N Array, which is exactly what B::RowVector guarantees.
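For reference, a minimal sketch of the 0.6 outer-product idiom (the variable names are illustrative):

```julia
# Outer product via vector * RowVector (v * w'), O[i,j] == v[i] * w[j]
v = ones(10)
w = ones(10)
O = v * w'
size(O)    # (10, 10)
```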

Maybe this is just a broadcast operation?

The implementation is precisely this.

If we view a vector as a 1-column matrix, then vector * (1-row matrix) makes perfect sense in standard linear algebra…

(I agree that there is some tension between this viewpoint and viewing vectors as living in some kind of abstract finite-dimensional Hilbert space with matrices as linear operators on them. But this seems like a case where we might as well continue to support both viewpoints, since most users will expect it.)
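To make the 1-column-matrix viewpoint concrete, here is a sketch using reshape (an explicit workaround, since vector * matrix itself errors in 0.6); for this kind of product it agrees with the broadcast result mentioned above:

```julia
v = [1.0, 2.0, 3.0]
A = [4.0 5.0 6.0]            # a 1×3 matrix
B = reshape(v, 3, 1) * A     # explicit (3×1) * (1×3) -> 3×3
B == v .* A                  # true: same 3×3 result as broadcasting
```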

Ehm, @dlfivefifty is a math professor…


Ehm, @dlfivefifty is a math professor…

Thanks for the heads up… :slight_smile:

EDIT: to clarify, you wouldn’t have impressed me as a physics lecturer or tutor, but that doesn’t mean there’s not a better way.

If we view a vector as a 1-column matrix, then vector * (1-row matrix) makes perfect sense in standard linear algebra…

Right. The point is that, in Julia, I really, really don’t view a vector as a 1-column matrix. They are unlikely to dispatch similarly in many, many contexts.

This is precisely Householder notation, which is entirely self-consistent and embraced by MATLAB. On the other hand, Julia users have held it as a point of honor that there is a very real distinction between Vector and Matrix, unlike in MATLAB.

Without RowVector, the linear algebra code had been living in a kind of halfway land between Householder notation and making that distinction. It’s fair to say that I’ve now tried to implement semantics consistent with Dirac notation. We allow:

  • rowvec * vec (inner product)
  • vec * rowvec (outer product)
  • matrix * vec (-> vec)
  • rowvec * matrix (-> rowvec)
  • matrix * matrix (-> matrix)
  • also of course scalar-array multiplication

(Not entirely consistent, because of the case of scalars vs. 1×1 matrices.)
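A quick sketch of that multiplication table in code (result shapes noted in comments):

```julia
v = ones(3); w = ones(3); M = ones(3, 3)
w' * v       # inner product: a scalar (here 3.0)
v * w'       # outer product: a 3×3 matrix
M * v        # matrix * vec: a length-3 vector
w' * M       # rowvec * matrix: a row vector
M * M        # matrix * matrix: a 3×3 matrix
2.0 * M      # scalar-array multiplication: a 3×3 matrix
```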

I think it is more useful to maximize utility than “points of honor” here. If you have an operation that is not defined in one viewpoint and is defined in the other (rather than cases where the definitions conflict), it makes more sense to me to support it.

Mathematicians generally do not use Dirac notation…if we’ve held out 78 years, you’re unlikely to convince us now. I say Julia should cater to both audiences, and not have unnecessary errors to force a specific viewpoint.


lol. Fair point.

I still think that on pen and paper (in, say, a first-year maths class) you’d squirm at B = v·A but not at B = v·wᵀ, but I don’t care enough about this particular point to hold out.