To grok "vectors..."

I get the concept of mathematical vectors and geometrical vectors. However, I have some mental blockage relating those to how they fit the programming model in Julia. Can someone point me toward a (simpleton) way of looking at vectors as used in Julia programs, please?

I don’t understand exactly what you mean by this:

Can you expand on it at all?

I just look at them as one-dimensional arrays/matrices…is there something in particular you find confusing, or could you provide a code example we could look at?


Depending on your background and disposition, this video may also help in grokking the Julia approach to vectors.


In computer science, and in Julia, a Vector is just a 1-D array, while a Matrix is a 2-D array, and an N-D array for N >= 3 is a tensor, which we simply call Array in Julia.

A Vector could be used to represent a mathematical vector or a geometrical vector, although we often prefer a Tuple for small, fixed-size versions of those.
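For example, a small sketch (using only base Julia) of the difference between the two:

```julia
# A Vector is a mutable, resizable 1-D array; a Tuple is a fixed-length,
# immutable collection whose length is part of its type.
v = [1.0, 2.0, 3.0]   # Vector{Float64}
t = (1.0, 2.0, 3.0)   # NTuple{3, Float64}

push!(v, 4.0)         # vectors can grow in place
# tuples cannot: there is no push! for t, and t[1] = 0.0 would error
```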


Several Julia docs say that all numbers in Julia are vectors. OK, in concept I can -call- the number 10, by itself, a vector, but operationally, in executing code, it is still a single value.

You have 10 cookies. Now, if you stand them on end, like dominoes, they appear as a row vector of 10 individual cookies. Then, stack them on top of each other, and you have a column of 10. Moving the row of cookies to that column of cookies can be thought of as a transpose. All the above are examples of what I think of as vectors of a single dimension.

Eat 9 of those cookies and you no longer have a vector; you now have a scalar. But Julia still sees the remaining cookie as a vector? That may be mathematically correct (?) but in reality is incorrect.

I have enjoyed using n-dimensional numerical arrays throughout my programming career, yet I have never, ever had a need to perform a matrix-multiply. Until I started flying, but that’s a different story… Note: I could not sit down and solve a matrix-multiply for you on paper, as it has been waay too long; I cannot visualize when or how to use a matrix-multiply anymore.

So a scalar being represented as a vector seems to go against common understanding in my little world. And as the referenced video shows, many of us are in the same boat.

Well, one cookie is a stack of one cookie. A very short stack, but still a stack :laughing:



I think Julia makes that differentiation quite clearly:

julia> 1 + [1]
ERROR: MethodError: no method matching +(::Int64, ::Vector{Int64})

A vector is a container. The vector is the box of cookies, which is still a vector having one or zero cookies inside.


No. All numbers are iterators shaped like zero-dimensional arrays, though. Also see this section in the FAQ: What are the differences between zero-dimensional arrays and scalars?

Numbers being iterators with shape leads to this behavior:

julia> size(7)
()

julia> length(7)
1
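That iterability means scalar code often works where a collection is expected; a quick sketch:

```julia
# Numbers are iterable, zero-dimensional collections of themselves:
for x in 7
    println(x)   # the loop body runs exactly once, with x == 7
end

sum(7)    # 7 — reductions accept a bare number
7[1]      # 7 — trivial indexing works too
```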

FTR, many believe that making all Numbers iterators was a design mistake, but it is what it is now, in any case.


Man, you completely lost me with this one. Which is why I need to understand Julia better and asked for help. :frowning:

What does the expression: 1 + [1] mean? In all other languages I know, it just means “add 1 + 1.”

In Julia, it does not mean anything. This results in an error.

We do allow broadcasting operations by prefixing operators like + with a dot:

julia> 1 .+ [1]
1-element Vector{Int64}:
 2

In Julia it does not mean anything, just as in Python:

>>> 1 + [1]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'list'

In some languages it means “add 1 to all elements of the vector [1]”. In Matlab (I think) it means 1 + 1.

But what I wanted to illustrate is that in Julia there is a clear distinction between a scalar and a vector of only one element, to the point that you cannot add a scalar to a vector, because that’s not a well-defined mathematical operation.

(If you want to sum the same scalar to all elements of a vector, use 1 .+ [1,2], with the “dot”, to indicate broadcasting).
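A few more broadcasting examples, as a quick sketch:

```julia
# The dot pairs the scalar with every element of the array,
# applying the operation elementwise:
1 .+ [1, 2]        # [2, 3]
[1, 2, 3] .* 10    # [10, 20, 30]
abs.([-1, 2, -3])  # [1, 2, 3] — the dot works for any function, not just operators
```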

NumPy interprets it as broadcasting:

>>> import numpy as np
>>> x = np.array((1,2))
>>> x
array([1, 2])
>>> 1 + x
array([2, 3])

[arbitrary number] always indicates a vector, even an empty one, and
[1] indicates an index? Or an entity with only one value?, and
[1 3 7] indicates a 3d vector with one row, 3 columns, and 7 z’s?
while [1, 3, 7] indicates what?
Color me so Dazed and Confused. :slight_smile: :slight_smile:

Square brackets are used for many purposes in Julia:

  • [] is a 0-element vector
  • [x] is a 1-element vector that only contains x. [42] is a one-element vector that contains the number 42.
  • [x, y] is a 2-element vector that contains x at its first index and y at its second.
  • [x y] is syntax for horizontally concatenating x and y together. If x and y are both numbers like 42 and 7, then [42 7] is a matrix with one row.
  • You can also use square brackets with other separators to represent vertical concatenation or even with generators to programmatically fill the array with a for loop.

These are all just ways of specifying an array and what’s inside it. Note that vectors are just 1-dimensional arrays, and they’re typically treated as a single “column” of a matrix.

These are also all wholly distinct from how Julia uses A[1] as an indexing syntax — but of course you very commonly index into arrays. :slight_smile:

julia> [1] # [1] is a Vector of length 1
1-element Vector{Int64}:
 1

julia> typeof(ans)
Vector{Int64} (alias for Array{Int64, 1})

julia> []
Any[]

julia> typeof(ans) # [] creates an empty vector of length 0
Vector{Any} (alias for Array{Any, 1})

julia> a = [1 3 7] # without any commas this is a row matrix
1×3 Matrix{Int64}:
 1  3  7

julia> typeof(ans)
Matrix{Int64} (alias for Array{Int64, 2})

julia> b = [1, 3, 7] # with commas, this is a column vector
3-element Vector{Int64}:
 1
 3
 7

julia> typeof(ans)
Vector{Int64} (alias for Array{Int64, 1})

julia> a[1,3] # brackets after a variable retrieve the element at the indicated indices
7

julia> a[2] # linear indexing is also supported
3

julia> b[3]
7
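The other bracket forms mentioned above, semicolon separators and generators, look like this (a brief sketch):

```julia
[1; 2; 3]             # semicolons concatenate vertically → a 3-element Vector
[[1 2]; [3 4]]        # stacking two rows → a 2×2 Matrix
[i^2 for i in 1:4]    # a generator fills the array programmatically → [1, 4, 9, 16]
```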

Aha! Now it starts to percolate!! Thanks, guys.


The main takeaway of the “Taking vector transposes seriously” journey is that there is no perfect design here: no matter how you do it, some aspect of the system is a bit awkward. Mathematics avoids this because the interpreter is a human who is smart enough to unconsciously paper over the inconsistencies and do the right thing.

Matlab makes the compromise that everything has at least two dimensions—there are no true scalars in Matlab, there are just 1x1 matrices. It happens that 1x1 matrices, single-element vectors, and scalars all behave similarly enough in most situations that this is workable. There are, however, situations where you would want to distinguish between a scalar and single-element vector and a 1x1 matrix and Matlab ends up doing awkward and dangerous stuff like special casing the behavior of row matrices and column matrices and “scalar matrices” by looking at the dimensions that some array happens to have, which fails quite badly when they have singleton dimensions by accident. This makes writing reliable software in Matlab quite challenging—even very basic built-in functions do wildly different things based on runtime value, such as array dimensions.

Julia has true and distinct scalars, vectors and matrices with different types, which are not treated as the same. It is, however, still often a good idea to allow them to be behaviorally fungible in many situations. If I have code that works for a scalar, should it not also work for a 0-dimensional array and vice versa? We did make an effort in the lead-up to Julia 1.0 to remove the iterability of scalars, but it turned out to be pretty disruptive to do so—a surprisingly large amount of code implicitly relies on scalars behaving like a zero-dimensional collection.

The biggest complication to the fairly simple n-d array design is the subject of #4774 and is pretty clearly outlined in the first few posts: the initial post lays out the naïvely desired behavior and the first couple of responses point out why it’s not possible:

  • If transposing a vector gives you the same vector, then you can’t make inner and outer products of vectors do different things: i.e. v'*w versus v*w': if v' is just v and w' is just w then these are the same.
  • If transposing a vector gives you a row matrix, then w'*v is a matrix-vector product, which produces a single-element vector rather than a scalar.

The first point implies that either transposing a vector gives you something other than a plain vector, or that inner and outer products use different functions, rather than being methods of the common * multiplication operator. The second point implies that if we use a plain row matrix for vector transposes, then we must either abandon the v'*w notation for the inner product of vectors, or be OK with the result being a single-element vector rather than a scalar.

The conclusion of that discussion was to preserve the classic notations:

  • v*w' is an outer product producing a matrix
  • v'*w is an inner product and produces a scalar result

The only way to do this is to have v' produce something that is neither a plain vector nor a plain row matrix. That something is called Adjoint: effectively a specialized row-matrix type which, when multiplied with a vector, produces a scalar, and which otherwise mostly behaves like a matrix whose first dimension is one. This was deemed the least annoying and most convenient solution.
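A quick sketch of that behavior:

```julia
using LinearAlgebra   # the Adjoint type lives in this standard library

v = [1, 2, 3]
w = [4, 5, 6]

v'        # a 1×3 Adjoint wrapper: neither a plain Vector nor a plain Matrix
v' * w    # inner product: a true scalar (here 32, an Int64)
v * w'    # outer product: a 3×3 Matrix
```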


And I thought I had it understood.