Hello, does someone know what exactly happens in the background when we do `[1, 2, 3, 4] / [1, 2, 3, 4]`?

For `[1, 2, 3, 4] ./ [1, 2, 3, 4]` it's clear that it's doing element-wise division.

But what about `[1, 2, 3, 4] / [1, 2, 3, 4]`?

It's doing a matrix division:

```
help?> /
[...]
──────────────────────────────────────────────────────

  A / B

  Matrix right-division: A / B is equivalent to (B' \ A')' where \ is the
  left-division operator. For square matrices, the result X is such that
  A == X*B.
```
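The docstring's equivalences are easy to verify in the REPL (the matrices below are arbitrary examples, not from the thread):

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
B = [2.0 0.0; 1.0 3.0]

X = A / B                  # matrix right-division

# The documented equivalence: A / B == (B' \ A')'
@assert X ≈ (B' \ A')'

# For square (invertible) B, the result X satisfies A == X * B
@assert A ≈ X * B
```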

What I know about matrix division is that it's basically multiplication by the inverse of the other matrix, so X = A / B is the same as X = A * inv(B), and I guess that's specific to square matrices.

Now I played around with `\` (the left-division operator) and it seems it works the same way, just swapping dividend and divisor, so that A/B == B\A.

If that's the case, I would expect

```
julia> [1 2 3 4] \ [1 2 3 4]
4×4 Matrix{Float64}:
 0.0333333  0.0666667  0.1  0.133333
 0.0666667  0.133333   0.2  0.266667
 0.1        0.2        0.3  0.4
 0.133333   0.266667   0.4  0.533333
```

and

```
julia> [1 2 3 4] / [1 2 3 4]
1×1 Matrix{Float64}:
 0.9999999999999999
```

to be the same, which is true in the case of a square matrix. So what I'm wondering is how exactly that division happens when I do `[1 2 3 4] / [1 2 3 4]`?

Essentially, if A and B are vectors of the same length, there are two ways of multiplying them, either as

- (1 × n matrix) * (n × 1 matrix) = (1 × 1 matrix), or
- (n × 1 matrix) * (1 × n matrix) = (n × n matrix).

The two possibilities are what you're seeing here.
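For example, with the same vector on both sides (a quick sketch; `a'` is the adjoint, i.e. the row form):

```julia
a = [1, 2, 3, 4]        # a Vector, which behaves like an n×1 column

# (1 × n) * (n × 1): row times column gives a scalar
@assert a' * a == 30

# (n × 1) * (1 × n): column times row gives an n × n outer product
@assert size(a * a') == (4, 4)
```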

In my case `[1 2 3 4] \ [1 2 3 4]` both are `1×4` matrices. And in the case of a vector, I think it is always taken as an n×1 matrix if we try to multiply it by a matrix.

I understood what you were trying to say about matrix multiplication, but I still don't see how it answers my question about how that division takes place.

For rectangular matrices this division corresponds to multiplication by the Moore-Penrose pseudoinverse. None of these results will make much sense unless you know that. You can see the pseudoinverse of a matrix (or vector) with the function `LinearAlgebra.pinv`.
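A quick check that the division really matches the pseudoinverse, using the vector from the thread:

```julia
using LinearAlgebra

v = [1, 2, 3, 4]

# pinv of a column vector is a row (1×4 here)
@assert size(pinv(v)) == (1, 4)

# Right division by v is multiplication by pinv(v) on the right
@assert v / v ≈ v * pinv(v)
```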

The exact way can be found with `@edit [1, 2, 3, 4] / [1, 2, 3, 4]`, which shows me this on my machine:

```
function (/)(A::AbstractVecOrMat, B::AbstractVecOrMat)
    size(A,2) != size(B,2) && throw(DimensionMismatch("Both inputs should have the same number of columns"))
    return copy(adjoint(adjoint(B) \ adjoint(A)))
end
```

As the docstring says, it takes the adjoint of both arguments, does a left division, and then takes the adjoint of the result.
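Tracing that definition by hand with the vectors from the question:

```julia
A = [1, 2, 3, 4]
B = [1, 2, 3, 4]

# adjoint both sides, left-divide, adjoint the result back
manual = copy(adjoint(adjoint(B) \ adjoint(A)))

@assert manual ≈ A / B   # the same 4×4 matrix the division produces
```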

Just to flesh this out a touch: the pseudoinverse of a vector is its conjugate transpose divided by its squared magnitude (see here), i.e. if `v` is a column vector then `pinv(v)` is a row vector, and vice versa.

The left (`\`) and right (`/`) division just differ in whether you invert the first or the second argument, so you get the two situations I first described: multiplication of (1 × n) with (n × 1), or the other way around.
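Concretely, with a 1 × 4 matrix (an illustrative sketch):

```julia
using LinearAlgebra

A = [1 2 3 4]                 # 1×4 matrix

# Right division inverts the second argument: (1×4) * (4×1) -> 1×1
@assert A / A ≈ A * pinv(A)

# Left division inverts the first argument: (4×1) * (1×4) -> 4×4
@assert A \ A ≈ pinv(A) * A
```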

Thanks, I reproduced the result using the concept of the pseudoinverse.

I will put the results here so they can be helpful for others.

The most important thing is this:

pinv(x) = x* / (x* x), where x* is the conjugate transpose.

```
julia> [1, 2, 3, 4] / [1, 2, 3, 4]
4×4 Matrix{Float64}:
 0.0333333  0.0666667  0.1  0.133333
 0.0666667  0.133333   0.2  0.266667
 0.1        0.2        0.3  0.4
 0.133333   0.266667   0.4  0.533333
```

Now the issue was how that division occurs. In normal scenarios we find A/B as A * inv(B).

So similarly, we can do it here by finding the pseudoinverse of [1, 2, 3, 4]:

```
julia> INV = [1 2 3 4] ./ ([1 2 3 4] * [1, 2, 3, 4])
1×4 Matrix{Float64}:
 0.0333333  0.0666667  0.1  0.133333

julia> [1, 2, 3, 4] * INV
4×4 Matrix{Float64}:
 0.0333333  0.0666667  0.1  0.133333
 0.0666667  0.133333   0.2  0.266667
 0.1        0.2        0.3  0.4
 0.133333   0.266667   0.4  0.533333
```
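The `INV` computed above is exactly what `pinv` returns, since `[1 2 3 4] * [1, 2, 3, 4]` is the squared magnitude (30):

```julia
using LinearAlgebra

INV = [1 2 3 4] ./ ([1 2 3 4] * [1, 2, 3, 4])   # x* / (x* x)

@assert INV ≈ pinv([1.0, 2.0, 3.0, 4.0])
@assert [1, 2, 3, 4] * INV ≈ [1, 2, 3, 4] / [1, 2, 3, 4]
```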

Let me know if you see any issues.

Thanks, everyone, for your inputs!

Yes, you picked that up quickly. More generally, for a wide matrix A with full rank the pseudoinverse is A^\dagger = A^T (A A^T)^{-1}, and A^\dagger b gives the unique minimum-norm solution of the underdetermined system Ax = b. For a tall matrix A with full rank it is A^\dagger = (A^T A)^{-1} A^T, and A^\dagger b gives the unique least-squares solution of the overdetermined system Ax = b. Even more generally, for a matrix of any size that is not necessarily of full rank, multiplying by the pseudoinverse gives the minimum-norm least-squares solution, which is unique.
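These formulas can be checked numerically with full-rank example matrices (chosen here purely for illustration):

```julia
using LinearAlgebra

# Wide, full rank: A⁺ = Aᵀ(AAᵀ)⁻¹, and A⁺b solves the underdetermined Ax = b
A = [1.0 2.0 3.0; 4.0 5.0 7.0]          # 2×3
b = [1.0, 2.0]
@assert pinv(A) ≈ A' * inv(A * A')
@assert A * (pinv(A) * b) ≈ b           # exact (minimum-norm) solution

# Tall, full rank: C⁺ = (CᵀC)⁻¹Cᵀ, and C⁺d is the least-squares solution of Cx = d
C = [1.0 1.0; 1.0 2.0; 1.0 3.0]         # 3×2
d = [1.0, 2.0, 2.0]
@assert pinv(C) ≈ inv(C' * C) * C'
@assert pinv(C) * d ≈ C \ d             # agrees with Julia's least-squares \
```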
