Julia matrix norm is different from MATLAB

Hi,
when I calculate the norm of vector a3, I get different values in Julia vs MATLAB:

[screenshots of the Julia and MATLAB results omitted]

If I copied your matrix correctly:

julia> a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2)  -sqrt(2)/2;
             -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]
2×8 Matrix{Float64}:
  0.0        0.707107  1.41421  0.707107  0.0      -0.707107  -1.41421  -0.707107
 -0.707107  -1.41421   0.0      0.707107  1.41421  -0.707107   0.0      -0.707107

julia> norm(a3)
3.464101615137755

The result looks correct to me: the sum of the squares of the elements is 12, and its square root is 3.464…, which is the 2-norm. Does MATLAB's norm function also give the 2-norm?
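For anyone who wants to check that arithmetic, here's a quick sketch using the matrix as transcribed above:

```julia
using LinearAlgebra

a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2) -sqrt(2)/2;
      -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]

ss = sum(abs2, a3)   # sum of squared entries: 6 per row, 12 in total
norm(a3) ≈ sqrt(ss)  # true — Julia's norm is the root-sum-of-squares
```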


https://www.mathworks.com/help/matlab/ref/norm.html#bvhji30-1

Wait, no:

n = norm(X) returns the 2-norm or maximum singular value of matrix X, which is approximately max(svd(X)).


Matlab’s norm gives you the operator norm for matrices, whereas Julia’s norm will always give you the standard L2 norm. The equivalent in Julia would be opnorm:

julia> using LinearAlgebra

julia> a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2)  -sqrt(2)/2;
             -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]
2×8 Matrix{Float64}:
  0.0        0.707107  …  -1.41421  -0.707107
 -0.707107  -1.41421       0.0      -0.707107

julia> opnorm(a3)
2.5495097567963927
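To connect this back to the MATLAB doc quote above, one can check (a quick sketch) that opnorm really is the largest singular value:

```julia
using LinearAlgebra

a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2) -sqrt(2)/2;
      -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]

# the induced 2-norm equals the largest singular value, like MATLAB's norm(X)
opnorm(a3) ≈ maximum(svdvals(a3))  # true
```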

Perhaps this might be worth mentioning in Noteworthy Differences from other Languages · The Julia Language, since it is a bit of a gotcha, if anyone cares to make a PR.


Actually, the previous posts have it backwards, as far as the names of the norms are concerned. For matrices, it’s opnorm that gives you the induced matrix 2-norm, \|A\|_2 = \sup_{x\neq0} \|Ax\|_2/\|x\|_2 = max singular value, whereas norm gives the root-sum-squares Frobenius norm, \|A\|_F = \sqrt{\sum_{i,j} A^2_{ij}}.

E.g.

julia> A = [1 1; 2 0]
2×2 Matrix{Int64}:
 1  1
 2  0

julia> opnorm(A)
2.2882456112707374

julia> svdvals(A)
2-element Vector{Float64}:
 2.2882456112707374
 0.8740320488976421

julia> norm(A)
2.449489742783178

julia> sqrt(1^2 + 1^2 + 2^2 + 0^2)
2.449489742783178

That’s what the documentation says as well:

help?> opnorm
search: opnorm

  opnorm(A::AbstractMatrix, p::Real=2)

  Compute the operator norm (or matrix norm) induced by the vector p-norm, where valid values of p are 1,
  2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use norm to compute the
  Frobenius norm.

and

help?> norm
search: norm normpath normalize normalize! opnorm issubnormal UniformScaling ColumnNorm set_zero_subnormals

  norm(A, p::Real=2)

  For any iterable container A (including arrays of any dimension) of numbers (or any element type for
  which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the
  corresponding length.

Note the “as if A were a vector of the corresponding length.” I.e. unpack A into a vector and compute the 2-norm of that vector. That is totally different from the induced matrix 2-norm.
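The “as if A were a vector” behavior is easy to verify directly, e.g.:

```julia
using LinearAlgebra

A = [1 1; 2 0]
# norm flattens the matrix and takes the ordinary vector 2-norm
norm(A) ≈ norm(vec(A))  # true — same elements, same computation
norm(A) ≈ sqrt(6)       # 1^2 + 1^2 + 2^2 + 0^2 = 6
```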


Matlab’s norm applied to a matrix gives the induced matrix 2-norm, equal to the matrix’s largest singular value.

Julia’s norm applied to a matrix gives the Frobenius norm, equal to the root sum of squares of the matrix elements.

Presumably Julia uses the Frobenius norm because it’s far cheaper to compute a root sum of squares than an SVD.
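The two norms are related, though: since the squared singular values sum to the squared Frobenius norm, the Frobenius norm always bounds the operator 2-norm from above. A quick sketch:

```julia
using LinearAlgebra

A = randn(50, 50)
# ‖A‖₂ = σ_max ≤ sqrt(σ₁² + … + σₙ²) = ‖A‖_F
opnorm(A) <= norm(A)  # true for any matrix
```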

Also, the title of the OP is wrong: this is a difference in matrix norms, not vector norms.
