Julia matrix norm is different from MATLAB's

When I calculate the norm of the vector a3, I get different values in Julia vs. MATLAB.



If I copied your matrix correctly:

julia> a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2)  -sqrt(2)/2;
             -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]
2×8 Matrix{Float64}:
  0.0        0.707107  1.41421  0.707107  0.0      -0.707107  -1.41421  -0.707107
 -0.707107  -1.41421   0.0      0.707107  1.41421  -0.707107   0.0      -0.707107

julia> norm(a3)
3.4641016151377544

The result looks correct to me: the sum of the squares of the elements is 12, and its square root is 3.464…, which is the 2-norm. Does MATLAB also give the 2-norm with its norm function?
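For reference, that arithmetic can be checked directly in the REPL (same a3 as above; ≈ tolerates floating-point rounding):

julia> sum(abs2, a3) ≈ 12        # sum of the squared entries
true

julia> norm(a3) ≈ sqrt(12)       # the 3.4641… value Julia reports
true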



Wait, no. The MATLAB documentation says:

n = norm( X ) returns the 2-norm or maximum singular value of matrix X , which is approximately max(svd(X)) .


Matlab’s norm gives you the operator norm for matrices, whereas Julia’s norm will always give you the standard L2 norm. The equivalent in Julia would be opnorm:

julia> using LinearAlgebra

julia> a3 = [0 sqrt(2)/2 sqrt(2) sqrt(2)/2 0 -sqrt(2)/2 -sqrt(2)  -sqrt(2)/2;
             -sqrt(2)/2 -sqrt(2) 0 sqrt(2)/2 sqrt(2) -sqrt(2)/2 0 -sqrt(2)/2]
2×8 Matrix{Float64}:
  0.0        0.707107  …  -1.41421  -0.707107
 -0.707107  -1.41421       0.0      -0.707107

julia> opnorm(a3)
2.5495097567963922

Perhaps this might be worth mentioning in Noteworthy Differences from other Languages · The Julia Language, since it is a bit of a gotcha, if anyone cares to make a PR.


Actually, the previous posts have it backwards, as far as the names of the norms are concerned. For matrices, it’s opnorm that gives you the induced matrix 2-norm, \|A\|_2 = \sup_{x\neq0} \|Ax\|_2/\|x\|_2 = max singular value, whereas norm gives the root-sum-squares Frobenius norm, \|A\|_F = \sqrt{\sum_{i,j} A^2_{ij}}.


julia> A = [1 1; 2 0]
2×2 Matrix{Int64}:
 1  1
 2  0

julia> opnorm(A)
2.2882456112707374

julia> svdvals(A)
2-element Vector{Float64}:
 2.2882456112707374
 0.8740320488976422

julia> norm(A)
2.449489742783178

julia> sqrt(1^2 + 1^2 + 2^2 + 0^2)
2.449489742783178

That’s what the documentation says as well:

help?> opnorm
search: opnorm

  opnorm(A::AbstractMatrix, p::Real=2)

  Compute the operator norm (or matrix norm) induced by the vector p-norm, where valid values of p are 1,
  2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use norm to compute the
  Frobenius norm.


help?> norm
search: norm normpath normalize normalize! opnorm issubnormal UniformScaling ColumnNorm set_zero_subnormals

  norm(A, p::Real=2)

  For any iterable container A (including arrays of any dimension) of numbers (or any element type for
  which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the
  corresponding length.

Note the “as if A were a vector of the corresponding length,” i.e., unpack A into a vector and compute the 2-norm of that vector. That is totally different from the induced matrix 2-norm.
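That “as if A were a vector” behavior is easy to check against vec, which flattens a matrix into a vector in column-major order (a quick sketch with the same A as above):

julia> norm(A) ≈ norm(vec(A))    # vec(A) == [1, 2, 1, 0]
true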


Matlab’s norm applied to a matrix gives the induced matrix 2-norm, equal to the matrix’s largest singular value.

Julia’s norm applied to a matrix gives the Frobenius norm, equal to the root sum of squares of the matrix elements.
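Both facts can be seen in one place through the singular values, since the Frobenius norm equals the 2-norm of the vector of singular values (a small sketch, with the A = [1 1; 2 0] from above):

julia> σ = svdvals(A);

julia> opnorm(A) ≈ maximum(σ)    # induced 2-norm = largest singular value
true

julia> norm(A) ≈ norm(σ)         # Frobenius norm = 2-norm of the singular values
true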

Presumably Julia uses the Frobenius norm because it’s way cheaper to compute root sum of squares than an SVD.

And also, the title of the OP is wrong: this is a difference of matrix norms, not vector norms.