Actually, the previous posts have it backwards as far as the names of the norms are concerned. For matrices, it’s opnorm
that gives you the induced matrix 2-norm, \|A\|_2 = \sup_{x\neq0} \|Ax\|_2/\|x\|_2, which equals the largest singular value, whereas norm
gives the root-sum-of-squares Frobenius norm, \|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2}.
E.g.
julia> A = [1 1; 2 0]
2×2 Matrix{Int64}:
 1  1
 2  0
julia> opnorm(A)
2.2882456112707374
julia> svdvals(A)
2-element Vector{Float64}:
2.2882456112707374
0.8740320488976421
julia> norm(A)
2.449489742783178
julia> sqrt(1^2 + 1^2 + 2^2 + 0^2)
2.449489742783178
That’s what the documentation says as well:
help?> opnorm
search: opnorm
opnorm(A::AbstractMatrix, p::Real=2)
Compute the operator norm (or matrix norm) induced by the vector p-norm, where valid values of p are 1,
2, or Inf. (Note that for sparse matrices, p=2 is currently not implemented.) Use norm to compute the
Frobenius norm.
and
help?> norm
search: norm normpath normalize normalize! opnorm issubnormal UniformScaling ColumnNorm set_zero_subnormals
norm(A, p::Real=2)
For any iterable container A (including arrays of any dimension) of numbers (or any element type for
which norm is defined), compute the p-norm (defaulting to p=2) as if A were a vector of the
corresponding length.
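Since the opnorm docstring says the valid values of p are 1, 2, or Inf, here is a quick sanity check of the other two induced norms for the same A, using the standard identities (induced 1-norm = maximum absolute column sum, induced Inf-norm = maximum absolute row sum):

```julia
using LinearAlgebra

A = [1 1; 2 0]

# induced 1-norm: maximum absolute column sum, here |1| + |2| = 3
opnorm(A, 1)    # 3.0

# induced Inf-norm: maximum absolute row sum, here |1| + |1| = |2| + |0| = 2
opnorm(A, Inf)  # 2.0
```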
Note the “as if A were a vector of the corresponding length”: norm unpacks A into a vector and computes the 2-norm of that vector. That is totally different from the induced matrix 2-norm.
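The two readings can be checked directly in the REPL: norm(A) agrees with the plain vector 2-norm of the flattened matrix, while opnorm(A) agrees with the largest singular value (equivalently, the square root of the largest eigenvalue of A'A):

```julia
using LinearAlgebra

A = [1 1; 2 0]

# norm flattens the matrix: identical to the vector 2-norm of vec(A)
norm(A) == norm(vec(A))                     # true

# opnorm is the induced 2-norm: the largest singular value of A
opnorm(A) ≈ maximum(svdvals(A))             # true
opnorm(A) ≈ sqrt(maximum(eigvals(A' * A)))  # true
```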