# Finding matrices with the same l1 and l2 norm

I want to begin with a matrix, A, and find a matrix whose l2 norm is the same as A's, but whose l1 norm is reduced so that it equals the l2 norm. My matrix has only one nonzero eigenvalue, which equals the l2 norm.

I also need to be able to reverse whatever this transformation is.

Is there a way to do this in Julia already prepared somewhere?

Is there some other relation that the new matrix must have to the original? If you just want a matrix whose l1 and l2 norms are the same value, you can do this (for a diagonal matrix, both operator norms equal the largest entry in absolute value):

julia> σ = 19/4
4.75

julia> A = [σ 0; 0 0]
2×2 Matrix{Float64}:
4.75  0.0
0.0   0.0

julia> opnorm(A,1)
4.75

julia> opnorm(A,2)
4.75


Thanks - sorry, I forgot to mention, it should be a rotation of the original.

julia> using LinearAlgebra

julia> x = rand(3); A = x * x' # A is some random rank-1 matrix
3×3 Matrix{Float64}:
0.0325498  0.0425725  0.16186
0.0425725  0.0556812  0.211699
0.16186    0.211699   0.804876

julia> opnorm(A, 1) ≈ opnorm(A, 2)
false

julia> (λ, U) = eigen(A);

julia> λ[3] ≈ opnorm(A, 2)
true

julia> B = U' * A * U # B is a rotation of A (U is unitary)
3×3 Matrix{Float64}:
1.60678e-19  -9.80364e-19  -5.815e-17
3.23819e-18  -4.94158e-18  -1.73788e-16
-2.77556e-17  -1.66533e-16   0.893107

julia> opnorm(B, 1) ≈ opnorm(B, 2) ≈ opnorm(A, 2)
true

julia> C = U * B * U'; C ≈ A # Transformation is reversible
true
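
To see why the norms coincide: B is (up to roundoff) the diagonal matrix of eigenvalues, and for a diagonal matrix both the l1 and l2 operator norms equal the largest absolute entry. A quick sketch checking this, assuming a symmetric A as above:

```julia
using LinearAlgebra

x = [1.0, 2.0, 3.0]
A = x * x'                  # symmetric rank-1 matrix
λ, U = eigen(A)
B = U' * A * U              # diagonalizes A, so B ≈ Diagonal(λ)
B ≈ Diagonal(λ)                                  # true up to roundoff
opnorm(B, 1) ≈ opnorm(B, 2) ≈ maximum(abs, λ)    # both norms = largest |eigenvalue|
```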


EDIT: This code assumes the matrix is symmetric/Hermitian (i.e., has a unitary eigendecomposition). See my next post for a more general solution.

Amazing, thank you!!

I am just having a small error with

julia> λ[5.6141820573374055] ≈ opnorm(A, 2)


when the value includes a decimal place:

ERROR: ArgumentError: invalid index: 5.6141820573374055 of type Float64


You can’t index a vector at a non-integer index. What did you want to do?

Ah, it should have been my matrix size in there, not the eigenvalue…

λ[10] ≈ opnorm(A, 2)
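
(Indexing with `end` avoids hard-coding the matrix size here, since for a symmetric/Hermitian matrix `eigen` returns the eigenvalues sorted in ascending order — a sketch assuming a symmetric A:)

```julia
using LinearAlgebra

x = rand(10)
A = x * x'              # symmetric, so eigen sorts λ in ascending order
λ, U = eigen(A)
λ[end] ≈ opnorm(A, 2)   # true for any size, with no hard-coded index
```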


But unfortunately

B = U' * A * U


doesn’t work for my matrix:

opnorm(B, 1) ≈ opnorm(B, 2) ≈ opnorm(A, 2)


is false
opnorm(B, 1) = 7.815593862067465
opnorm(B, 2) = 5.637588182823125
opnorm(A, 2) = 5.6141820573374055

Sorry, I just realized I assumed your matrix was symmetric/Hermitian (i.e., had a unitary eigendecomposition). More generally, you want to use the singular value decomposition:

julia> using LinearAlgebra

julia> x = rand(3); y = rand(3); A = x * y' # A is some random rank-1 matrix
3×3 Matrix{Float64}:
0.206881  0.02146    0.00257651
0.139733  0.0144947  0.00174024
0.395132  0.0409876  0.004921

julia> opnorm(A, 1) ≈ opnorm(A, 2)
false

julia> (U, σ, V) = svd(A);

julia> σ[1] ≈ opnorm(A, 2)
true

julia> B = U' * A * V # B is a rotation of A (U and V are unitary)
3×3 Matrix{Float64}:
0.469935     1.04626e-17  -8.67362e-19
2.86845e-17  7.47916e-18  -3.92335e-19
-0.0          0.0           0.0

julia> opnorm(B, 1) ≈ opnorm(B, 2) ≈ opnorm(A, 2)
true

julia> C = U * B * V'; C ≈ A # Transformation is reversible
true

What an easy way to do it! Is there a similar trick for non-square matrices?
Can you know how much this rotates it? Or is it tailored to each matrix?

The same code works regardless of the dimensions of the matrix (all matrices, square or not, have a singular value decomposition).
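
For instance, with the thin SVD the rotated matrix B is min(m, n) × min(m, n) — a different shape from A — but the norms still match and the transformation still reverses. A sketch with a random 4 × 2 matrix:

```julia
using LinearAlgebra

A = rand(4, 2)                  # non-square matrix
U, σ, V = svd(A)                # thin SVD: U is 4×2, V is 2×2
B = U' * A * V                  # B is 2×2, approximately Diagonal(σ)
opnorm(B, 1) ≈ opnorm(B, 2) ≈ opnorm(A, 2)   # true
U * B * V' ≈ A                  # reversible, since A = U * Diagonal(σ) * V'
```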

I’m not sure what you mean by this. Once you compute the SVD of the matrix, then yes, you know how much the matrix is rotated (here rotation is a multidimensional generalization of what we typically envision when we think of rotation).

Yes, each matrix has its own SVD. So you can’t precompute U and V and have it work for any arbitrary matrix, if that’s what you’re asking. Though if you have an application where all your matrices share the same left and right singular vectors (U and V matrices), then you could compute the SVD just once and reuse U and V. But I’m not aware of any such application.
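
A contrived sketch of that last situation, where two matrices are built so that they share the same U and V:

```julia
using LinearAlgebra

U, _, V = svd(rand(3, 3))                # borrow unitary U and V from a random SVD
A1 = U * Diagonal([3.0, 2.0, 1.0]) * V'  # two matrices constructed to share the
A2 = U * Diagonal([5.0, 1.0, 0.5]) * V'  # same left/right singular vectors
U' * A1 * V ≈ Diagonal([3.0, 2.0, 1.0])  # one pair U, V diagonalizes both
U' * A2 * V ≈ Diagonal([5.0, 1.0, 0.5])
```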

Thanks - unfortunately it’s not working the same way with non-square matrices. The resulting matrix is a different size, and the l1 norm is actually smaller than the l2 norm a lot of the time. This might just be the way it has to be, I guess?
I tried computing full matrices and then padding S and V with zeros, but this didn’t help.

I was wondering whether rotating by U.T would rotate ‘A’ a specific amount relative to what you get from U*A*V, like 90 degrees, or whether the actual rotation from U to U.T depends on what the rotation matrix ‘U’ looks like…

Anyway, it’s great to know this for square matrices, thank you!

Strange, because it worked for me for a random 3 × 2 matrix (and a random 2 × 3 matrix). Could you provide an example matrix for which it failed?

This is, again, strange, because it is theoretically impossible. An example of a matrix that produced this result would be useful here, too.

The U and V matrices returned by svd (with the full keyword argument set to true) are unitary matrices (where a unitary matrix is a generalization of a rotation matrix). A matrix U is unitary iff U'U = UU' = I. Thinking about it as a rotation, if U rotates by θ, then U' rotates by -θ (rotation by U' undoes the rotation by U).
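
In two dimensions this is easy to check explicitly, with an ordinary rotation matrix:

```julia
using LinearAlgebra

θ = π / 6
U = [cos(θ) -sin(θ); sin(θ) cos(θ)]        # rotation by θ
U' ≈ [cos(-θ) -sin(-θ); sin(-θ) cos(-θ)]   # U' is the rotation by -θ
U' * U ≈ I && U * U' ≈ I                   # unitary: U' undoes U
```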

Thanks, that’s really helpful.

And… it does work! I had Python open yesterday when I looked at this, so I just did it in there. Seems like it might be a bug there.

I was hoping that the rotation that makes the l1 and l2 norms the same might be the rotation with the minimum singular values, since that’s what it would be for σ. I did find a rotation today that does minimise them, though. I’m not sure it will be helpful for what I’m doing, so I will try with both. Thank you!