# Symmetry reduction documentation

Hi,

If our polynomial optimization problem has a particular group symmetry, we can perform symmetry reduction to obtain block-diagonal matrices. Doing this involves constructing a symmetry-adapted basis. Since symmetry-adapted bases are not unique, I want to know how @blegat constructed a symmetry-adapted basis for the SumOfSquares package. Is there any document I can look at to see how the symmetry-adapted basis is constructed algorithmically?

The SumOfSquares documentation only explains how to define a group symmetry and tell the optimization problem to exploit it. However, it does not mention how Julia finds the symmetry-adapted basis.

@Khashayar-Neshat the symmetrization is computed through SymbolicWedderburn.jl.

This can be summarised in a few steps:

1. Given the action of `G` on your monomials, the action `π` on the whole monomial basis is induced.
2. For the space of polynomials `V` (as a vector space with `G`-action) we compute the character of the representation `(V, G, π)`.
3. After finding the decomposition of `(V, G, π)` into irreducibles, some symbolic magic in the group ring `ℝG` tells us what the projections onto the isotypical summands are (that magic is finding the constituent characters which define those projections; sometimes this is followed by finding an even tighter minimal projection system).
4. Given the monomial basis, we realize those projections as sparse matrices (only now do we start computing with matrices; up to this point everything is symbolic).
5. Unitary vectors in the images of those projections are found via a sparse QR decomposition (these basis vectors form the symmetry-adapted basis).
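For intuition, the steps above can be sketched on a toy example using plain `LinearAlgebra` (this is not SymbolicWedderburn's actual API, just an illustration of the math): the cyclic group C₃ acting on three monomials by cyclically permuting the variables.

```julia
using LinearAlgebra

# Toy sketch (not SymbolicWedderburn's API): C₃ acts on the monomials
# (x, y, z) by cyclically permuting the variables; π(g) are permutation
# matrices representing that action on the monomial basis.
g = [0.0 1.0 0.0; 0.0 0.0 1.0; 1.0 0.0 0.0]  # generator of C₃
reps = [Matrix{Float64}(I, 3, 3), g, g^2]    # π over all of C₃

# Step 3: projection onto the trivial isotypical component, p = (1/|G|) Σ π(g);
# its complement I - p projects onto the remaining (2-dimensional) component.
p_triv = sum(reps) / length(reps)
p_rest = I - p_triv

# Steps 4–5: an orthonormal basis of each image via column-pivoted QR.
function image_basis(P; tol = 1e-10)
    F = qr(P, ColumnNorm())                  # rank-revealing pivoted QR
    rk = count(>(tol), abs.(diag(F.R)))      # numerical rank from R's diagonal
    return Matrix(F.Q)[:, 1:rk]              # first rk columns span image(P)
end

B = hcat(image_basis(p_triv), image_basis(p_rest))  # symmetry-adapted basis
# In this basis every π(g) becomes block-diagonal with blocks of sizes 1 and 2:
M = round.(B' * g * B; digits = 10)
@show M
```

In the new basis `B` the generator becomes block-diagonal: a 1×1 block (the trivial component) and a 2×2 rotation by 120°.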

Let me know if you want to know more!


I have three questions:
1-
How do you choose unitary vectors from the QR decomposition? If `Q` is not full rank, the choice will not be unique.

```julia
function image_basis!(
    A::AbstractSparseMatrix{T},
    ::Nothing,
) where {T<:Union{AbstractFloat,Complex}}
    F = qr(A)
    rk = rank(F)
    img = let tmp = F.Q * Matrix(I, size(A, 1), rk)
        pinv = getfield(F, :rpivinv)
        sparse((@view tmp[pinv, :])')
    end
    return droptol!(img, _eps(T) * max(size(img)...))
end
```

But I did not understand how it chooses `n = rank(Q)` linearly independent columns of `Q`. Does it take the first `n` columns?
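For context, here is a dense-matrix sketch of why a rank-revealing (column-pivoted) QR makes this choice well defined; this is generic linear algebra, not the package's exact sparse code path.

```julia
using LinearAlgebra

# A rank-2 matrix whose first two columns are linearly dependent, so naively
# taking the first rank(A) columns of A itself would fail; column pivoting
# reorders the columns first, which is what makes the choice well defined.
A = [1.0 2.0 0.0;
     2.0 4.0 1.0;
     3.0 6.0 0.0]

F = qr(A, ColumnNorm())                 # column-pivoted QR: A[:, F.p] = Q * R
rk = count(>(1e-10), abs.(diag(F.R)))   # numerical rank read off R's diagonal

Q1 = Matrix(F.Q)[:, 1:rk]               # the first rk columns of Q
# Q1 is an orthonormal basis of the column space of A: Q1' * Q1 = I, and
# projecting the columns of A onto Q1 reproduces A exactly.
@show rk
@show norm(A - Q1 * (Q1' * A))
```

The pivoting guarantees that the leading `rk`-by-`rk` block of `R` is nonsingular, so the first `rk` columns of `Q` always span the column space.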

2-
If the dimension of an irreducible representation (irrep) is greater than 1, then we can decompose further into smaller blocks or, as you said, find tighter projections. How do you find tighter projections? Since the character of an irrep is unique, the projection onto an isotypic component is unique. However, the further decomposition is not unique: tighter projections depend on the irreps in matrix form, and we can choose different bases to represent an irrep as matrices. How do you choose the matrix form of irreps of dimension higher than 1?
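To make the non-uniqueness concrete, here is a small linear-algebra illustration (not tied to any particular package): the 2-dimensional invariant subspace of the cyclic shift on ℝ³ carries a 2-dimensional real irrep, but its matrix form depends on the chosen basis, while the character (trace) does not.

```julia
using LinearAlgebra

# The cyclic shift on ℝ³ restricted to (1,1,1)^⊥ is a 2-dim real irrep.
cycle = [0.0 1.0 0.0; 0.0 0.0 1.0; 1.0 0.0 0.0]
u = fill(1 / sqrt(3), 3)

B1 = nullspace(u')              # one orthonormal basis of the invariant plane
S = [0.0 1.0; 1.0 0.0]          # an orthogonal change of basis (a swap)
B2 = B1 * S                     # another orthonormal basis of the same plane

M1 = B1' * cycle * B1           # matrix form of the irrep in basis B1
M2 = B2' * cycle * B2           # matrix form of the same irrep in basis B2
# M1 and M2 differ entrywise (rotation by +120° vs −120°), yet they are
# conjugate (M2 = S' * M1 * S) and share the same trace: the character is
# basis-independent even though the matrix form is not.
@show tr(M1) ≈ tr(M2)
```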

3-
There are some books that explain the construction of a symmetry-adapted basis, such as Linear Representations of Finite Groups (1977) by Serre or Group Theoretical Methods and Their Applications (1992) by Fässler & Stiefel. However, these books are old. Since you are an expert in this area, would you tell me which book is appropriate for learning the algorithm you use for symmetry reduction?

@Khashayar-Neshat

1. The dimension of the image (rank) of the projection is well defined; any (orthogonal) basis of the subspace would do. What we do in the piece of code you quoted is:
   • compute the `qr` decomposition and obtain its rank `rk`;
   • take the first `rk` columns of `Q` as `tmp` (since `Q` is not stored directly but rather as a sequence of Householder reflections, indexing into `F.Q` is very slow; `F.Q * I` was the fastest way I found to extract it in `Matrix` form);
   • then we just reorder its rows and take a sparse representation of the transpose (so that we're looking at rows, not columns). The shape of `image_basis` (i.e. that the rank is the number of rows) is a historical choice; probably I'd return the basis in columns now, but the first code used `rref`, so we're stuck with rows. Contributions are welcome!
2. Our implementation of projections is matrix-free. Projections are just idempotent elements (`x² = x`) in the group algebra `ℝG`.
   • The projection onto an isotypical component is unique in `ℝG`, but not as an element of `End((V, G, π))`, i.e. as a matrix: a matrix representation of a projection is already a choice of `(projection, basis)`. It's the same difference as linear operator vs. matrix.
   • Tighter projections use a lemma of Schur: over an algebraically closed field, the commutant of a matrix algebra (here: the matrix algebra defined by the linear representation of the finite group) consists of particularly simple matrices:
     • they are direct sums of endomorphisms of the isotypical subspaces (i.e. the isotypical subspaces are orthogonal, which gives us a block structure for endomorphisms; this also holds over non-algebraically closed fields where the exponent of the group is invertible);
     • within an isotypical subspace (of character `χ`) the endomorphisms are of the form `M⊗Iₙ`, where `n = degree(χ)` and `M` is square of size `m = multiplicity(χ, action_character(V, G, π))`. So in the end only `m²` parameters are needed to reconstruct the whole endomorphism. For every irreducible character `ϱ` we try to find a (non-central) projection `pᵨ` such that `ϱ∘pᵨ` is of the lowest possible rank (desirably `1`). We call `{ϱ∘pᵨ}_{ϱ ∈ Ĝ}` a minimal projection system. Sometimes it exists (symmetric and alternating groups, etc.), sometimes it doesn't (real representations of cyclic groups).
   • For specific groups and specific representations other methods are possible (e.g. @dlfivefifty in NumericalRepresentationTheory uses Murphy's idempotents paired with an explicit description of the representations by Young tableaux, but maybe he can say more about the method).
3. I think all of this stuff about irreducibles can be found in the book by Serre (or any textbook on representation theory). The formulation of the minimal projection system is nothing particularly deep, but it may require you to work a bit longer with group algebras. A somewhat condensed account of what we're doing is presented in Section 2 (theory) and Section 3.3 (particular computations) of an old paper of mine.

I don't know the other book you mentioned, but it does look very interesting!
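A small numerical check of the commutant structure described above (this is standard representation theory, not SymbolicWedderburn's internals): for S₃ permuting the coordinates of ℝ³, any matrix commuting with all permutation matrices is of the form `a·I + b·J`, and in a symmetry-adapted basis it becomes a 1×1 block plus a scalar multiple of `I₂`, i.e. the `M⊗Iₙ` shape with `n = 2` and a 1×1 block `M`.

```julia
using LinearAlgebra

# Generators of S₃ acting on ℝ³ by permuting coordinates.
swap12 = [0.0 1.0 0.0; 1.0 0.0 0.0; 0.0 0.0 1.0]
cycle  = [0.0 1.0 0.0; 0.0 0.0 1.0; 1.0 0.0 0.0]

# An element of the commutant: a·I + b·J commutes with every permutation.
a, b = 2.0, 0.5
X = a * I + b * ones(3, 3)
@assert X * swap12 == swap12 * X && X * cycle == cycle * X

# Symmetry-adapted basis: the trivial component (1,1,1)/√3 and its complement.
u = fill(1 / sqrt(3), 3)
B = hcat(u, nullspace(u'))      # orthonormal: [trivial | standard] basis

# In this basis X becomes (a + 3b) ⊕ a·I₂ — the M⊗Iₙ structure with n = 2
# and a 1×1 block M, because the standard irrep appears with multiplicity 1.
M = B' * X * B
@show round.(M; digits = 10)
```

Only one parameter per irreducible is free here (`m = 1` for both components), which is exactly the `m²`-parameter count from the answer above.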

Kind words! I just happened to be the first who needed a well-rounded package to do all of this. If that makes me an expert, that's even better.

Wow, that is a comprehensive answer; thank you very much.
