Is it possible with SumOfSquares to inspect the multipliers of the polynomial decomposition?

Let’s say I have a polynomial p and I wish to maximize it over a variety F defined by polynomials f_1, …, f_n. I assume internally the constraint is something like p = s_0 + \sum_i s_i(x)*f_i(x), with s_0 being a sum of squares and the s_i chosen so that the product degree does not exceed that of p, right? Now, is it possible to extract the s_i’s?

What I tried results in an error, but I’m not even sure I’m doing it correctly. Here is where I got stuck.

using SumOfSquares
using JuMP
using DynamicPolynomials
using MosekTools
@polyvar x y
p = x
K = @set x == 1 && y == 0 && x^2 + y^2 == 1
model = SOSModel(Mosek.Optimizer)
@variable(model, α)
@objective(model, Max, α)
@constraint(model, c, p >= α, domain = K)
optimize!(model)
lagrangian_multipliers(c)

This fails with

LoadError: ArgumentError: Bridge of type `SOSPolynomialBridge` does not support accessing the attribute `SumOfSquares.LagrangianMultipliers(1)`. If you encountered this error unexpectedly, it probably means your model has been reformulated using the bridge, and you are attempting to query an attribute that we haven't implemented yet for this bridge. Please open an issue at https://github.com/jump-dev/MathOptInterface.jl/issues/new and provide a reproducible example explaining what you were trying to do.

and I would have opened the issue if I knew what I was doing.

Because these are equalities, we compute a Gröbner basis. So if s is the sum-of-squares decomposition (which you can obtain with gram_matrix(c)), we constrain rem(p - α - s, ideal(K)) to be zero. You can do

I = ideal(K)
SemialgebraicSets.compute_gröbner_basis!(I)
divrem(p - α - s, I.p)
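Putting the pieces together with the model above, the full extraction might look like this. This is a sketch, not a tested recipe: I’m assuming divrem accepts the vector of Gröbner basis elements via I.p, as in the snippet above, and I substitute the optimal value of α explicitly since the model has already been solved.

```julia
# After optimize!(model):
g = gram_matrix(c)        # Gram matrix of the SOS part s
s = polynomial(g)         # expand it into an explicit polynomial
I = ideal(K)
SemialgebraicSets.compute_gröbner_basis!(I)
q, r = divrem(p - value(α) - s, I.p)
# r should be numerically close to zero; the entries of q play the role
# of the multipliers s_i, but with respect to the Gröbner basis I.p
# rather than the original generators f_i.
```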

Eeek! The Gröbner bases of the varieties I’m interested in push the limits of even msolve. I was just trying to glean some structure of the problem by inspecting the multipliers. There are inequality constraints on the variables in my real problems, but they should be irrelevant for now.

It is still possible to use SumOfSquares with the polynomial variables, no?

vars = @polyvar x y
obj = x^2+y
K = [x - 1, y, x^2 + y^2 - 1]
ids = eachindex(K)
mb0 = monomials(vars, 0:maxdegree(obj))
mbs = [monomials(vars, 0:maxdegree(obj)-maxdegree(f)) for f in K]
model = SOSModel(Mosek.Optimizer)
@variable(model, α)
@variable(model, s0, SOSPoly(mb0))
@variable(model, si[i in ids], Poly(mbs[i]))
@objective(model, Max, α)
@constraint(model, c, obj - α == s0 + mapreduce(*, +, si, K))
optimize!(model)
gm_s0 = value(s0)  # value of the SOSPoly variable, returned as a Gram matrix
sol_α = value(α)
sol_s0 = collect(gm_s0.basis)' * gm_s0.Q * collect(gm_s0.basis)  # expand b' * Q * b
sol_si = value.(si)
@assert obj - sol_α ≈ sol_s0 + mapreduce(*, +, K, sol_si)

This seems to work. Not sure how it will scale yet, though.

For inequality constraints, it will create SOS multipliers. Your code should work as well; it’s a bit less scalable, but should be manageable. I’m planning to add it as an option in the coming few weeks.
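For concreteness, here is a hedged sketch of what an inequality g ≥ 0 would look like in the manual formulation above: the multiplier for g is declared SOS (SOSPoly) rather than a free Poly, and the certificate constraint gains an sg*g term. The constraint g = 1 - x^2 is just an illustrative assumption, and the crude degree bound copied from above may need loosening in practice.

```julia
g = 1 - x^2                                       # hypothetical inequality g ≥ 0
mbg = monomials(vars, 0:maxdegree(obj) - maxdegree(g))
@variable(model, sg, SOSPoly(mbg))                # SOS multiplier for the inequality
# this replaces the equality-only certificate constraint:
@constraint(model, obj - α == s0 + mapreduce(*, +, si, K) + sg * g)
```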

It is still possible to use SumOfSquares with the polynomial variables, no?

No, the domain keyword is for convenience, but it’s really not doing anything so fancy. It’s quite easy to bypass it once you know a bit about the Putinar certificate, etc.