# How to efficiently generate constraints

I have a rather large SDP problem which I deconstruct (block diagonalize) into smaller ones. A baby example (~5000×5000) in `JuMP-0.18.1` took

```
215.068401 seconds (124.47 M allocations: 231.684 GiB, 16.09% gc time)
```

(There are 7229 dense constraints; at some point I spent a lot of time trying to optimize this.)

Much to my surprise, upgrading to JuMP-0.18.4 caused this:

```
905.073732 seconds (893.81 M allocations: 1.343 TiB, 38.38% gc time)
```

Below you can find some code, but let me vaguely describe it first. The problem itself is very simple:

- `P` is a vector of SD blocks (matrices of `JuMP.Variable`s)
- `M` is a `Vector{Array{Float64, 2}}` of the same length and the same dimensions as `P`
- all linear constraints look like `lhs == x - λ*u`, where
  - `lhs = sum(vecdot(M[π], P[π]) for π in 1:length(P))` (it's a sum of traces of products)
  - `x` and `u` are `Float64`s
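In other words, each constraint couples every block of `P` through a Frobenius inner product. For plain `Float64` matrices the identity behind `lhs` can be checked directly (a standalone sketch; `vecdot` and `trace` are the Julia 0.6 names, on 0.7+ they become `dot` and `LinearAlgebra.tr`):

```julia
# vecdot(M, P) is the Frobenius inner product, i.e. trace(M' * P),
# so lhs really is a sum of traces of products.
M = [1.0 2.0; 3.0 4.0]
P = [5.0 6.0; 7.0 8.0]
@assert vecdot(M, P) == sum(M .* P) == trace(M' * P)  # 70.0
```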

I suspect that the `lhs` part is not the right way to do it, but I couldn't find anything in the docs.
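For context, one pattern the JuMP 0.18 macros are written to digest efficiently is a generator of explicit `coefficient * variable` products, so that the macro sees every scalar term instead of an opaque `vecdot` call. A self-contained toy sketch (sizes and data are illustrative, not from the original problem):

```julia
using JuMP

# Toy stand-ins for the objects in the question.
m = Model()
P = [@variable(m, [i=1:n, j=1:n]) for n in (2, 3)]  # small variable blocks
M = [rand(n, n) for n in (2, 3)]                    # matching coefficient blocks
@variable(m, λ)
x, u = 1.0, 2.0

# The whole sum is written as a generator of scalar products inside the
# macro, rather than being built up outside it:
@constraint(m,
    sum(M[π][i, j] * P[π][i, j]
        for π in 1:length(P) for i in 1:size(P[π], 1), j in 1:size(P[π], 2))
    == x - λ*u)
```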

So here is the code:

```
Ns = size.(data.Uπs, 2) # Uπs: 5000×n projection matrices
m = JuMP.Model();

P = Vector{Matrix{JuMP.Variable}}(length(Ns))

for (k,n) in enumerate(Ns)
    P[k] = JuMP.@variable(m, [i=1:n, j=1:n]) # small semidefinite blocks
    JuMP.@SDconstraint(m, P[k] >= 0.0)
end

λ = JuMP.@variable(m, λ)
JuMP.@objective(m, Max, λ)

@time addconstraints!(m, P, λ, X, orderunit, data) # the timing above is from here
```
```
function addconstraints!(m::JuMP.Model,
    P::Vector{Matrix{JuMP.Variable}}, λ::JuMP.Variable,
    X::GroupRingElem, orderunit::GroupRingElem, data::OrbitData)

    orderunit_orb = orbit_spvector(orderunit.coeffs, data.orbits)
    X_orb = orbit_spvector(X.coeffs, data.orbits)
    UπsT = [U' for U in data.Uπs]

    cnstrs = constraints(parent(X).pm)
    orb_cnstr = spzeros(Float64, size(parent(X).pm)...)

    M = [Array{Float64}(n, n) for n in size.(UπsT, 1)]

    for (t, orbit) in enumerate(data.orbits)
        orbit_constraint!(orb_cnstr, cnstrs, orbit)
        constraintLHS!(M, orb_cnstr, data.Uπs, UπsT, data.dims)

        lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:length(P)))
        x, u = X_orb[t], orderunit_orb[t]
        JuMP.@constraint(m, lhs == x - λ*u)
    end
end

function constraintLHS!(M, cnstr, Us, Ust, dims)
    for π in 1:endof(Us)
        M[π] = dims[π] .* Ust[π] * cnstr * Us[π]
    end
end
```

Which line got slower when updating JuMP? It would help if you could give a minimal example so that we could fix the regression.
EDIT: You could time each line with TimerOutputs and post the timings for JuMP v0.18.1 and JuMP v0.18.4.


I noticed at one point that there were slowdowns on Julia 0.6 for JuMP 0.18.4 because of changes that were made for compatibility with Julia 1.0. We unfortunately don't have good tools to help control for this. Have you tried running the code on Julia 1.0?

Sorry for the late answer; I finally had time to look into this. I add only the first 200 constraints and use `@timeit` macros:

```
function PropertyT.addconstraints!(m::JuMP.Model,
    P::Vector{Matrix{JuMP.Variable}}, λ::JuMP.Variable,
    X::GroupRingElem, orderunit::GroupRingElem, data::PropertyT.OrbitData)

    orderunit_orb = PropertyT.orbit_spvector(orderunit.coeffs, data.orbits)
    X_orb = PropertyT.orbit_spvector(X.coeffs, data.orbits)
    UπsT = [U' for U in data.Uπs]

    cnstrs = PropertyT.constraints(parent(X).pm)
    orb_cnstr = spzeros(Float64, size(parent(X).pm)...)

    M = [Array{Float64}(n, n) for n in size.(UπsT, 1)]

    @timeit to "enumerate loop" for (t, orbit) in enumerate(data.orbits)
        t > 200 && break
        @timeit to "orbit_constraint" PropertyT.orbit_constraint!(orb_cnstr, cnstrs, orbit)
        @timeit to "constraintLHS!" PropertyT.constraintLHS!(M, orb_cnstr, data.Uπs, UπsT, data.dims)

        @timeit to "@expression" lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:endof(data.Uπs)))
        x, u = X_orb[t], orderunit_orb[t]
        @timeit to "@constraint" JuMP.@constraint(m, lhs == x - λ*u)
    end
end
```

on JuMP `v0.18.4` this takes:

```
──────────────────────────────────────────────────────────────────────────────
                                        Time                   Allocations
                                ──────────────────────   ───────────────────────
       Tot / % measured:             262s / 1.96%           7.04GiB / 98.8%

 Section              ncalls     time   %tot     avg     alloc   %tot      avg
──────────────────────────────────────────────────────────────────────────────
 enumerate loop            1    5.14s   100%   5.14s   6.96GiB  100%   6.96GiB
   @expression           200    4.84s  94.1%  24.2ms   6.77GiB  97.3%  34.7MiB
   constraintLHS!        200    258ms  5.02%  1.29ms    176MiB  2.46%   899KiB
   @constraint           200   29.5ms  0.57%   147μs   15.9MiB  0.22%  81.4KiB
   orbit_constraint      200   12.7ms  0.25%  63.7μs    163KiB  0.00%        -
──────────────────────────────────────────────────────────────────────────────
```

while exactly the same code on `v0.18.1` produces:

```
──────────────────────────────────────────────────────────────────────────────
                                        Time                   Allocations
                                ──────────────────────   ───────────────────────
       Tot / % measured:             275s / 0.16%            432MiB / 52.7%

 Section              ncalls     time   %tot     avg     alloc   %tot      avg
──────────────────────────────────────────────────────────────────────────────
 enumerate loop            1    433ms   100%   433ms    228MiB  100%    228MiB
   constraintLHS!        200    242ms  56.0%  1.21ms    176MiB  77.1%   899KiB
   orbit_constraint      200    105ms  24.3%   525μs    425KiB  0.18%  2.12KiB
   @expression           200   81.7ms  18.9%   408μs   48.5MiB  21.3%   248KiB
   @constraint           200   3.42ms  0.79%  17.1μs   3.11MiB  1.36%  15.9KiB
──────────────────────────────────────────────────────────────────────────────
```
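Both tables point at the `@expression` line, which for these sizes is dominated by `vecdot(::Matrix{Float64}, ::Matrix{JuMP.Variable})`. A tiny script like the following (illustrative only, not from the code above), run once under each JuMP version, might isolate the regression:

```julia
using JuMP

m = Model()
Q = @variable(m, [i=1:30, j=1:30])  # a dense block of JuMP variables
C = rand(30, 30)                    # a dense coefficient block

vecdot(C, Q)        # first call includes compilation
@time vecdot(C, Q)  # second call times the inner-product construction alone
```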

@blegat As for the MWE, is it ok to share the jld file with the exact data?

Ok, hereβs the (not so) M, but WE:

```
function clamp_small!(M::AbstractArray{T}, eps=eps(T)) where T
    for n in eachindex(M)
        if abs(M[n]) < eps
            M[n] = zero(T)
        end
    end
    return M
end

function constraintLHS!(M, cnstr, Us, Ust, dims, eps=1e3*eps(eltype(first(M))))
    for π in 1:endof(Us)
        M[π] = dims[π] .* clamp_small!(Ust[π] * cnstr * Us[π], eps)
    end
end

function create_simple_model(Ns)
    m = JuMP.Model()

    λ = JuMP.@variable(m, λ >= 0.0)

    P = Vector{Matrix{JuMP.Variable}}(length(Ns))

    for (k,s) in enumerate(Ns)
        P[k] = JuMP.@variable(m, [i=1:s, j=1:s])
        JuMP.@SDconstraint(m, P[k] >= 0.0)
    end
    JuMP.@objective(m, Max, λ)
    return m, λ, P
end

function addconstraints!(m, P, λ, A, B, cnstrs, Uπs, dims)
    # preallocation
    N = size(first(Uπs), 1)
    Ns = size.(Uπs, 2)
    sp_cnstr = spzeros(Float64, N, N)
    M = [Array{Float64}(n, n) for n in Ns];
    UπsT = [U' for U in Uπs]

    for t in 1:length(cnstrs)
        constraintLHS!(M, cnstrs[t], Uπs, UπsT, dims)
        lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:size(Uπs, 1)));
        a, b = A[t], B[t]
        JuMP.@constraint(m, lhs == a - λ*b);
    end
end
```

Generating simple input data:

```
srand(0)
Ns = rand(1:30, 20) # sizes of SD blocks
N = 500 # size of big constraints: 500×500
Uπs = [sprand(N, n, 0.1) for n in Ns] # originally: rank-one projection matrices

K = 2000 # number of constraints
cnstrs = [sprand(N, N, 0.1) for _ in 1:K] # big constraint matrices

A, B = sprand(K, 0.1), sprand(K, 0.1); # scalar vectors
dims = rand(1:10, 20); # random params
```

Those random `cnstrs` and `Uπs` have very different characteristics from the originals, but the problem is still visible. Finally, running:

```
m, λ, P = create_simple_model(Ns)
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
```

This is what I get:

# v0.18.1

`1.986010 seconds (3.64 M allocations: 511.511 MiB, 16.87% gc time)`
`1.650439 seconds (3.64 M allocations: 511.511 MiB)`

# v0.18.4

`4.382071 seconds (8.41 M allocations: 3.282 GiB, 11.17% gc time)`
`3.216149 seconds (8.17 M allocations: 3.270 GiB, 10.39% gc time)`

The slowdown happens on the `lhs = @expression...` line.
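As a possible workaround (a sketch I have not benchmarked against the data above): JuMP 0.18's `AffExpr` can be built up term by term with `push!`, bypassing the `@expression` machinery entirely. `M` and `P` are as in the MWE; `push!(aff, coeff, var)` is the JuMP 0.18 API, and in JuMP ≥ 0.19 the equivalent is `add_to_expression!`.

```julia
using JuMP

# Build lhs = Σ_π <M[π], P[π]> directly as an AffExpr, one coefficient at a
# time; push!(lhs, c, v) appends the term c*v without intermediate copies.
function direct_lhs(M, P)
    lhs = zero(JuMP.AffExpr)
    for π in 1:length(P)
        Mπ, Pπ = M[π], P[π]
        for idx in eachindex(Pπ)
            c = Mπ[idx]
            c == 0.0 && continue      # skip entries clamped to zero
            push!(lhs, c, Pπ[idx])
        end
    end
    return lhs
end

# usage, with m, M, P, λ, a, b as in the MWE:
# JuMP.@constraint(m, direct_lhs(M, P) == a - λ*b)
```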

@miles.lubin, I tried the MWE on Julia 0.7/1.0.3 (with trivial changes). Populating the model is fast:

```
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
  1.391647 seconds (115.38 k allocations: 385.538 MiB)
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
  1.457661 seconds (115.38 k allocations: 385.538 MiB)
```

however, the build phase takes ages (hundreds? thousands?? of seconds):

```
JuMP.setsolver(m, SCSSolver())
@time JuMP.build(m, traits=JuMP.ProblemTraits(m, relaxation=false))
1527.003029 seconds (1.27 M allocations: 1.496 GiB, 0.02% gc time)
```
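Before filing, it might help to see where `build` spends its time. Julia's sampling profiler is enough for a first look (a sketch for Julia ≥ 0.7, where `Profile` is a stdlib; `m` set up as above):

```julia
using Profile

Profile.clear()
# Sample the call stack during the slow build; the dominating internal
# call should stand out near the top of the printed tree.
@profile JuMP.build(m, traits=JuMP.ProblemTraits(m, relaxation=false))
Profile.print(maxdepth = 12)
```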

Should I file a bug in JuMP?