# How to efficiently generate constraints

I have a rather large SDP problem which I decompose (block diagonalize) into smaller ones; a baby example (~5000×5000) in JuMP `v0.18.1` took

``````
215.068401 seconds (124.47 M allocations: 231.684 GiB, 16.09% gc time)
``````

(there are 7229 dense constraints; at some point I spent a lot of time trying to optimize this)

Much to my surprise, upgrading to JuMP `v0.18.4` caused this:

``````
905.073732 seconds (893.81 M allocations: 1.343 TiB, 38.38% gc time)
``````

Below you can find some code, but let me vaguely describe it first. The problem itself is very simple:

- `P` is a vector of SD blocks (`JuMP.Variable`s)
- `M` is a `Vector{Array{Float64, 2}}` of the same length and same dimensions as `P`
- all linear constraints look like `lhs == x - λ*u`, where
  - `lhs = sum(vecdot(M[π], P[π]) for π in 1:length(P))` (it's a sum of traces of products)
  - `x` and `u` are `Float64`s

I suspect that building `lhs` this way is not right, but I couldn't find anything in the docs.
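To make the shape of these constraints concrete, here is a toy sketch with made-up 2×2 data (the real `M` comes from the orbit computation below); note the sum of traces can also be written directly inside `@constraint`, so the macro accumulates the expression in a single pass. This is written against the JuMP v0.18 / Julia 0.6 API I am using:

``````julia
using JuMP

m = JuMP.Model()
P = [JuMP.@variable(m, [i=1:2, j=1:2]) for _ in 1:3]  # toy stand-ins for the SD blocks
M = [rand(2, 2) for _ in 1:3]                         # toy stand-ins for the data matrices
λ = JuMP.@variable(m, λ)
x, u = 1.0, 0.5                                       # made-up scalars

# one constraint of the form lhs == x - λ*u, with lhs built inside the macro
JuMP.@constraint(m, sum(vecdot(M[π], P[π]) for π in 1:length(P)) == x - λ*u)
``````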

So here is the code:

``````
Ns = size.(data.Uπs, 2) # Uπs: 5000×n projection matrices
m = JuMP.Model();

P = Vector{Matrix{JuMP.Variable}}(length(Ns))

for (k, n) in enumerate(Ns)
    P[k] = JuMP.@variable(m, [i=1:n, j=1:n]) # small semidefinite blocks
    JuMP.@SDconstraint(m, P[k] >= 0.0)
end

λ = JuMP.@variable(m, λ)
JuMP.@objective(m, Max, λ)

@time addconstraints!(m, P, λ, X, orderunit, data) # the timing above is from here
``````
``````
function addconstraints!(m::JuMP.Model,
                         P::Vector{Matrix{JuMP.Variable}}, λ::JuMP.Variable,
                         X::GroupRingElem, orderunit::GroupRingElem, data::OrbitData)

    orderunit_orb = orbit_spvector(orderunit.coeffs, data.orbits)
    X_orb = orbit_spvector(X.coeffs, data.orbits)
    UπsT = [U' for U in data.Uπs]

    cnstrs = constraints(parent(X).pm)
    orb_cnstr = spzeros(Float64, size(parent(X).pm)...)

    M = [Array{Float64}(n, n) for n in size.(UπsT, 1)]

    for (t, orbit) in enumerate(data.orbits)
        orbit_constraint!(orb_cnstr, cnstrs, orbit)
        constraintLHS!(M, orb_cnstr, data.Uπs, UπsT, data.dims)

        lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:length(P)))
        x, u = X_orb[t], orderunit_orb[t]
        JuMP.@constraint(m, lhs == x - λ*u)
    end
end

function constraintLHS!(M, cnstr, Us, Ust, dims)
    for π in 1:endof(Us)
        M[π] = dims[π] .* Ust[π] * cnstr * Us[π]
    end
end
``````

Which line got slower when updating JuMP? It would help if you could give a minimal example so that we could fix the regression.
EDIT: You could time each line with TimerOutputs and post the timings for JuMP v0.18.1 and JuMP v0.18.4.
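For example, a minimal TimerOutputs pattern looks like this (the section names and the timed code are arbitrary placeholders):

``````julia
using TimerOutputs

const to = TimerOutput()

# wrap each suspect line in a named @timeit section
@timeit to "expensive section" begin
    s = sum(rand(1000))  # stand-in for the code being timed
end

show(to)  # prints ncalls / time / allocations per section
``````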


I noticed at one point that there were slowdowns on Julia 0.6 for JuMP 0.18.4 because of changes that were made for compatibility with Julia 1.0. We unfortunately don’t have good tools to help control for this. Have you tried running the code on Julia 1.0?

Sorry for the late answer; I finally had time to look into this. I added only the first 200 constraints and used `@timeit` macros:

``````
function PropertyT.addconstraints!(m::JuMP.Model,
                                   P::Vector{Matrix{JuMP.Variable}}, λ::JuMP.Variable,
                                   X::GroupRingElem, orderunit::GroupRingElem, data::PropertyT.OrbitData)

    orderunit_orb = PropertyT.orbit_spvector(orderunit.coeffs, data.orbits)
    X_orb = PropertyT.orbit_spvector(X.coeffs, data.orbits)
    UπsT = [U' for U in data.Uπs]

    cnstrs = PropertyT.constraints(parent(X).pm)
    orb_cnstr = spzeros(Float64, size(parent(X).pm)...)

    M = [Array{Float64}(n, n) for n in size.(UπsT, 1)]

    @timeit to "enumerate loop" for (t, orbit) in enumerate(data.orbits)
        t > 200 && break
        @timeit to "orbit_constraint" PropertyT.orbit_constraint!(orb_cnstr, cnstrs, orbit)
        @timeit to "constraintLHS!" PropertyT.constraintLHS!(M, orb_cnstr, data.Uπs, UπsT, data.dims)

        @timeit to "@expression" lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:endof(data.Uπs)))
        x, u = X_orb[t], orderunit_orb[t]
        @timeit to "@constraint" JuMP.@constraint(m, lhs == x - λ*u)
    end
end
``````

on JuMP `v0.18.4` this takes:

``````
─────────────────────────────────────────────────────────────────────────────
                                      Time                   Allocations
                              ──────────────────────   ───────────────────────
       Tot / % measured:           262s / 1.96%           7.04GiB / 98.8%

Section              ncalls     time   %tot     avg     alloc   %tot      avg
─────────────────────────────────────────────────────────────────────────────
enumerate loop            1    5.14s   100%   5.14s   6.96GiB  100%   6.96GiB
  @expression           200    4.84s  94.1%  24.2ms   6.77GiB  97.3%  34.7MiB
  constraintLHS!        200    258ms  5.02%  1.29ms    176MiB  2.46%   899KiB
  @constraint           200   29.5ms  0.57%   147μs   15.9MiB  0.22%  81.4KiB
  orbit_constraint      200   12.7ms  0.25%  63.7μs    163KiB  0.00%        -
─────────────────────────────────────────────────────────────────────────────
``````

while exactly the same code on `v0.18.1` produces:

``````
─────────────────────────────────────────────────────────────────────────────
                                      Time                   Allocations
                              ──────────────────────   ───────────────────────
       Tot / % measured:           275s / 0.16%            432MiB / 52.7%

Section              ncalls     time   %tot     avg     alloc   %tot      avg
─────────────────────────────────────────────────────────────────────────────
enumerate loop            1    433ms   100%   433ms    228MiB  100%    228MiB
  constraintLHS!        200    242ms  56.0%  1.21ms    176MiB  77.1%   899KiB
  orbit_constraint      200    105ms  24.3%   525μs    425KiB  0.18%  2.12KiB
  @expression           200   81.7ms  18.9%   408μs   48.5MiB  21.3%   248KiB
  @constraint           200   3.42ms  0.79%  17.1μs   3.11MiB  1.36%  15.9KiB
─────────────────────────────────────────────────────────────────────────────
``````

@blegat As for the MWE: is it ok to attach the jld file with the exact data?

Ok, here’s the (not so) M, but WE:

``````
function clamp_small!(M::AbstractArray{T}, eps=eps(T)) where T
    for n in eachindex(M)
        if abs(M[n]) < eps
            M[n] = zero(T)
        end
    end
    return M
end

function constraintLHS!(M, cnstr, Us, Ust, dims, eps=1e3*eps(eltype(first(M))))
    for π in 1:endof(Us)
        M[π] = dims[π] .* clamp_small!(Ust[π]*cnstr*Us[π], eps)
    end
end

function create_simple_model(Ns)
    m = JuMP.Model()

    λ = JuMP.@variable(m, λ >= 0.0)

    P = Vector{Matrix{JuMP.Variable}}(length(Ns))

    for (k, s) in enumerate(Ns)
        P[k] = JuMP.@variable(m, [i=1:s, j=1:s])
        JuMP.@SDconstraint(m, P[k] >= 0.0)
    end
    JuMP.@objective(m, Max, λ)
    return m, λ, P
end

function addconstraints!(m, P, λ, A, B, cnstrs, Uπs, dims)
    # preallocation
    N = size(first(Uπs), 1)
    Ns = size.(Uπs, 2)
    sp_cnstr = spzeros(Float64, N, N)
    M = [Array{Float64}(n, n) for n in Ns]
    UπsT = [U' for U in Uπs]

    for t in 1:length(cnstrs)
        constraintLHS!(M, cnstrs[t], Uπs, UπsT, dims)
        lhs = @expression(m, sum(vecdot(M[π], P[π]) for π in 1:length(Uπs)))
        a, b = A[t], B[t]
        JuMP.@constraint(m, lhs == a - λ*b)
    end
end
``````

Generating simple input data:

``````
srand(0)
Ns = rand(1:30, 20) # sizes of the SD blocks
N = 500 # size of the big constraints: 500×500
Uπs = [sprand(N, n, 0.1) for n in Ns] # originally: rank-one projection matrices

K = 2000 # number of constraints
cnstrs = [sprand(N, N, 0.1) for _ in 1:K] # big constraint matrices

A, B = sprand(K, 0.1), sprand(K, 0.1) # scalar vectors
dims = rand(1:10, 20) # random parameters
``````

Those random `cnstrs` and `Uπs` have very different characteristics from the originals, but the problem is still visible. Finally running:

``````
m, λ, P = create_simple_model(Ns)
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
``````

This is what I get:

**v0.18.1**

``````
1.986010 seconds (3.64 M allocations: 511.511 MiB, 16.87% gc time)
1.650439 seconds (3.64 M allocations: 511.511 MiB)
``````

**v0.18.4**

``````
4.382071 seconds (8.41 M allocations: 3.282 GiB, 11.17% gc time)
3.216149 seconds (8.17 M allocations: 3.270 GiB, 10.39% gc time)
``````

The slowdown happens on the `lhs = @expression...` line.
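As a possible workaround I could build the affine expression by hand instead of through `@expression`. This is only a sketch against the JuMP v0.18 / Julia 0.6 API (`push!(aff, coeff, var)` appends one term to a `JuMP.AffExpr`); I have not verified that it sidesteps the regression:

``````julia
using JuMP

# Accumulate the sum-of-traces lhs term by term into a single AffExpr,
# instead of letting @expression allocate intermediate expressions.
function manual_lhs(M, P)
    aff = JuMP.AffExpr()  # empty affine expression
    for π in 1:length(P), idx in eachindex(M[π])
        push!(aff, M[π][idx], P[π][idx])  # append coeff * var
    end
    return aff
end

# usage inside the loop (x, u, λ, m as in the code above):
# JuMP.@constraint(m, manual_lhs(M, P) == x - λ*u)
``````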

@miles.lubin, I tried the MWE on Julia 0.7/1.0.3 (with trivial changes). Populating the model is fast:

``````
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
  1.391647 seconds (115.38 k allocations: 385.538 MiB)
@time addconstraints!(m, P, λ, A, B, cnstrs[1:100], Uπs, dims);
  1.457661 seconds (115.38 k allocations: 385.538 MiB)
``````

however, the build phase takes ages (hundreds, if not thousands, of seconds):

``````
JuMP.setsolver(m, SCSSolver())
@time JuMP.build(m, traits=JuMP.ProblemTraits(m, relaxation=false))
1527.003029 seconds (1.27 M allocations: 1.496 GiB, 0.02% gc time)
``````

Should I file a bug in JuMP?