I have a problem with a 3000×3000 matrix of JuMP variables and several million constraints on them. In the naive formulation the sheer size is what prevents solving the problem in reasonable time.

I decided to play smart and broke the problem up into several smaller matrices (`P[i]`'s) and transformations (`U[i]`'s), reducing the number of JuMP variables to ~10000. However, now I have a few dozen 3000×3000 matrices of JuMP's `GenericAffExpr{Float64,JuMP.Variable}` to generate and *sum*. Doing this directly takes a ridiculous amount of memory (I ran out at 30 GB) and time.

I suspect I am doing something wrong in the problem generation; here is the code:

```
m = JuMP.Model()
len = length(U)
P = Vector{Any}(len)
for i in 1:len
    u = size(U[i], 2)
    # anonymous SDP variable block assigned into P[i]; the SDP tag
    # already constrains it to be PSD, so no separate @SDconstraint is needed
    P[i] = JuMP.@variable(m, [1:u, 1:u], SDP)
end
```

This generates `len` SDP matrices (`P`) of the appropriate sizes, which I transform using the matrices `U` and assemble into the original one:

```
PP = JuMP.@expression(m, sum(U[k]*P[k]*(U[k]') for k in 1:len))
```

I know this is not an efficient way to do it (because the matrix multiplications involve a lot of `+` between `AffExpr`s), so should I write my own matrix multiplication routines using `AffExpr` and/or `@expression`, à la

```
*{T}(A::Array{T, 2}, B::Array{JuMP.Variable,2}) = ...
*{T}(A::Array{JuMP.GenericAffExpr{T,JuMP.Variable}, 2}, B::Array{T,2}) = ...
```

or is there a clever way around this?
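For reference, here is a minimal sketch of what one such hand-rolled routine could look like. It assumes the JuMP 0.18-era API, where `push!(aff, coeff, var)` appends a coefficient–variable pair to an `AffExpr` in place; the loop structure and zero-skipping are my own additions, not anything JuMP provides:

```julia
import JuMP

# Hedged sketch (JuMP 0.18-era API): multiply a numeric matrix by a
# matrix of JuMP variables, accumulating each entry into a single
# AffExpr via push! instead of allocating intermediate expressions.
function Base.:*(A::Matrix{T}, B::Matrix{JuMP.Variable}) where {T<:Number}
    m, n = size(A)
    n == size(B, 1) || throw(DimensionMismatch("A*B: inner dimensions must match"))
    p = size(B, 2)
    C = [zero(JuMP.AffExpr) for i in 1:m, j in 1:p]
    for j in 1:p, k in 1:n, i in 1:m
        a = A[i, k]
        a == 0 && continue            # skip structural zeros of A
        push!(C[i, j], a, B[k, j])    # in place: C[i,j] += a * B[k,j]
    end
    return C
end
```

The analogous `AffExpr`-times-numeric method could accumulate with `append!` on a scaled copy, and the sum over `k` could likewise be folded into one preallocated matrix of `AffExpr`s instead of materializing each `U[k]*P[k]*U[k]'` as a fresh 3000×3000 expression matrix, if I understand where the allocations come from.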