Hi,

I started with functions to build matrices from blocks / components like

```
using LinearAlgebra: diagm

function build_A(a::AbstractMatrix, b::AbstractMatrix, c::AbstractVector)
    T = promote_type(eltype(a), eltype(b), eltype(c))  # common element type
    x = [a diagm(c)]                                   # horizontal block concatenation
    return [x b; x zeros(T, size(b))]                  # assemble the block matrix
end
end
```

Now assume that, e.g., `c` holds variables we want to investigate, or decision variables in an optimization problem.

This approach worked to construct fully symbolic matrices with `Symbolics.jl`: with the use of `promote_type`, all zeros, ones, etc. can be set to have a common type, so the resulting matrix has an element type "below" `Any`. Note that `Symbolics.Num <: Real`, and `promote_type(Float64, Symbolics.Num) == Symbolics.Num`.
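
To illustrate the mechanism without pulling in `Symbolics.jl`, here is a minimal sketch with a toy wrapper type (`Sym` below is hypothetical, for illustration only): a `Real` subtype with a promotion rule behaves in array construction the same way `Symbolics.Num` does.

```
# Toy analogue of `Symbolics.Num`: a wrapper subtyping `Real` with a
# promotion rule, so mixed literal/symbolic arrays get a concrete
# common element type instead of `Any`.
struct Sym <: Real
    name::String
end

Sym(x::Real) = Sym(string(x))                # lift numeric literals
Base.convert(::Type{Sym}, x::Real) = Sym(x)  # used during concatenation
Base.promote_rule(::Type{Sym}, ::Type{T}) where {T<:Real} = Sym

promote_type(Float64, Sym)                   # Sym, mirroring promote_type(Float64, Num) == Num
eltype([1.0 Sym("c1"); Sym("c2") 0.0])       # Sym, not Any
```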

With JuMP, the story is different. There is (intentionally) no common supertype for fixed values (e.g. `Real`) and variables. The only common "wrapper" for scalars I have found so far is `AffExpr <: ... <: AbstractJuMPScalar`.

I wrote a small test function to check the memory overhead of this wrapping; see below.

```
function AffExpr_wrap_overhead(T::Type, n::Int)
    x = rand(T, n, n)
    X = AffExpr.(x)   # wrap every entry as a constant affine expression
    return Base.summarysize(X) / Base.summarysize(x)
end
AffExpr_wrap_overhead(n) = AffExpr_wrap_overhead(Float64, n)
```

Using `Float64`, the memory overhead is ca. 30x (for a dense matrix as produced by `rand`). This is enormous.

(My matrices are structured and sparse, so there may be ways to avoid storing a lot of `AffExpr(0.0)` entries. Converting the matrix to a `SparseArrays.SparseMatrixCSC` needs about 0.4-0.5x the memory of the dense version.)
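
For a self-contained sketch of that saving, here is the conversion with plain `Float64` entries (so it runs without JuMP); the same `sparse` call applies entrywise to a `Matrix{AffExpr}`, storing only the nonzero expressions. The tridiagonal pattern below is just an assumed stand-in for a structured matrix; the exact ratio depends on the density.

```
using SparseArrays

# Build a structured dense matrix (tridiagonal, as an example pattern),
# then drop the structural zeros via sparse().
n = 200
D = zeros(n, n)
for i in 1:n
    D[i, i] = 104.0
    if i > 1
        D[i, i-1] = -2.0
        D[i-1, i] = -2.0
    end
end

S = sparse(D)   # SparseMatrixCSC; zero entries are not stored
Base.summarysize(S) / Base.summarysize(D)   # well below 1 for this sparsity
```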

The matrix (this is a very small example) may then look something like:

```
20×20 Matrix{AffExpr}:
  104    -2     0    -2  -100     0     0  …            0            0            0
   -2   104    -2     0     0  -100     0               1            0            0
    0    -2   104    -2     0     0  -100               0            1            0
   -2     0    -2   104     0     0     0               0            0            1
 -100     0     0     0   100     0     0               0            0            0
    0  -100     0     0     0   100     0  …            0            0            0
    0     0  -100     0     0     0   100               0            0            0
    ⋮                               ⋮                                ⋱
    0     0     0     0     0     0     0               0            0            0
    0     0     0     0     0     0     0  …            0            0            0
    0     0     0     0     0     0     0           -c[1]            0        -c[4]
    0     0     0     0     0     0     0     c[1] + c[2]        -c[2]            0
    0     0     0     0     0     0     0           -c[2]  c[2] + c[3]        -c[3]
    0     0     0     0     0     0     0               0        -c[3]  c[3] + c[4]
```

Does anyone have another idea for how to put things together?

I would like to keep the function/matrix design as it is, because there are multiple levels (one matrix is built with others as blocks, all the way down to the input parameters).

(As context, I'm working on a control problem. Descriptions of how to build e.g. block matrices are very common in papers.)

(edit: clarified what the output looks like)