 # What is the best tool to use for weighting matrices

I have a problem that involves combining several matrices with a weighted average to optimize for a given feature, such as the average value over the data in the resulting matrix.

The setup is essentially like this.

```
weights = (1, 2, 8)

matrixA = [1 2; 5 8]

matrixB = [5 7; 9 0]

matrixC = [4 5; 7 8]

averageMatrix = (matrixA*weights[1] + matrixB*weights[2] + matrixC*weights[3])/3

matrixAveragePre = sum(averageMatrix[i, j] for i in 1:2, j in 1:2)

matrixAveragePost = matrixAveragePre/4
```

With these dummy values the output is the expected:

20.833333333333332

This is an output I would like to optimize by adjusting the weights, either to produce a specific number such as, say, 30, or to maximize it.
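For concreteness, the setup above can be written as a single function of the weights (a sketch; it assumes the weights multiply the matrices by index, which reproduces the value above):

```julia
matrixA = [1 2; 5 8]
matrixB = [5 7; 9 0]
matrixC = [4 5; 7 8]

# Average of the weighted matrices, then the average of the four entries.
function combined_score(weights)
    averageMatrix = (matrixA*weights[1] + matrixB*weights[2] + matrixC*weights[3]) / 3
    return sum(averageMatrix) / 4
end

combined_score([1, 2, 8])  # ≈ 20.8333
```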

This is a fairly straightforward thing to do using Excel’s built-in Solver tool, but I am not sure what the most convenient equivalent here is. I have looked at JuMP and Optim, but they don’t seem to solve this sort of problem, unless I am mistaken?

This is easily done in both Optim and JuMP: just define a function that takes `weights` as input and (in your example) returns `(30 - weights*matrices)^2`, then minimize that function.


Thanks, that gets me in the door, but now I’ve set it up as follows:

```
function f(weights)
    (30 - weights*matrixAveragePost)^2
end

Optim.minimum(f, weights)
```

which yields: 620.8402777777778, which isn’t much use.

`show(weights)` returns the initial [1.0, 2.0, 8.0]

Am I doing something wrong here? I need the adjusted weights returned.
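For the record, Optim’s entry point is `optimize`, which returns a result object; `Optim.minimum` reads the attained objective value off such a result rather than running a solve, and `Optim.minimizer` returns the adjusted weights. A minimal sketch of the intended pattern, with the objective rebuilt from the example matrices:

```julia
using Optim

# The original computation, written as a function of the weights.
matrices = [[1 2; 5 8], [5 7; 9 0], [4 5; 7 8]]
score(w) = sum(sum(matrices[i] * w[i] for i in 1:3) / 3) / 4

# Squared distance from the target; zero means the target is hit exactly.
f(w) = (30 - score(w))^2

# optimize runs the solver (Nelder-Mead by default for a plain function);
# the minimizer and the minimum are then read off the result object.
res = optimize(f, [1.0, 2.0, 8.0])
best_w = Optim.minimizer(res)
Optim.minimum(res)  # ≈ 0 when some weight vector reaches 30
```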

This is easy to do in JuMP:

```
using JuMP, Ipopt
matrices = [
    [1 2; 5 8],
    [5 7; 9 0],
    [4 5; 7 8],
]
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= weights[1:length(matrices)])
@expression(model, X, sum(weights[i] * matrices[i] for i in 1:length(matrices)))
@objective(model, Min, (30 - X[2, 2])^2)
optimize!(model)
value.(weights)
value.(X)
```

Thanks, this seems to be doing the right thing when I run it in my program. I will need to figure out the syntax a bit, especially how you got the formula for @objective, but I’ll scratch my head over it. At the moment I am just using the length of the actual matrices rather than the 2’s there.

```
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        5
                     variables with only lower bounds:        5
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.0000000e+00 0.00e+00 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.0000000e+00 0.00e+00 1.00e-02  -1.7 2.00e-02    -  9.90e-01 1.00e+00f  1
   2  1.0000000e+00 0.00e+00 1.50e-06  -3.8 1.50e-02    -  1.00e+00 1.00e+00f  1
   3  1.0000000e+00 0.00e+00 2.51e-14  -8.6 1.67e-03    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 3

                                   (scaled)                 (unscaled)
Objective...............:   1.0000000000000000e+00    1.0000000000000000e+00
Dual infeasibility......:   2.5059035640133008e-14    2.5059035640133008e-14
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   1.1704648838917575e-15    1.1704648838917575e-15
Overall NLP error.......:   2.5059035640133008e-14    2.5059035640133008e-14

Number of objective function evaluations             = 4
Number of objective gradient evaluations             = 4
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 1
Total CPU secs in IPOPT (w/o function evaluations)   = 0.053
Total CPU secs in NLP function evaluations           = 0.000

EXIT: Optimal Solution Found.
```

However, when I `println(weights)`, it gives me:

`VariableRef[weights[1], weights[2], weights[3], weights[4], weights[5]]`

The main thing is that I need to go back and actually apply those weights to the input matrices.

In general, if you think these packages are the right ones for the job I’ll have to spend some time figuring them out.
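To apply the solved weights back to the input matrices, extract them with `value.(weights)` and form the weighted combination in plain Julia (a sketch; the hard-coded `w` below stands in for the values a solved model would return):

```julia
matrices = [[1 2; 5 8], [5 7; 9 0], [4 5; 7 8]]

# w stands in for value.(weights) from a solved JuMP model.
w = [1.25906, 2.06343, 12.3551]

# Weighted combination and the scalar summary from the original setup.
weighted = sum(w[i] * matrices[i] for i in 1:length(matrices))
result = sum(weighted / 3) / 4  # ≈ 30 for these example weights
```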

> `println(weights)`

Try the last two lines of my code.

> especially how you got the formula

It’s the squared difference between what you have and what you want. We want to minimize that.


> Try the last two lines of my code.

Oh, I understand now.

It didn’t occur to me that I needed to assign `value.(weights)` to a variable in order to use it subsequently. I’m still not used to needing to explicitly call for outputs like that, sorry.

But the result doesn’t seem to be what I’m expecting. I’m getting:

[1.8750000000249656, 264.0458596788466, 1.875000000025152]

whereas GRG nonlinear in Excel Solver is giving me the correct weights of:

[7.604777427, 2.606567535, 3.677612558]


You can check a proposed solution for weights by using your code in your original post. A function that does this is

```
function check_answer(matrixArray, weightArray)
    L = length(matrixArray)
    m, n = size(matrixArray[1])
    averageMatrix = sum(matrixArray[i]*weightArray[i] for i in 1:L)/L

    matrixAveragePre = accumulate(+, averageMatrix[i,j] for i in 1:m, j in 1:n)
    matrixAveragePost = matrixAveragePre[m,n]/(m*n)

    return matrixAveragePost
end
```

If I have

```
matrices = [
    [1 2; 5 8],
    [5 7; 9 0],
    [4 5; 7 8],
]
```

and `wA = [1,2,8]` I get `check_answer(matrices, wA) = 20.833`… as expected.
If I use the answer `wA = [7.604777427, 2.606567535, 3.677612558]`, then I get
`check_answer(matrices, wA) = 22.0564`… so not really an answer if the goal is to approximate 30.

Using `wA = [1.8750000000249656, 264.0458596788466, 1.875000000025152]` I get 468.330… but this is because Oscar’s answer is solving a slightly different problem. Your problem is more like

```
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)

matrices = [
    [1 2; 5 8],
    [5 7; 9 0],
    [4 5; 7 8],
]

m, n = size(matrices[1])

@variable(model, 0 <= weights[1:length(matrices)])
@expression(model, X, sum(weights[i] * matrices[i] for i in 1:length(matrices)))
## New: the next line does the same as the "matrixAveragePre = accumulate(+, ..." line
@expression(model, Y, sum(X[i,j] for i in 1:m for j in 1:n))
## Total objective, with the averaging denominators 3 and 4 dividing Y:
@objective(model, Min, (30 - Y/(3*4))^2)

optimize!(model)
value.(weights)
```

Typing `Y` in the REPL returns `16 weights[1] + 21 weights[2] + 24 weights[3]`, and at a solution for the weights this expression approximately equals 30*3*4 = 360.
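Those coefficients are just the entrywise sums of the three input matrices, which is quick to check:

```julia
matrices = [[1 2; 5 8], [5 7; 9 0], [4 5; 7 8]]

# Entrywise sum of each matrix; these are the coefficients of the
# weights in the expression JuMP prints for Y.
coeffs = [sum(M) for M in matrices]  # [16, 21, 24]
```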

With this in mind, it should be clear that the problem does not have a unique solution, e.g.
`check_answer(matrices, [1.25906 2.06343 12.3551] ) = 29.999...`
`check_answer(matrices, [ 11.0037 3.51295 4.5904 ] ) = 30.000...`
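Both of those weight vectors can be confirmed against the original computation (a sketch that inlines the check_answer logic):

```julia
matrices = [[1 2; 5 8], [5 7; 9 0], [4 5; 7 8]]

# Weighted average of the matrices, then the average of the entries.
score(w) = sum(sum(matrices[i] * w[i] for i in 1:3) / 3) / 4

score([1.25906, 2.06343, 12.3551])  # ≈ 30
score([11.0037, 3.51295, 4.5904])   # ≈ 30
```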


That works!

I think I understand all the individual bits of what you did there well enough to implement it, but the syntax is going to take me a bit longer to unwrap.

The solution doesn’t have to be unique for what I’m doing, but I’ve found that I could herd the GRG solver in the right direction by searching for intermediate solutions. That makes it less likely to “cheat” by zeroing out some of the weights in my implementation. Perhaps something like that will work here too.

In any event: Thanks to both of you again.
