Maximizing a term with barrier in JuMP

I’m sorry the title isn’t clearer; I’ll make up for it with a full description and a minimal reproducible example.
I want to optimize the following model to achieve the optimal set of w_j (which is a vector):

  • X_{i,j}^N is a matrix (known; N stands for “normalized”, i.e. X_{i,j}^N is the normalized version of X_{i,j}, so N is not an operator)
  • w is a vector (unknown; this is what we are solving for)
  • \sigma^N is a vector (known; N stands for “normalized”)
  • \pi_j^N is a vector (known; N stands for “normalized”)
  • \epsilon = 10^{-3}
  • \beta is a floating-point scalar (known)
  • n is the number of rows of X_{i,j}^N (10 in the example below)
  • m is the number of columns of X_{i,j}^N (7 in the example below)
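
The formulation itself did not survive here, so the following is my reconstruction, pieced together from the description above and the SECA paper referenced later in the thread (treat it as a sketch; in particular, the paper defines S_i as linear in w):

$$
\begin{aligned}
\max \quad & \lambda_a - \beta\,(\lambda_b + \lambda_c) \\
\text{s.t.} \quad & \lambda_a \le S_i, \qquad i = 1, \dots, n \\
& S_i = \sum_{j=1}^{m} w_j\, x_{i,j}^N, \qquad i = 1, \dots, n \\
& \lambda_b = \sum_{j=1}^{m} \bigl(w_j\, \sigma_j^N\bigr)^2, \qquad
  \lambda_c = \sum_{j=1}^{m} \bigl(w_j\, \pi_j^N\bigr)^2 \\
& \sum_{j=1}^{m} w_j = 1, \qquad \epsilon \le w_j \le 1
\end{aligned}
$$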

I have a problem with the \lambda_a term: I don’t know how to bring it into the model in JuMP. The rest can be coded as follows:

using Statistics
using JuMP
using Ipopt

# This is the $X_{i,j}^N$
mat = [1.0     0.9925  0.9115  0.8     0.9401  1.0     0.9449;
0.8696  0.8271  0.8462  1.0     0.9181  0.978   1.0;
0.7391  0.8684  0.7615  0.2667  0.8177  0.8241  0.8305;
0.5217  0.7895  0.6654  0.2     0.8051  0.7236  0.8061;
0.6522  0.9135  0.7692  0.2857  0.8396  0.7063  0.8812;
0.6087  0.8346  0.7269  0.3077  0.8722  0.6742  0.8926;
0.913   0.985   0.9346  0.6667  0.9813  0.8641  0.9216;
0.8696  0.9624  1.0     0.5714  0.9632  0.7807  0.9751;
0.8261  1.0     0.8077  0.6667  1.0     0.7946  0.9104;
0.3478  0.8195  0.7462  0.3636  0.8263  0.6642  0.814;]

σⱼ = map(eachcol(mat)) do col
  std(col)
end;

σᴺ::Vector = σⱼ ./ sum(σⱼ);

r::Matrix = cor(mat, dims=1);

πⱼ::Vector = vec(sum(1 .- r, dims=1));

πᴺ::Vector = πⱼ ./ sum(πⱼ);

model = Model(Ipopt.Optimizer)

# 7 is the number of columns in `mat` 
@variables(model, begin
  1e-3 <= w[i=1:7] <= 1   # note: 10^-3 would throw a DomainError for integer bases
end)

λb = sum((w[i]*σᴺ[i])^2 for i=1:7)
λc = sum((w[i]*πᴺ[i])^2 for i=1:7)

β = 0.8

# the objective is quadratic, so `@objective` suffices; `@NLobjective` cannot
# take the precomputed quadratic expressions λb and λc directly
@objective(model, Max, -β*(λb + λc))
@constraint(model, sum(w[i] for i=1:7) == 1)
optimize!(model)
value.(w)

returns

7-element Vector{Float64}:
 0.08741654756858754
 0.12115074422158378
 0.17044973907057392
 0.05020741942136967
 0.22797953668866455
 0.1103315480840218
 0.23246446494519868

julia> sum(value.(w))
1.0

So it’s all okay, except for the \lambda_a term (specifically, the constraints \lambda_a \le S_i for i\in \{1, 2, ..., n\}). How should I code that?

Update

The S_i can be written as the following:

julia> Sᵢ = sum(w[i]*mat[:, i] for i=1:7)
10-element Vector{AffExpr}:
 w[1] + 0.9925 w[2] + 0.9115 w[3] + 0.8 w[4] + 0.9401 w[5] + w[6] + 0.9449 w[7]
 0.8696 w[1] + 0.8271 w[2] + 0.8462 w[3] + w[4] + 0.9181 w[5] + 0.978 w[6] + w[7]
 0.7391 w[1] + 0.8684 w[2] + 0.7615 w[3] + 0.2667 w[4] + 0.8177 w[5] + 0.8241 w[6] + 0.8305 w[7]
 0.5217 w[1] + 0.7895 w[2] + 0.6654 w[3] + 0.2 w[4] + 0.8051 w[5] + 0.7236 w[6] + 0.8061 w[7]
 0.6522 w[1] + 0.9135 w[2] + 0.7692 w[3] + 0.2857 w[4] + 0.8396 w[5] + 0.7063 w[6] + 0.8812 w[7]
 0.6087 w[1] + 0.8346 w[2] + 0.7269 w[3] + 0.3077 w[4] + 0.8722 w[5] + 0.6742 w[6] + 0.8926 w[7]
 0.913 w[1] + 0.985 w[2] + 0.9346 w[3] + 0.6667 w[4] + 0.9813 w[5] + 0.8641 w[6] + 0.9216 w[7]
 0.8696 w[1] + 0.9624 w[2] + w[3] + 0.5714 w[4] + 0.9632 w[5] + 0.7807 w[6] + 0.9751 w[7]
 0.8261 w[1] + w[2] + 0.8077 w[3] + 0.6667 w[4] + w[5] + 0.7946 w[6] + 0.9104 w[7]
 0.3478 w[1] + 0.8195 w[2] + 0.7462 w[3] + 0.3636 w[4] + 0.8263 w[5] + 0.6642 w[6] + 0.814 w[7]

# or even

julia> Sᵢⱼ = w.*mat'
7×10 Matrix{AffExpr}:
 w[1]         0.8696 w[1]  0.7391 w[1]  0.5217 w[1]  …  0.913 w[1]   0.8696 w[1]  0.8261 w[1]  0.3478 w[1]     
 0.9925 w[2]  0.8271 w[2]  0.8684 w[2]  0.7895 w[2]     0.985 w[2]   0.9624 w[2]  w[2]         0.8195 w[2]
 0.9115 w[3]  0.8462 w[3]  0.7615 w[3]  0.6654 w[3]     0.9346 w[3]  w[3]         0.8077 w[3]  0.7462 w[3]     
 0.8 w[4]     w[4]         0.2667 w[4]  0.2 w[4]        0.6667 w[4]  0.5714 w[4]  0.6667 w[4]  0.3636 w[4]     
 0.9401 w[5]  0.9181 w[5]  0.8177 w[5]  0.8051 w[5]     0.9813 w[5]  0.9632 w[5]  w[5]         0.8263 w[5]     
 w[6]         0.978 w[6]   0.8241 w[6]  0.7236 w[6]  …  0.8641 w[6]  0.7807 w[6]  0.7946 w[6]  0.6642 w[6]     
 0.9449 w[7]  w[7]         0.8305 w[7]  0.8061 w[7]     0.9216 w[7]  0.9751 w[7]  0.9104 w[7]  0.814 w[7] 

But still, I don’t know how to incorporate this into the model while respecting \lambda_a \le S_i for i\in \{1, 2, ..., n\}. I think it needs a multi-objective optimizer, since we want to maximize the minimum of S.

I tried an approach. What if we define an auxiliary variable that represents \lambda_a?

Sᵢ = sum(w[i]*mat[:, i] for i=1:7)
# auxi is literally λa
@variable(model, auxi)
@objective(model, Max, auxi - β*(λb + λc))
@constraint(model, auxi .<= Sᵢ)
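
If I’m reading it right, this is the standard max–min (epigraph) reformulation, so no multi-objective machinery should be needed:

$$
\max_{w} \; \min_{i} S_i(w)
\quad \Longleftrightarrow \quad
\max_{w,\,\lambda_a} \; \lambda_a
\quad \text{s.t.} \quad \lambda_a \le S_i(w), \; i = 1, \dots, n
$$

At the optimum, \lambda_a equals \min_i S_i(w): if it were strictly smaller, \lambda_a could be increased without violating any constraint.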

This works:

optimize!(model)
sum(value.(w))
# 1.0

But I’m not sure if it’s the correct approach. Can anyone verify it?


I think you’re close. Here is how I would write the problem shown in your formulation:

using Statistics
using JuMP
using Ipopt

## Data:

# This is the $X_{i,j}^N$
X = [
1.0     0.9925  0.9115  0.8     0.9401  1.0     0.9449;
0.8696  0.8271  0.8462  1.0     0.9181  0.978   1.0;
0.7391  0.8684  0.7615  0.2667  0.8177  0.8241  0.8305;
0.5217  0.7895  0.6654  0.2     0.8051  0.7236  0.8061;
0.6522  0.9135  0.7692  0.2857  0.8396  0.7063  0.8812;
0.6087  0.8346  0.7269  0.3077  0.8722  0.6742  0.8926;
0.913   0.985   0.9346  0.6667  0.9813  0.8641  0.9216;
0.8696  0.9624  1.0     0.5714  0.9632  0.7807  0.9751;
0.8261  1.0     0.8077  0.6667  1.0     0.7946  0.9104;
0.3478  0.8195  0.7462  0.3636  0.8263  0.6642  0.814;
]

σ = map(eachcol(X)) do col
  std(col)
end;

σᴺ = σ ./ sum(σ);

r = cor(X, dims=1);

πⱼ = vec(sum(1 .- r, dims=1));  # named πⱼ to avoid clashing with Base.π

πᴺ = πⱼ ./ sum(πⱼ);

ϵ = 1e-3

β = 0.8

n, m = size(X)

## Model:

model = Model(Ipopt.Optimizer)

# m is the number of columns in `X` 
@variable(model, ϵ <= w[j=1:m] <= 1)

@variable(model, λa)
@variable(model, λb >= 0)
@variable(model, λc >= 0)

@variable(model, S[i=1:n])

@objective(model, Max, λa - β*(λb + λc))

@constraint(model,[i=1:n], λa <= S[i])

@constraint(model,[i=1:n],
    S[i] == sum((w[j]*X[i,j])^2 for j=1:m))

@constraint(model, 
    λb == sum((w[j]*σᴺ[j])^2 for j=1:m))
    
@constraint(model,
    λc == sum((w[j]*πᴺ[j])^2 for j=1:m))

@constraint(model, sum(w[j] for j=1:m) == 1)

## Solve:

optimize!(model)
value.(w)

I get

julia> value.(w)
7-element Vector{Float64}:
 0.0009999917810873554
 0.9940000493154578
 0.0009999917807129526
 0.000999991781468847
 0.0009999917803320373
 0.0009999917806158983
 0.0009999917803249852

since the w have the lower bound ϵ = 1e-3. It might be better to just use a lower bound of 0 and work with the solver tolerances.
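
A minimal sketch of that change, assuming the model above has already been built (`set_lower_bound` is JuMP’s per-variable bound setter; untested here since it needs Ipopt):

```julia
# Relax the ϵ lower bound on the weights to 0 and re-solve.
set_lower_bound.(w, 0.0)
optimize!(model)
value.(w)
```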

By the way, what is the source/reference of the formulation?


Hi, thank you so much. Unfortunately, I can’t confirm your approach yet because of the constraints you defined for λb, λc, and S.
DOI to the paper: Simultaneous Evaluation of Criteria and Alternatives (SECA) for Multi-Criteria Decision-Making


I’m not sure what you mean. The constraints for these are defined in 8.3-8.5.

Can you run the code I posted in full?

Thank you for your response 💗. I meant that, for example, you put bounds on λc above, but there is no such constraint in the model. Yes, I ran your code successfully; the result was the same as yours.


I see. I added the >= 0 bounds since the right-hand side is a sum of squares; they are not needed and can be removed without changing the problem.


Thank you so much for your help and kind attitude. I wrote a refactored version inspired by how you formulated the constraints for λb and λc.
