How to get evenly distributed values from Highs

Hello Everyone,
I am working on a linear programming problem and want to solve for the variables subject to my constraints.

The issue is that while HiGHS gives me feasible values for all my variables, these values tend to be extreme, i.e. it returns the lower bound or the upper bound as the solution. In my case, out of 5273 variables, I get 3938 instances of the lower bound and 207 instances of the upper bound. I would like the values to be as evenly distributed as possible: in a range of [1, 10000], instead of values like (1, 10000) I would like to get values like (2343, 6542, etc.). The sum of the variables is conserved, and changes to the objective function can be made to achieve this outcome.
Below is the model description

JuMP Model
Feasibility problem with:
Variables: 5273
AffExpr-in-MathOptInterface.EqualTo{Float64}: 46 constraints
AffExpr-in-MathOptInterface.Interval{Float64}: 400 constraints
VariableRef-in-MathOptInterface.GreaterThan{Float64}: 5273 constraints
VariableRef-in-MathOptInterface.LessThan{Float64}: 5273 constraints
VariableRef-in-MathOptInterface.Integer: 5273 constraints
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: HiGHS

For each variable I have a consistent lower bound and upper bound say [1, 10000].
I have constraints on the lines of -

y[18] + y[29] + y[54] + y[117] + y[202] + y[425] + y[431] + y[455] + y[543] + y[1096] + y[1105] + y[1139] + y[1147] + y[2350] == 100000


17737.19 y[24] + 2592.54 y[25] + 3024.63 y[26] + 7592.43 y[27] + 9382.52 y[28] in [6.2670194e6, 6.2826194e6]

And my objective function is

Maximize (17737.19 y[1] + 2592.54 y[2] + 3024.63 y[3] … + 7592.43 y[5273])

This is just to optimize the solution a bit more; our main goal is just to get a feasible solution. So, if required, this can be changed.

Make sure that you are seeking a result that is theoretically possible. Whenever you add sum constraints or try to minimize the L1-norm of a vector variable, you tend to get sparse solutions. This is a theoretical result.

This result is widely used in statistics to “kill” the contribution of not-so-important variables, and only retain the most important ones. LASSO regression is an example.

HiGHS uses a simplex algorithm by default (I believe), and simplex returns so-called “basic” solutions where, by design, several variables will be at their lower or upper bound.
In general, if you have m equality constraints and n variables, a basic simplex solution will have n - m variables at their lower or upper bound.

To avoid this behavior in the returned solution, you’ll need to break that intrinsic property of simplex.
I know of two ways:

  • use an interior-point algorithm and disable crossover. This will give better-balanced solutions if multiple optima exist. I do not know whether HiGHS supports doing that, though
  • add a (small) quadratic term to the objective that penalizes the sum of the squared variables. This tends to produce solutions like what you’re looking for, but it makes the problem quadratic. HiGHS should support it, but it may be slower
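A minimal sketch of the first option, assuming the HiGHS option names `solver = "ipm"` and `run_crossover = "off"` from the HiGHS options documentation (note this applies to the continuous case; the integer restriction would still go through branch-and-bound):

```julia
using JuMP
using HiGHS

# Toy feasibility problem: sum(x) == 10 with x in [0, 10].
# Simplex returns a vertex (e.g. one variable at 10, the rest at 0);
# interior point without crossover stops at a more central point.
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x[1:5] <= 10)
@constraint(model, sum(x) == 10)

# Select the interior-point solver and keep crossover off, so the
# solution is not pushed back out to a vertex afterwards:
set_attribute(model, "solver", "ipm")
set_attribute(model, "run_crossover", "off")

optimize!(model)
value.(x)  # values tend to land away from the bounds rather than at a vertex
```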

I tried this by minimizing the sum of squares of the variables, but it gives an error:

Running HiGHS 1.7.0 (git hash: 50670fd4c): Copyright (c) 2024 HiGHS under MIT licence terms
Coefficient ranges:
Matrix [1e+00, 2e+04]
Cost [0e+00, 0e+00]
Bound [1e+00, 1e+04]
RHS [7e+03, 8e+07]
ERROR: Cannot solve MIQP problems with HiGHS

This might be because all of my constraints are linear.

HiGHS cannot solve a mixed-integer problem with a quadratic objective. Use a solver like Gurobi.jl instead.
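If switching solvers is not an option, one possible workaround (a sketch, not something from this thread): HiGHS can solve the continuous QP, so you could relax integrality with JuMP's `relax_integrality`, solve the quadratic relaxation, and round. Whether rounding preserves feasibility depends on your constraints, so treat this as a heuristic.

```julia
using JuMP
using HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 1 <= z[1:10] <= 100, Int)
@constraint(model, sum(z) == 500)
# Quadratic "spreading" objective: minimizing the sum of squares
# subject to a fixed sum pushes the variables toward equal values.
@objective(model, Min, sum(z .^ 2))

# HiGHS rejects integer + quadratic (MIQP), but it can solve the
# continuous QP, so drop integrality, solve, then round:
undo = relax_integrality(model)
optimize!(model)
answer = round.(Int, value.(z))  # symmetric optimum: 50 = 500/10 each
```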

Can you provide a reproducible example for your problem? Solvers will find a solution. If you want a particular structure, like evenly spaced x values, then you should encode this as constraints or somehow specify it via the objective. If multiple solutions are possible, solvers provide very little control over which solution will be returned.

This is not a one-to-one version of my problem, but it shows the behavior well:

using JuMP
using HiGHS

lb_t = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
ub_t = [100, 100, 100, 100, 100, 100, 100, 100, 100, 100]
len_t = length(lb_t)

M_t = Model()
set_optimizer(M_t, HiGHS.Optimizer)

@variable(M_t, lb_t[i] <= z[i=1:len_t] <= ub_t[i], Int)
@constraint(M_t, sum(z) <= 500)

@objective(M_t, Max, sum(z))


optimize!(M_t)

answer = Float64[]
for i in 1:len_t
    push!(answer, value(z[i]))
end

@show answer;

Below is the output

Running HiGHS 1.7.0 (git hash: 50670fd4c): Copyright (c) 2024 HiGHS under MIT licence terms
Coefficient ranges:
Matrix [1e+00, 1e+00]
Cost [1e+00, 1e+00]
Bound [1e+00, 1e+02]
RHS [5e+02, 5e+02]
Presolving model
1 rows, 10 cols, 10 nonzeros 0s
0 rows, 0 cols, 0 nonzeros 0s
0 rows, 0 cols, 0 nonzeros 0s
Presolve: Optimal

Solving report
Status Optimal
Primal bound 500
Dual bound 500
Gap 0% (tolerance: 0.01%)
Solution status feasible
500 (objective)
0 (bound viol.)
0 (int. viol.)
0 (row viol.)
Timing 0.00 (total)
0.00 (presolve)
0.00 (postsolve)
Nodes 0
LP iterations 0 (total)
0 (strong br.)
0 (separation)
0 (heuristics)
answer = Any[1.0, 1.0, 1.0, 1.0, 1.0, 95.0, 100.0, 100.0, 100.0, 100.0]

Sure. So to get a different solution you either need to add constraints, or you need to use a different objective function.

What type of solution do you want? One with “evenly” spaced x? Maximize the minimum distance between two points? Away from the bounds, so every value at 50?

Yes, I want my variables to be away from the bounds. Say I have variables x[1:20] with bounds [0.01, 10] and, along with other relevant constraints, one of the constraints is sum(x) == 100. I would like my variables to be close to 5.
One way I tried to achieve this was by manually increasing the lower bound and decreasing the upper bound for all variables, but this didn’t give me as good a solution.
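One single-objective way to encode “away from the bounds” (a sketch using the numbers above; the margin variable `d` is something I am introducing for illustration): maximize the smallest distance d that every variable keeps from both of its bounds.

```julia
using JuMP
using HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0.01 <= x[1:20] <= 10)
@variable(model, d >= 0)  # common margin from both bounds

# Every x[i] must stay at least d above its lower bound
# and at least d below its upper bound:
@constraint(model, x .>= 0.01 .+ d)
@constraint(model, x .<= 10 .- d)
@constraint(model, sum(x) == 100)

# Pushing d as high as possible forces the x away from the bounds:
@objective(model, Max, d)
optimize!(model)
value.(x)  # all 5.0 here: the point farthest from both bounds
```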

You could do a multi-objective approach:

julia> using JuMP

julia> import HiGHS

julia> import MultiObjectiveAlgorithms as MOA

julia> lb = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1];

julia> ub = [100, 100, 100, 100, 100, 100, 100, 100, 100, 100];

julia> model = Model(() -> MOA.Optimizer(HiGHS.Optimizer));

julia> set_silent(model)

julia> set_attribute(model, MOA.Algorithm(), MOA.Hierarchical())

julia> @variables(model, begin
           lb[i] <= z[i in 1:length(lb)] <= ub[i], Int
           d >= 0
       end)

julia> @constraints(model, begin
           z .>= lb .+ d
           z .<= ub .- d
           sum(z) <= 500
       end)

julia> @objective(model, Max, [sum(z), d])
2-element Vector{AffExpr}:
 z[1] + z[2] + z[3] + z[4] + z[5] + z[6] + z[7] + z[8] + z[9] + z[10]
 d

julia> optimize!(model)

julia> answer = value.(z)
10-element Vector{Float64}:

julia> value(d)

Thanks for the logic @odow. This helped me a lot :grinning:
