Thanks, but for this problem Gurobi is much faster than HiGHS, yet its solutions are far from optimal (compared to the solutions HiGHS finds for the single-objective problem). Can you guess why this happens?
Another point I would like to ask about: if HiGHS does not support MIQP, what are the results reported after running the code (that is, what problem did it actually try to solve)?
but for this problem Gurobi is much faster than HiGHS, yet its solutions are far from optimal (compared to the solutions HiGHS finds for the single-objective problem)
To reiterate, HiGHS does not support MIQP. It will not find a solution.
The Gurobi solution will be optimal. If HiGHS is "better" it is because that solution isn't feasible.
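One way to check which solver's point is actually feasible is JuMP's primal_feasibility_report. A minimal sketch, assuming the model in question has a loaded result:

point = Dict(v => value(v) for v in all_variables(model))
report = primal_feasibility_report(model, point; atol = 1e-6)
# An empty report means the point satisfies every constraint to within atol;
# otherwise `report` maps each violated constraint to its violation.
isempty(report) || println(report)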
if HiGHS does not support MIQP, what are the results reported after running the code
Nothing. Per the log:
* Status
  Result count       : 0
  Termination status : OTHER_ERROR

  Message from the solver:
  "Solve complete. Found 0 solution(s)"
Thanks again. I expressed myself poorly! I added the quadratic term as a second objective to my model. However, the speed of Gurobi and the better results of HiGHS (that is, the non-optimal results from Gurobi) were for my single-objective mixed-integer linear program!
I compared the results with those generated by simulations.
using JuMP
using HiGHS

A = 5 * rand(1, 365)   # random stand-in for the real data
B = rand(1, 365)
T = length(A)
M = 10^6               # big-M constant

model = Model();
set_optimizer(model, HiGHS.Optimizer); # test with Gurobi
@variables(model, begin
    0 <= S                      # capacity; no upper bound
    1 <= R <= 2
    y1[1:T], Bin                # selects which term of min(A[t], V[t-1]) binds
    y2[1:T], Bin                # selects which term of the storage balance binds
    0 <= Y[t = 1:T] <= A[t]
    0 <= O[1:T]
    0 <= V[0:T]                 # volume, indexed from t = 0
end);
fix(V[0], 0; force = true)
@expressions(model, begin
    Q[t in 1:T], B[t] * R
end)
@constraints(model, begin
    # Y[t] = min(A[t], V[t-1]), linearized with big-M and y1[t]
    [t in 1:T], Y[t] <= V[t-1]
    [t in 1:T], Y[t] >= A[t] - M * (1 - y1[t])
    [t in 1:T], Y[t] >= V[t-1] - M * y1[t]
    # V[t] = min(V[t-1] + Q[t] - O[t] - Y[t], S - Y[t]), linearized with y2[t]
    [t in 1:T], V[t] <= V[t-1] + Q[t] - O[t] - Y[t]
    [t in 1:T], V[t] <= S - Y[t]
    [t in 1:T], V[t] >= V[t-1] + Q[t] - O[t] - Y[t] - M * (1 - y2[t])
    [t in 1:T], V[t] >= S - Y[t] - M * y2[t]
end);
@objective(model, Max, sum(Y[t] for t in 1:T) / sum(A[t] for t in 1:T))
optimize!(model)
solution_summary(model)
println(objective_value(model))
println("S = ", value(S), "\n", "R = ", value(R))
but just with my real data. S is expected to take a smaller value, consistent with the simulation results, while Gurobi's result for this variable (with the real data) is far from what is expected!
My problem with Gurobi concerns the value of the S variable, not the value of the objective function.
If they have the same objective function value, then I assume there are multiple possible values that S can take. Why does it need to be small? This just looks like two equivalent solutions to your problem.
As I learned in optimization courses, if a variable has no upper bound, then after solving the problem to optimality that variable should take the lowest value that keeps the problem feasible. My point was that the HiGHS solution is consistent with this, but Gurobi's is not.
@odow is correct, and it's what I suggested in reply to your email. Your model has a non-unique solution and, without introducing a tie-breaking term into the objective as Oscar suggested, different optimization solvers will give you different optimal solutions. Indeed, if you change the order in which constraints are defined, the same solver may give a different solution, since the numerics will change subtly and algorithmic ties will be broken differently.
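For concreteness, the tie-breaking suggestion amounts to something like the following sketch. The weight 1e-5 is an arbitrary small value chosen for illustration; it must stay well below the scale of the main objective so it only breaks ties:

# Same objective as before, minus a tiny penalty so that, among all
# optimal solutions, the solver prefers the one with the smallest S.
@objective(model, Max,
    sum(Y[t] for t in 1:T) / sum(A[t] for t in 1:T) - 1e-5 * S)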
Thanks for the response.
Two follow-up questions (not related to Julia):
How can one mathematically prove that the proposed model does not have a unique solution for S, and that, for example, adding a regularization term -0.00001S to the objective function leads to a unique solution for S?
According to the model, do dynamic variables such as Y[t] have unique solutions?
Since my original formulation does not have a unique solution for S, I want to justify (with a proof) why I added a regularization term to the objective (as support for my optimization results), given that S is a continuous variable.
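One practical way to support that argument is a numerical demonstration rather than a formal proof: fix the objective at its optimal value and then minimize S over the solutions that remain. A sketch (the 1e-8 tolerance is my choice, and the model is re-solved a second time):

# 1. Solve the original model; record the optimum and the S it returned.
optimize!(model)
z_star = objective_value(model)
S_solver = value(S)
# 2. Require the original objective to stay (essentially) optimal.
@constraint(model, sum(Y[t] for t in 1:T) / sum(A[t] for t in 1:T) >= z_star - 1e-8)
# 3. Find the smallest S among the optimal solutions.
@objective(model, Min, S)
optimize!(model)
println("solver's S = ", S_solver, ", smallest optimal S = ", value(S))
# A strictly smaller value here demonstrates that S is not unique at the
# optimum, which is exactly the slack the -0.00001S regularization removes.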