Suppose the following function builds and optimizes a model for a given pair of integer preference vectors:
using JuMP
using Ipopt

function bestworst(pref_to_best::Vector{Int}, pref_to_worst::Vector{Int})
    n = length(pref_to_best)
    # The best criterion has the smallest entry in pref_to_best,
    # and the worst criterion has the smallest entry in pref_to_worst.
    best_index = argmin(pref_to_best)
    worst_index = argmin(pref_to_worst)
    model = Model(Ipopt.Optimizer)
    @variable(model, ε >= 0)
    @variable(model, w[1:n] >= 0)
    @objective(model, Min, ε)
    indices = collect(1:n)
    bestindices = indices[indices .!= best_index]
    worstindices = indices[indices .!= worst_index]
    # Each weight ratio must stay within ε of the stated preference.
    for i in bestindices
        @constraint(model, abs(w[best_index] / w[i] - pref_to_best[i]) <= ε)
    end
    for i in worstindices
        @constraint(model, abs(w[i] / w[worst_index] - pref_to_worst[i]) <= ε)
    end
    # The weights are normalized to sum to one.
    @constraint(model, sum(w) == 1)
    optimize!(model)
    result = (value(ε), value.(w))
    return result
end
I am calling the function with these values:
pref_to_best = [8, 2, 1]
pref_to_worst = [1, 5, 8]
result = bestworst(pref_to_best, pref_to_worst)
On Ubuntu and macOS, the optimal ε is 0.26 and the w vector is [0.071, 0.338, 0.589], as expected when compared against a reference paper. However, on Windows, the resulting ε is 0.72 and the w vector is [-1.0000000050256285e-8, 0.6114327292759807, -9.99694897769469e-9].
I only have machines with Ubuntu installed, so I obtained the Windows results above in GitHub CI.
What could be the reason for getting different (and wrong) results with Ipopt on Windows?
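One thing I have not ruled out yet is whether Ipopt even reports success on Windows. A minimal check I could add inside bestworst, using the standard JuMP status queries:

# Right after optimize!(model) inside bestworst:
@show termination_status(model)  # e.g. LOCALLY_SOLVED vs. LOCALLY_INFEASIBLE
@show primal_status(model)       # whether the returned point is a feasible solution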
Note that this is an initial implementation and includes some redundant constraints; I will reduce the model in the final implementation (a sketch of one possible rewrite follows).
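For example, each abs constraint could be replaced by the equivalent pair of smooth inequalities (|x| <= ε if and only if -ε <= x <= ε), which removes the non-differentiable abs from the model; only the two constraint loops inside bestworst would change, roughly:

# Equivalent, abs-free form of the two constraint loops:
for i in bestindices
    @constraint(model, w[best_index] / w[i] - pref_to_best[i] <= ε)
    @constraint(model, pref_to_best[i] - w[best_index] / w[i] <= ε)
end
for i in worstindices
    @constraint(model, w[i] / w[worst_index] - pref_to_worst[i] <= ε)
    @constraint(model, pref_to_worst[i] - w[i] / w[worst_index] <= ε)
end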
Thank you in advance.
Edit: The constraints were initially defined using @constraint; after the failure, I changed them to @NLconstraint, but it seems that is not necessary now.
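For reference, the @NLconstraint variant changed only the constraint macro inside bestworst, roughly:

# The same loops written with the legacy nonlinear interface:
for i in bestindices
    @NLconstraint(model, abs(w[best_index] / w[i] - pref_to_best[i]) <= ε)
end
for i in worstindices
    @NLconstraint(model, abs(w[i] / w[worst_index] - pref_to_worst[i]) <= ε)
end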