Ipopt produces wrong results on Windows

The following function optimizes a model for a given pair of integer vectors:

function bestworst(pref_to_best::Vector{Int}, pref_to_worst::Vector{Int})
    n = length(pref_to_best)
    best_index = argmin(pref_to_best)
    worst_index = argmin(pref_to_worst)
    model = Model(Ipopt.Optimizer)
    @variable(model, ε >= 0)
    @variable(model, w[1:n] >= 0)
    @objective(model, Min, ε)
    indices = collect(1:n)
    bestindices = indices[indices.!=best_index]
    worstindices = indices[indices.!=worst_index]
    for i in bestindices
        @constraint(model, abs(w[best_index] / w[i] - pref_to_best[i]) <= ε)
    end
    for i in worstindices
        @constraint(model, abs(w[i] / w[worst_index] - pref_to_worst[i]) <= ε)
    end
    @constraint(model, sum(w) == 1)
    optimize!(model)
    result = (value(ε), value.(w))
    return result
end

I am calling the function with these particular values:

pref_to_best = [8, 2, 1]
pref_to_worst = [1, 5, 8]
result = bestworst(pref_to_best, pref_to_worst)

On Ubuntu and macOS, the optimal ε is 0.26 and the w vector is [0.071, 0.338, 0.589], as expected when compared to a reference paper. However, on Windows, the resulting ε is 0.72 and the w vector is [-1.0000000050256285e-8, 0.6114327292759807, -9.99694897769469e-9].

I only have machines with Ubuntu installed, and I obtained the Windows results from the GitHub CI:

What is the reason for getting different and wrong results with Ipopt on Windows?

Note that this is an initial implementation and it includes some redundant constraints. I will reduce the model in the final implementation.

Thank you in advance.

Edit: The constraints were initially defined using @constraint. After the failure, I changed them to @NLconstraint, but it seems that is not necessary now.

I have a few comments:

  • You must always, always check whether the solver terminated with a solution before querying value. Use is_solved_and_feasible(model) for most common cases, and check termination_status(model) or solution_summary(model) for details if it does not solve correctly (see the status-checking sketch after the code below).
  • Ipopt assumes that the functions are twice differentiable. Since we provide only callback oracles, it cannot “see” that you have used abs, so the algorithm will happily step back and forth across the non-differentiable point. I don’t have Windows to confirm, but I assume you are hitting an iteration limit and Ipopt is returning whatever the last iterate was.
  • You have 1 / w[i], which is not defined at your starting point of w[i] = 0. Set a non-default starting point (e.g., 1 / n):

Here’s how I would write your code (I didn’t run it to test, so beware of typos):

function bestworst(pref_to_best::Vector{Int}, pref_to_worst::Vector{Int})
    n = length(pref_to_best)
    best_index = argmin(pref_to_best)
    worst_index = argmin(pref_to_worst)
    indices = collect(1:n)
    bestindices = indices[indices.!=best_index]
    worstindices = indices[indices.!=worst_index]
    model = Model(Ipopt.Optimizer)
    @variable(model, ε >= 0)
    @variable(model, w[1:n] >= 0, start = 1 / n)
    @objective(model, Min, ε)
    for i in bestindices
        @constraint(model, -ε <= w[best_index] / w[i] - pref_to_best[i] <= ε)
    end
    for i in worstindices
        @constraint(model, -ε <= w[i] / w[worst_index] - pref_to_worst[i] <= ε)
    end
    @constraint(model, sum(w) == 1)
    optimize!(model)
    @assert is_solved_and_feasible(model)
    return value(ε), value.(w)
end
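
If that assertion fails and you want to see why, you can query the statuses directly. A minimal sketch using standard JuMP status functions (model is the model built above):

if !is_solved_and_feasible(model)
    # Print the MathOptInterface termination code (e.g., LOCALLY_SOLVED,
    # ITERATION_LIMIT) and a human-readable summary of the solve.
    println(termination_status(model))
    println(solution_summary(model))
end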

Hi @odow

Thank you for your detailed review and answer. Setting the initial values of the decision variables inside the feasible region mostly solves the problem.

Additionally, I had to change the constraints

@constraint(model, abs(w[i] / w[worst_index] - pref_to_worst[i]) <= ε)

to

@constraint(model, w[i] / w[worst_index] - pref_to_worst[i] <= ε)
@constraint(model, -ε <= w[i] / w[worst_index] - pref_to_worst[i])

because

@constraint(model, -ε <= w[i] / w[worst_index] - pref_to_worst[i] <= ε)

is not allowed: JuMP supports two-sided constraints only when both bounds are constants, so a variable like ε cannot appear as a bound.
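
For reference, a minimal sketch of the distinction (the 0.5 bound and the w[1] - w[2] expression are arbitrary placeholders):

@constraint(model, -0.5 <= w[1] - w[2] <= 0.5)  # constant bounds: allowed
@constraint(model, w[1] - w[2] <= ε)            # variable bound: split into
@constraint(model, -ε <= w[1] - w[2])           # two one-sided constraints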

By the way, it still puzzles me that the initial implementation worked successfully on Ubuntu and macOS but failed on Windows. In the end, though, it is no longer an issue in my case.

Thank you!

Do you have a log of the Windows failure? What happens? What is the termination status?

The only information I have is the CI output of GitHub Actions:

Perhaps I didn’t understand. Did the new version fail? Or is your question still why the old version failed only on Windows?

In which case, the answer is luck/numerical differences. The problem violates the assumptions of Ipopt, so it isn’t guaranteed to converge to a local minimum.


No, the problem is solved by introducing a good, feasible start for the decision variables.

Yes, the previous version solved on every run on Ubuntu and macOS but not on Windows. I understand it comes down to luck, but it is still unclear why it consistently solved on Ubuntu and macOS and consistently failed on Windows. Maybe it is because of a predefined seed for random numbers?

The new feature works like a charm thanks to your comments:
