New to Julia -- EAGO Nonlinear model works when finding Min but not Max?

I’m not sure whether a question like this fits better here or on the EAGO GitHub. I’m very new to Julia (I started using it three days ago), and I’m also not sure whether this is EAGO-specific.
In short, I’m trying to learn the JuMP optimizer syntax. When I set up a simple test optimization, the minimum of sin(x) or cos(x) for x in [-2, 2], the optimizer works for both (and for their negatives), but not for the maximum. For the maximum, I get a KeyError buried in a long stack trace. The program is simple:

using JuMP
using EAGO

model = Model(EAGO.Optimizer)
@variable(model, x)
@NLconstraint(model, -2 <= x <= 2)
@NLobjective(model, Max, cos(x))
println("model = ", model)
optimize!(model)

I’m using Julia version 1.8.0, JuMP version 0.21.10, and EAGO version 0.6.1. Has anyone else encountered something like this with EAGO or other nonlinear solvers? What’s going on? Should I just negate anything I want the maximum of?
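(By negating, I mean replacing the objective with something like the following, which does run for me:)

using JuMP
using EAGO

# Workaround: minimize -cos(x) instead of maximizing cos(x), then flip the sign of the result.
model = Model(EAGO.Optimizer)
@variable(model, x)
@NLconstraint(model, -2 <= x <= 2)
@NLobjective(model, Min, -cos(x))
optimize!(model)
max_of_cos = -objective_value(model)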
The error text is below. Note that for different functions, a different key number is quoted in the error.

ERROR: LoadError: KeyError: key 2 not found
Stacktrace:
  [1] getindex
    @ ./dict.jl:498 [inlined]
  [2] forward_univariate_tiepnt_2!(k::Int64, op::Int64, child_idx::Int64, setstorage::Vector{MC{1, NS}}, x::Vector{Float64}, lbd::Vector{Float64}, ubd::Vector{Float64}, subgrad_tol::Float64, sparsity::Vector{Int64}, tpdict::Dict{Int64, NTuple{4, Int64}}, tp1storage::Vector{Float64}, tp2storage::Vector{Float64}, tp3storage::Vector{Float64}, tp4storage::Vector{Float64}, is_post::Bool, is_intersect::Bool, is_first_eval::Bool, interval_intersect::Bool, ctx::Cassette.Context{nametype(GuardCtx), EAGO.GuardTracker, Nothing, Cassette.var"##PassType#322", Nothing, Nothing})
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/functions/nonlinear/forward_pass.jl:758
  [3] forward_pass_kernel!(nd::Vector{JuMP._Derivatives.NodeData}, adj::SparseArrays.SparseMatrixCSC{Bool, Int64}, x::Vector{Float64}, lbd::Vector{Float64}, ubd::Vector{Float64}, sparsity::Vector{Int64}, setstorage::Vector{MC{1, NS}}, numberstorage::Vector{Float64}, numvalued::Vector{Bool}, tpdict::Dict{Int64, NTuple{4, Int64}}, tp1storage::Vector{Float64}, tp2storage::Vector{Float64}, tp3storage::Vector{Float64}, tp4storage::Vector{Float64}, user_operators::JuMP._Derivatives.UserOperatorRegistry, subexpressions::Vector{EAGO.NonlinearExpression}, func_sparsity::Vector{Int64}, reverse_sparsity::Vector{Int64}, num_mv_buffer::Vector{Float64}, ctx::Cassette.Context{nametype(GuardCtx), EAGO.GuardTracker, Nothing, Cassette.var"##PassType#322", Nothing, Nothing}, is_post::Bool, is_intersect::Bool, is_first_eval::Bool, interval_intersect::Bool, cv_buffer::Vector{Float64}, cc_buffer::Vector{Float64}, treat_x_as_number::Vector{Bool}, subgrad_tol::Float64)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/functions/nonlinear/forward_pass.jl:1062
  [4] forward_pass!(evaluator::EAGO.Evaluator, d::EAGO.NonlinearExpression{MC{1, NS}})
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/functions/nonlinear/forward_pass.jl:1098
  [5] forward_pass!(evaluator::EAGO.Evaluator, d::EAGO.BufferedNonlinearFunction{MC{1, NS}})
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/functions/nonlinear/forward_pass.jl:1115
  [6] set_constraint_propagation_fbbt!(m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/domain_reduction.jl:677
  [7] preprocess!(t::EAGO.DefaultExt, m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/optimize/optimize_nonconvex.jl:595
  [8] preprocess!(m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/optimize/optimize_nonconvex.jl:1050
  [9] global_solve!(m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/optimize/optimize_nonconvex.jl:1085
 [10] optimize!(#unused#::Val{EAGO.MINCVX}, m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/optimize/optimize_nonconvex.jl:1162
 [11] optimize!(m::Optimizer)
    @ EAGO ~/.julia/packages/EAGO/5sz05/src/eago_optimizer/optimize/optimize.jl:37
 [12] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/YDdD3/src/Utilities/cachingoptimizer.jl:252
 [13] optimize!(b::MathOptInterface.Bridges.LazyBridgeOptimizer{MathOptInterface.Utilities.CachingOptimizer{Optimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}}})
    @ MathOptInterface.Bridges ~/.julia/packages/MathOptInterface/YDdD3/src/Bridges/bridge_optimizer.jl:319
 [14] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/YDdD3/src/Utilities/cachingoptimizer.jl:252
 [15] optimize!(model::Model, optimizer_factory::Nothing; bridge_constraints::Bool, ignore_optimize_hook::Bool, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/packages/JuMP/klrjG/src/optimizer_interface.jl:185
 [16] optimize! (repeats 2 times)
    @ ~/.julia/packages/JuMP/klrjG/src/optimizer_interface.jl:146 [inlined]
 [17] top-level scope
    @ ~/Documents/Julia/test.jl:12

This looks like a bug in EAGO.

You might like to try Juniper and the latest version of JuMP: GitHub - lanl-ansi/Juniper.jl: A JuMP-based Nonlinear Integer Program Solver

But if you don’t have integer variables, you should first try Ipopt: GitHub - jump-dev/Ipopt.jl: Julia interface to the Ipopt nonlinear solver
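As a rough sketch (untested, and with the bounds moved onto the variable instead of an NL constraint), the same toy problem with Ipopt would look something like:

using JuMP
using Ipopt

# Maximize cos(x) over [-2, 2]; Ipopt handles the Max sense directly.
model = Model(Ipopt.Optimizer)
@variable(model, -2 <= x <= 2)
@NLobjective(model, Max, cos(x))
optimize!(model)
println("x = ", value(x), ", objective = ", objective_value(model))

Keep in mind that Ipopt is a local solver, so on nonconvex problems the result can depend on the starting point.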

The JuMP documentation has a bunch of nonlinear tutorials to get you started: Introduction · JuMP

Thanks!


No problem :smile:

For future readers, there are a bunch of open issues to update EAGO to the latest version of JuMP: Add support for MOI 0.10 and JuMP 0.22 · Issue #100 · PSORLab/EAGO.jl · GitHub, Upcoming refactoring of JuMP's nonlinear API · Issue #104 · PSORLab/EAGO.jl · GitHub

Hello again, I was wondering if you could help me again here: I’ve been able to get Ipopt doing roughly what I need it to do, but it’s a local solver. Graphing Calculator
To make a long story short, I’m trying to minimize the maximum value of a function’s range over an interval ([0, 1] in particular). To do that, I first use Ipopt to find the maximum value for a specific set of parameters, as in the Desmos link above, and then use Ipopt again to minimize that maximum. Unfortunately, Ipopt ends up at the smaller corners (local maxima) more often than it finds the actual global maximum. Are there better methods of finding the global maximum than seeding at many different starting values? My goal is to end up with a generalized function for any number of input values in [0, 1], so plain seeding will get far too slow as the number of inputs grows.
I’m fine with Ipopt finding local minima on the second (outer) run, but I need the global maximum from the first (inner) step for this to work.

The code is below.


using JuMP
using Ipopt


function find_diameter(n)
    if n%2 == 0
        return 2.0
    end
    x = (n+n%2)/(2*n)
    x_b = ceil(x*n) # x between: x is this and the next vertex
    x_a = (x*n)%1 # x along: how far is x between them (%)
    return sqrt((1-cos((2*π*x_b-2*π)/n)*(1-x_a)-cos((2*π*x_b)/n)*x_a)^2 + (sin((2*π*x_b-2*π)/n)*(1-x_a)+sin((2*π*x_b)/n)*x_a)^2)
end


# Smooth approximation of a ramp from stp to enp with the given slope;
# x/sqrt(x^2 + eps^2) acts as a smooth stand-in for sign(x).
function ramp_approx(x, stp, enp, slope, eps=1e-14)
    sta = x-stp
    fin = x-enp
    return slope*((sta/2)*(1+(sta/sqrt(sta^2+eps^2)))-((fin/2)*(1+(fin/sqrt(fin^2+eps^2)))))
end

# Parameterization (x(t), y(t)) of the curve for t in [0, 1], built up from ramp_approx segments.
function param(t, n)
    toReturn_x = 0
    toReturn_y = 0
    for k = 1:1:n
        toReturn_x += ramp_approx(t, (k-1)/n, k/n, n*(cos(π*(n-2*k)/n)-cos(π*(n-2*k+2)/n)))
        toReturn_y += ramp_approx(t, (k-1)/n, k/n, n*(sin(π*(n-2*k)/n)-sin(π*(n-2*k+2)/n)))
    end
    return (toReturn_x, toReturn_y)
end

# Inner problem: for fixed (w, x, y), maximize the average distance over z in [0, 1] with Ipopt.
function find_max(w, x, y)
    model = Model(Ipopt.Optimizer)
    set_optimizer_attribute(model, "tol", 1e-14)
    #set_optimizer_attribute(model, "print_level", 8)
    @variable(model, 0 <= z <= 1)
    register(model, :dist, 3, dist; autodiff = true)
    @NLobjective(model, Max, (dist(z, w, 3)+dist(z, x, 3)+dist(z, y, 3))/3)
    optimize!(model)
    return objective_value(model)
end

# Central finite-difference gradient of find_max with respect to (w, x, y); supplied by hand
# because automatic differentiation cannot see through the inner optimize! call.
function ∇find_max(g::AbstractVector{T}, w, x, y) where {T<:Real}
    h = 2.0^-20
    g[1] = (find_max(w+h,x,y)-find_max(w-h,x,y))/(2*h)
    g[2] = (find_max(w,x+h,y)-find_max(w,x-h,y))/(2*h)
    g[3] = (find_max(w,x,y+h)-find_max(w,x,y-h))/(2*h)
    return g
end

function magic_number(n)
    toReturn = 0.0
    if n%2 == 0
        for k = 0:n-1
            toReturn += sqrt(1.5+0.5*cos(2*π/n)-cos(2*k*π/n)-cos(2*(k-1)*π/n))/(2*n)
        end
    elseif n%2 == 1
        for k = 0:n-1
            toReturn += sqrt((1.5+0.5*cos(2*π/n)-cos(2*k*π/n)-cos(2*(k-1)*π/n))/(2-2*cos((n-1)*π/n)))/n
        end
    else
        return 0.0
    end
    return toReturn
end

model = Model(Ipopt.Optimizer)
register(model, :find_max, 3, find_max, ∇find_max)
set_optimizer_attribute(model, "tol", 10^-8)
set_optimizer_attribute(model, "max_wall_time", 60.0)
@variable(model, 0 <= w <= 1)
@variable(model, 0 <= x)
@variable(model, 0 <= y)
@NLconstraint(model, x <= w)
@NLconstraint(model, y <= x)
@NLconstraint(model, y <= w)
@NLobjective(model, Min, find_max(w, x, y))
optimize!(model)
println(" ")
println("model = ", model)
@show termination_status(model)
final_w = JuMP.value.(w)
final_x = JuMP.value.(x)
final_y = JuMP.value.(y)
final_output = find_max(final_w, final_x, final_y)

println("final w = ", final_w)
println("final x = ", final_x)
println("final y = ", final_y)
println("final output = ", final_output, ", divided by diameter: ", final_output/find_diameter(3))
println("target output = ", magic_number(3)*find_diameter(3))

I can’t run your code because I don’t know what dist is. Can you provide a reproducible example?

Your gradient computation is likely to run into issues. Because of tolerances, you could end up getting slightly different gradients each time you call it. You really need some way of computing the analytical gradient from the solution of find_max, but how exactly would depend on what dist is.
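For example (I can’t test this without dist, and solve_inner below is just a placeholder for your inner Ipopt solve), once you have the inner maximizer z*, you can hold it fixed and differentiate the objective with respect to the parameters only, rather than finite-differencing the whole solve:

using ForwardDiff

# Plain-Julia version of the expression inside your @NLobjective, whatever dist turns out to be.
inner_objective(z, w, x, y) = (dist(z, w, 3) + dist(z, x, 3) + dist(z, y, 3)) / 3

function find_max_and_gradient(w, x, y)
    z_star = solve_inner(w, x, y)  # placeholder for your Ipopt solve; returns value(z)
    # Envelope-theorem-style gradient: differentiate w.r.t. (w, x, y) with z_star held fixed.
    g = ForwardDiff.gradient(p -> inner_objective(z_star, p[1], p[2], p[3]), [w, x, y])
    return inner_objective(z_star, w, x, y), g
end

This is just a sketch of the idea; it can misbehave at points where the inner maximizer jumps between local optima.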

Sorry about that! I thought I’d included everything (I’m storing the functions in a separate file since they don’t need to change or take up space as I run things).

function dist(a,b,n)
    return ((param(a,n)[1]-param(b,n)[1])^2+(param(a,n)[2]-param(b,n)[2])^2)^0.5
end

I had to supply the gradient in order to have Ipopt optimize the results of another optimization, as seen in find_max; automatic differentiation did not work with that at all. For my purposes, a small amount of error is okay: this is for a mathematics project, and these computations are only to help me find candidate values that I will later prove on paper.
In fact, the comparison with the output of magic_number at the bottom is only there because I’m trying to re-confirm already proven results at this stage.

Thanks for your help!

Oops. I missed the reply. Sorry for the delay.

There are a couple of style things I would change; for example, your dist function can be

function dist(a,b,n)
    ap, bp = param(a,n), param(b,n)
    return sqrt((ap[1]-bp[1])^2+(ap[2]-bp[2])^2)
end

But the bigger problem is that you’re asking for a very tight tolerance in the inner find_max. Ipopt likely can’t find an optimal solution to that tolerance; when I run it, it hits the maximum number of iterations. That’s related to the fact that your gradients are finite differences.

Where does the ramp approximation come from? It looks like you’re approximating some function with breakpoints? Is that the true function you’re optimizing over? Stepping back a bit, what is the underlying problem you’re trying to solve?

I’m trying to find rendezvous numbers/points for 2-d metric spaces. I’m not sure how accessible the small amount of literature out there is, so to sum it up:
For any compact, connected metric space (X, d), there exists a unique number a(X, d) such that for any finite collection of points P, there exists another point y whose average distance to the points of P is equal to a(X, d).
As a result, if we examine the average distance to P as a function F from X to the positive reals, we can find at least one collection P that maximizes the minimum of the range of F, and another collection Q that minimizes the maximum of the range of F. That is what I’m trying to do: by optimizing this function F, I find the collections P and Q.
As for the dist function, I wrote it that way in order for JuMP to accept it. That’s also the reason for the approximation of the step function.
As of right now, I’m having success with Optim.jl; now I need to tweak parameters and choose the best solvers. I will likely rewrite the parameterization to be exact again, instead of the ramp-function approximation, since that seems unnecessary with Optim!
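For anyone reading later, the kind of thing I mean for the inner step is roughly the following (a simplified sketch, not my exact code):

using Optim

function find_max_optim(w, x, y)
    # Optim minimizes, so negate the average distance and flip the sign back at the end.
    inner(z) = -(dist(z, w, 3) + dist(z, x, 3) + dist(z, y, 3)) / 3
    res = Optim.optimize(inner, 0.0, 1.0)  # univariate Brent's method on [0, 1]
    return -Optim.minimum(res)
end

Brent’s method is still a local search, so the caveat about multiple local maxima still applies.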

Thanks for your help!