Best Nonlinear Optimizer for a Continuous Convex Problem

I need to perform a sensitivity analysis of the maximum with respect to the parameters that define the conic constraint.

Hence, I would like to know how I can use (if I can) the duals that I obtain to do this.

As in, you want the change in optimal objective value with respect to p and k? No, I don’t think you can use the duals to compute this.

You could use finite differences though.
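A minimal sketch of that idea, using a toy stand-in objective rather than the actual optimization (the names `f`, `c0`, and `ei` are illustrative, not from the model above):

```julia
# Central finite differences for the gradient of a scalar function of a
# vector parameter; f is a toy stand-in for the optimal-value function.
f(c) = sum(c .^ 2)                 # exact gradient is 2c
c0 = [1.0, 2.0, 3.0]
h  = 1e-6
ei(i, n) = [j == i ? 1.0 : 0.0 for j in 1:n]   # i-th unit vector
grad_fd = [(f(c0 .+ h .* ei(i, 3)) - f(c0 .- h .* ei(i, 3))) / (2h) for i in 1:3]
# grad_fd ≈ [2.0, 4.0, 6.0]
```

In practice you would replace `f` with a function that re-solves the model at a perturbed parameter vector and returns the objective value, which is what FiniteDifferences.jl automates.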

Hi @odow,

Yes, you are totally right that the MWE is not well suited to study this.

Below, I provide another MWE in which I study the sensitivity of the objective value to a vector of parameters c.

In this MWE, I compare the result of the duals computed by JuMP with the gradient computed by finite differences.

The good thing is that with JuMP I get this much faster :smiley: .

using JuMP, MosekTools, Random, FiniteDifferences

const Ι = 250  # Greek capital letter Iota
const J = 50
const p = 5.0
const k = 0.75

function distmat(J, Ι)
    Distances = zeros(J, Ι)
    Random.seed!(7777)
    coordinates_x = hcat([x for x = 1:J], fill(0.0, J))
    coordinates_y = hcat([x for x = 1:Ι] .* J / Ι, fill(0.0, Ι))
    for j = 1:J, l = 1:Ι
        Distances[j, l] = sqrt((coordinates_x[j, 1] - coordinates_y[l, 1])^2 +
                               (coordinates_x[j, 2] - coordinates_y[l, 2])^2)
    end
    return 1 .+ 0.1 * Distances
end

const t = distmat(J, Ι)
const c = rand(J)

function solution_numerical_Mosek(p, k, t, c)
    opt = Model(Mosek.Optimizer)
    #set_silent(opt)
    @variable(opt, x[1:J, 1:Ι] >= 0)
    @variable(opt, Q[1:J] >= 0)
    @variable(opt, Y[1:Ι] >= 0)
    @variable(opt, u[1:J] >= 0)
    @variable(opt, v[1:Ι] >= 0)
    @objective(opt, Min, -sum(u) + sum(v))
    @constraint(opt, [j = 1:J], sum(x[j, :]) == Q[j])
    @constraint(opt, [i = 1:Ι], sum(t[:, i] .* x[:, i]) == Y[i])
    @constraint(opt, cons[j = 1:J], [Q[j], c[j], u[j]] in MOI.PowerCone((p - 1) / p))
    @constraint(opt, [i = 1:Ι], [v[i], 1.0, Y[i]] in MOI.PowerCone(k))
    JuMP.optimize!(opt)
    return objective_value(opt), value.(x), value.(u), value.(v),
           value.(Q), value.(Y), dual.(cons)
end

sol_Mosek = @time solution_numerical_Mosek(p, k, t, c)

func_diff(c0) = solution_numerical_Mosek(p, k, t, c0)[1]
func_diff(c)
sol_Mosek[1]

G = grad(central_fdm(5, 1), func_diff, c)
sol_Mosek[7]

with results

G[1]
 -0.24245699759274053
 -0.22357152022579846
 -0.20679247906289683
 -0.21760034288078534
 -0.24328110841196968
 -0.30542630754893374
  ⋮
 -0.26832512482815
 -0.2494482578121521
 -0.2684235670955047
 -0.2358964325406763
 -0.29762939455915755

and

sol_Mosek[7]

 [0.762515973362775, 0.24232273421054767, -0.9999999999456418]
 [0.7787216911409459, 0.22277212912850067, -0.9999999999370595]
 [0.7931027088597621, 0.20704855836748834, -0.999999999943988]
 [0.7828773176595666, 0.21807963027278515, -0.9999999999354549]
 [0.7617848184609916, 0.24325439205373628, -0.9999999999592387]
 [0.7194634527392707, 0.3057419236479007, -0.9999999999632796]
 ⋮
 [0.7431459759470492, 0.26859221583709564, -0.999999999962668]
 [0.7574372452326692, 0.24888763271384537, -0.9999999999499486]
 [0.7431459818699275, 0.26859220740933837, -0.9999999999963398]
 [0.7672181987625751, 0.23643641189661424, -0.9999999999282162]
 [0.7245949657845061, 0.29717255349696076, -1.0000000002071532]

so it seems to me that, in each vector, the central element measures the sensitivity of the result to the parameter c.

My question was exactly that: whether the order in which the elements appear in each dual vector corresponds to the order in which we define the power cone.
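For concreteness, here is the check I have in mind, using two of the dual vectors and gradient entries printed above (the names `cone_duals`, `fd_grad`, and `duals_c` are mine; the sign flip presumably comes from the dual sign convention, so I compare up to sign):

```julia
# Two dual vectors of `cons` and the matching finite-difference gradient
# entries, copied from the output above:
cone_duals = [
    [0.762515973362775,  0.24232273421054767, -0.9999999999456418],
    [0.7787216911409459, 0.22277212912850067, -0.9999999999370595],
]
fd_grad = [-0.24245699759274053, -0.22357152022579846]

# The cone was declared as [Q[j], c[j], u[j]], so the middle entry of each
# dual vector lines up with c[j]:
duals_c = [d[2] for d in cone_duals]

# Agreement up to sign and finite-difference/solver tolerance:
maximum(abs.(duals_c .+ fd_grad))   # < 1e-3
```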


Yes.

Thank you for the clarification.
