Problem setting the Local Optimizer

I am having a problem with the Augmented Lagrangian method. With my syntax the algorithm does not move and gets a forced stop. I am almost sure the problem is with the setting of the local optimizer. I am not very familiar with Julia syntax; let me give you my code:

using NLopt
using InvertedIndices   # provides the Not(...) indexing used in the constraint gradients

function myFunc(x::Vector, grad::Vector, cost::Vector)
    if length(grad) > 0
        grad[1:22500] = cost
        grad[22501:22650] = 1000000*x[22501:22650] ./ (10000*x[22501:22650].^2 .+ 1).^2
    end
    return sum(cost.*x[1:22500]) + 50*sum(10*x[22501:22650].^2 ./ (10*x[22501:22650] .+ 0.001))
end

function inEQ_1(x::Vector, grad::Vector, index_x, index_y)
    if length(grad) > 0
        grad[150*(index_x-1) + index_y] = 1
        grad[22500 + index_y] = -1
        grad[Not(150*(index_x-1) + index_y, 22500 + index_y)] .= 0
    end
    x[150*(index_x-1) + index_y] - x[22500 + index_y]
end

function EQ_1(x::Vector, grad::Vector, index_x)
    if length(grad) > 0
        grad[150*(index_x-1)+1:150*index_x] .= 1
        grad[Not(150*(index_x-1)+1:150*index_x)] .= 0
    end
    sum(x[150*(index_x-1)+1:150*index_x]) - 1
end

function EQ_k(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1:22500] .= 0
        grad[22501:22650] .= 1
    end
    sum(x[22501:22650]) - 3
end

opt = Opt(:LD_AUGLAG_EQ, 22650)
opt.lower_bounds = 0
opt.upper_bounds = 1
opt.xtol_rel = 1e-4

min_objective!(opt, (x, grad) -> myFunc(x,grad,cost))

opt.local_optimizer = Opt(:LD_MMA, 22650)

for i in 1:150
    for j in 1:150
        inequality_constraint!(opt, (x,g) -> inEQ_1(x,g,j,i), 1e-8)
    end
end

for i in 1:150
    equality_constraint!(opt, (x,g) -> EQ_1(x,g,i), 1e-8)
end
equality_constraint!(opt, (x,g) -> EQ_k(x,g), 1e-8)

(minf,minx,ret) = optimize(opt, zeros(22650) .+ 1/150)

This gives me the output:

(Inf, [0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667 … 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667, 0.006666666666666667], :FORCED_STOP)

These values are the starting point. Could you please show me how to code this correctly? I believe only minor changes are required, but I don't know what they are.

What is the cost vector? FORCED_STOP usually means that there was an error somewhere.
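As a side note, FORCED_STOP is often triggered by an exception raised inside one of the callbacks, so it is worth ruling out a bug in a hand-coded gradient. A minimal central-difference check could look like this; the check_gradient helper and the example function are made up for illustration, not part of NLopt:

```julia
# Finite-difference check of a hand-coded gradient (illustrative helper).
# f must follow the NLopt (x, grad) callback convention: fill grad in place,
# return the objective value.
function check_gradient(f, x; h=1e-6, atol=1e-4)
    g = similar(x)
    f(x, g)                       # analytic gradient written into g
    for i in eachindex(x)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        fd = (f(xp, similar(x)) - f(xm, similar(x))) / (2h)
        abs(fd - g[i]) <= atol || return false
    end
    return true
end

# Example: f(x) = sum of squares, whose gradient is 2x.
f(x, grad) = (length(grad) > 0 && (grad .= 2 .* x); sum(abs2, x))
@assert check_gradient(f, [0.3, -1.2, 2.0])
```

Running each of your constraint functions through a check like this on a random feasible point would quickly show whether one of them throws or returns a wrong derivative.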

The cost vector holds the distances from each point in the data set to every other point. It is built like this:

using CSV
using DataFrames
using Distances
using StatsBase

df = DataFrame(CSV.File("iris.csv"))
data = Matrix(df[:,2:5])
dt = fit(ZScoreTransform, data, dims=1)
data = StatsBase.transform(dt,data)

cost = pairwise(Euclidean(), data, dims=1)
cost = vec(cost)   # flatten the 150×150 distance matrix (column-major)
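One detail worth noting: vec stores the 150×150 matrix column-major, so cost[150*(i-1)+j] is the distance D[j, i], which equals D[i, j] only because Euclidean distance is symmetric. A toy check with hypothetical 3-point data (not the iris data):

```julia
# Three made-up 2-D points, one per row (illustrative only).
toy = [0.0 0.0; 3.0 4.0; 6.0 8.0]
n = size(toy, 1)
# Pairwise Euclidean distances, equivalent to pairwise(Euclidean(), toy, dims=1).
D = [sqrt(sum((toy[i, :] .- toy[j, :]) .^ 2)) for i in 1:n, j in 1:n]
c = vec(D)                               # column-major flattening
@assert c[n*(2-1)+1] == D[1, 2] == 5.0   # c[n*(i-1)+j] == D[j, i] == D[i, j]
```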

This is a version of the p-median problem in Operations Research. What about my setting of the secondary algorithm? Is that correct?

The problem is that LD_MMA supports nonlinear inequality constraints, but not equality constraints:
https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#mma-method-of-moving-asymptotes-and-ccsa

You could try :LD_SLSQP instead.
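A minimal sketch of the swap, assuming NLopt.jl is loaded (the tolerance is illustrative):

```julia
using NLopt

opt = Opt(:LD_AUGLAG_EQ, 22650)
local_opt = Opt(:LD_SLSQP, 22650)   # SLSQP handles both equality and inequality constraints
local_opt.xtol_rel = 1e-4           # illustrative stopping tolerance for the subsidiary solver
opt.local_optimizer = local_opt
```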

Alternatively, it might be more readable if you used JuMP:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x[1:22650] <= 1, start = 1 / 150)
@NLobjective(
    model,
    Min,
    sum(cost[i] * x[i] for i in 1:22500) +
    50 * sum(10 * x[i]^2 / (10 * x[i] + 0.001) for i in 22501:22650),
)
@constraint(model, [i=1:150, j=1:150], x[150*(i-1)+j] <= x[22500+j])
@constraint(model, [i=1:150], sum(x[150*(i-1)+j] for j in 1:150) == 1)
@constraint(model, sum(x[i] for i in 22501:22650) == 3)
optimize!(model)
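Once the solve finishes, the last 150 entries of x act as relaxed indicators for the chosen medians. A sketch of recovering them by thresholding, using a made-up solution vector in place of the real one:

```julia
# Hypothetical relaxed indicator values; in the real run these would come from
# value.(x)[22501:22650] after optimize!(model).
y = zeros(150)
y[[7, 42, 99]] .= 0.97               # three near-one entries, matching the sum == 3 constraint
medians = findall(v -> v > 0.5, y)   # threshold the relaxation to pick the medians
@assert medians == [7, 42, 99]
```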

Thank you so much for your help. It worked, and it is fast and gives great results.
