Hi @odow, thanks so much again for your help. After some trial and error I finally got things to work, with numerically near-identical results. A couple of remaining questions. The first relates to @gdalle's question about autodifferentiation: I find that JuMP does not seem to detect the sparsity of the Hessian (the way ADiGator does in MATLAB). The MATLAB output is:
This is Ipopt version 3.11.8, running with linear solver ma57.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 51788
Number of nonzeros in Lagrangian Hessian.............: 783
Total number of variables............................: 784
variables with only lower bounds: 242
variables with lower and upper bounds: 121
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 364
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 122
inequality constraints with only upper bounds: 242
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 -0.0000000e+00 2.10e-01 1.61e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
Reallocating memory for MA57: lfact (245982)
Reallocating memory for MA57: lfact (307722)
Reallocating memory for MA57: lfact (468079)
Reallocating memory for MA57: lfact (575242)
1 -1.9344740e-05 2.10e-01 7.81e+00 -1.0 9.67e+01 -4.0 3.79e-01 2.00e-07f 2
2 -1.8633038e+01 9.08e-01 5.49e+00 -1.7 3.75e+01 - 6.43e-01 4.97e-01f 1
3 -1.7498744e+01 5.32e-01 4.99e+00 -2.5 2.12e+00 - 8.71e-01 5.34e-01h 1
4 -1.4461303e+01 1.73e-01 2.50e+00 -2.5 4.13e+00 - 1.00e+00 7.35e-01h 1
5 -1.1547139e+01 6.69e-02 2.38e+00 -2.5 2.91e+00 - 1.00e+00 1.00e+00h 1
6 -1.1768472e+01 2.06e-02 7.31e-01 -3.8 2.70e-01 - 6.47e-01 8.20e-01f 1
7 -1.1793453e+01 5.08e-04 8.70e-02 -3.8 2.50e-02 - 1.00e+00 1.00e+00h 1
8 -1.1791451e+01 0.00e+00 2.14e-04 -3.8 2.00e-03 - 1.00e+00 1.00e+00h 1
9 -1.1827654e+01 3.04e-07 2.34e-04 -8.6 3.63e-02 - 9.96e-01 9.98e-01f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 -1.1827780e+01 0.00e+00 8.75e-05 -8.6 1.26e-04 - 7.76e-01 1.00e+00h 1
11 -1.1827784e+01 0.00e+00 2.65e-05 -8.6 3.57e-06 - 7.15e-01 1.00e+00h 1
12 -1.1827789e+01 0.00e+00 6.76e-06 -8.6 5.34e-06 - 8.07e-01 1.00e+00f 1
13 -1.1827790e+01 0.00e+00 1.18e-13 -8.6 5.97e-07 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 13
(scaled) (unscaled)
Objective...............: -1.1827789684933572e+01 -1.1827789684933572e+01
Dual infeasibility......: 1.1752412313047493e-13 1.1752412313047493e-13
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.6791510920999567e-09 2.6791510920999567e-09
Overall NLP error.......: 2.6791510920999567e-09 2.6791510920999567e-09
Number of objective function evaluations = 15
Number of objective gradient evaluations = 14
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 15
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 14
Number of Lagrangian Hessian evaluations = 13
Total CPU secs in IPOPT (w/o function evaluations) = 0.271
Total CPU secs in NLP function evaluations = 0.863
EXIT: Optimal Solution Found.
The Julia output is:
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 51788
Number of nonzeros in Lagrangian Hessian.............: 51425
Total number of variables............................: 784
variables with only lower bounds: 242
variables with lower and upper bounds: 121
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 364
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 122
inequality constraints with only upper bounds: 242
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 7.49e-03 1.38e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 9.8355390e+01 8.31e-01 8.65e+01 -1.0 7.90e+03 - 9.90e-01 1.24e-02f 1
2 3.6539782e+01 1.89e-01 5.74e+01 -1.0 8.53e+01 - 9.90e-01 7.24e-01h 1
3 1.4808214e+01 2.37e-02 2.42e+01 -1.7 2.64e+01 - 8.92e-01 8.25e-01h 1
4 7.9511084e+00 1.61e-03 1.30e+01 -1.7 6.86e+00 - 7.53e-01 1.00e+00h 1
5 8.0558347e+00 1.55e-03 1.88e+01 -2.5 3.55e+00 - 8.41e-01 2.95e-02f 1
6 1.0874882e+01 1.95e-02 1.40e+00 -2.5 2.82e+00 - 9.97e-01 1.00e+00f 1
7 1.1565232e+01 8.65e-03 5.65e-01 -3.8 9.02e-01 - 7.44e-01 7.66e-01f 1
8 1.1790340e+01 2.14e-04 2.33e-02 -3.8 2.25e-01 - 1.00e+00 1.00e+00f 1
9 1.1791296e+01 0.00e+00 4.08e-05 -3.8 9.56e-04 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 1.1827237e+01 0.00e+00 1.33e-03 -8.6 3.64e-02 - 9.96e-01 9.87e-01f 1
11 1.1827780e+01 0.00e+00 1.79e-04 -8.6 5.43e-04 - 7.75e-01 1.00e+00h 1
12 1.1827784e+01 0.00e+00 5.21e-05 -8.6 3.50e-06 - 7.80e-01 1.00e+00h 1
13 1.1827790e+01 0.00e+00 3.19e-06 -8.6 5.90e-06 - 9.46e-01 1.00e+00f 1
14 1.1827790e+01 0.00e+00 3.67e-14 -8.6 2.77e-08 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 14
(scaled) (unscaled)
Objective...............: -1.1827789680100521e+01 1.1827789680100521e+01
Dual infeasibility......: 3.6653988763450662e-14 3.6653988763450662e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.5669821264002667e-09 2.5669821264002667e-09
Overall NLP error.......: 2.5669821264002667e-09 2.5669821264002667e-09
Number of objective function evaluations = 15
Number of objective gradient evaluations = 15
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 15
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 15
Number of Lagrangian Hessian evaluations = 14
Total seconds in IPOPT = 0.723
EXIT: Optimal Solution Found.
Here I’ve set up the problem exactly the way it is in MATLAB, with upper and lower bounds (starting values are not set in either MATLAB or Julia in this case):
function build_model(auxdata)
    param = dict_to_namedtuple(auxdata[:param])
    graph = auxdata[:graph]
    kappa_ex = auxdata[:kappa_ex]
    A = auxdata[:A]
    psigma = (param.sigma - 1) / param.sigma
    Hj = param.Hj
    # Model
    model = Model(Ipopt.Optimizer)
    # Variables + bounds
    @variable(model, u)                                   # Overall utility
    # set_start_value(u, 0.0)
    @variable(model, Cjn[1:graph.J, 1:param.N] >= 1e-8)   # Good-specific consumption
    # set_start_value.(Cjn, 1.0e-6)
    @variable(model, Qin[1:graph.ndeg, 1:param.N])        # Good-specific flow
    # set_start_value.(Qin, 0.0)
    @variable(model, 1e-8 <= Lj[1:graph.J] <= 1)          # Total labour
    # set_start_value.(Lj, 1 / graph.J)
    @variable(model, Ljn[1:graph.J, 1:param.N] >= 1e-8)   # Good-specific labour
    # set_start_value.(Ljn, 1 / (graph.J * param.N))
    @objective(model, Max, u)
    # Utility constraint (Lj * u <= ...)
    for j in 1:graph.J
        Cj = sum(Cjn[j, n]^psigma for n in 1:param.N)^(1 / psigma)
        @constraint(model, Lj[j] * u - (Cj / param.alpha)^param.alpha * (Hj[j] / (1 - param.alpha))^(1 - param.alpha) <= -1e-8)
    end
    Yjn = param.Zjn .* Ljn .^ param.a
    # Balanced-flow constraints
    @constraint(
        model,
        [n in 1:param.N, j in 1:graph.J],
        Cjn[j, n] + sum(A[j, i] * Qin[i, n] for i in 1:graph.ndeg) -
        Yjn[j, n] + sum(
            max(ifelse(Qin[i, n] > 0, A[j, i], -A[j, i]), 0) *
            abs(Qin[i, n])^(1 + param.beta) / kappa_ex[i]
            for i in 1:graph.ndeg
        ) <= -1e-8
    )
    # Labor resource constraint
    @constraint(model, -1e-8 <= sum(Lj) - 1 <= 1e-8)
    # Local labor availability constraints (sum_n Ljn <= Lj)
    @constraint(model, -1e-8 .<= sum(Ljn, dims = 2) .- Lj .<= 1e-8)
    return model
end
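(For reference, the utility constraint in the loop above encodes, for each location $j$ and up to the 1e-8 slack,

$$L_j \, u \le \left(\frac{C_j}{\alpha}\right)^{\alpha} \left(\frac{H_j}{1-\alpha}\right)^{1-\alpha}, \qquad C_j = \Big( \sum_{n=1}^{N} C_{jn}^{(\sigma-1)/\sigma} \Big)^{\sigma/(\sigma-1)},$$

i.e. a Cobb-Douglas aggregate of a CES consumption bundle and housing.)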
I have experimented with using equality constraints or relaxing the bounds, but then the solution takes much longer and the results are not the same, so I guess the authors of the model knew what they were doing when specifying bounds on everything. But yeah, for larger problems a form of autodifferentiation that detects sparsity in the Hessian would be great.
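In case it helps to pin down where the difference comes from, here is a minimal sketch (on a toy model with made-up names, using the legacy @NL interface) of how one can inspect the Hessian sparsity structure that JuMP's AD hands to the solver:

using JuMP
import MathOptInterface as MOI

# Toy model: a separable objective should yield a diagonal Hessian.
toy = Model()
@variable(toy, x[1:3])
@NLobjective(toy, Min, sum(x[i]^2 for i in 1:3))

# Build the evaluator JuMP passes to the solver and query its
# Hessian-of-the-Lagrangian structure.
evaluator = NLPEvaluator(toy)
MOI.initialize(evaluator, [:Hess])
MOI.hessian_lagrangian_structure(evaluator)
# Vector of (row, col) index pairs; for this toy model just the three
# diagonal entries, matching the "Number of nonzeros" line Ipopt prints.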
I was also not sure whether this is the recommended way to do so, but I am currently accessing the results using

results = Dict(k => value.(v) for (k, v) in model.obj_dict)
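In full, this is what I do (a sketch; object_dictionary(model) is, as far as I can tell, the documented accessor behind model.obj_dict, and I check the termination status first):

optimize!(model)
if termination_status(model) == LOCALLY_SOLVED
    # object_dictionary(model) returns the same Dict as model.obj_dict
    results = Dict(k => value.(v) for (k, v) in object_dictionary(model))
end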
Also: the library actually iterates over these solves for different values of auxdata (the task is finding the economically optimal transport network, so each iteration changes the weights attached to the graph edges: kappa_ex represents the cost of moving goods along each edge, which depends on the road network; this is one reason I am very interested in computationally efficient derivative evaluation). I thus also wondered whether there is a more efficient way of setting this up than calling build_model(auxdata) in each iteration, i.e., building the model once and just changing its parameters in each iteration? Something like the sketch below is what I have in mind. Many thanks again for all your help!
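Concretely, treating kappa_ex as fixed variables so that only their values change between solves (illustrative names, not tested):

# Build once; fixed variables act like parameters (no degrees of freedom).
function build_model_once(auxdata)
    graph = auxdata[:graph]
    model = Model(Ipopt.Optimizer)
    @variable(model, kappa_ex[i = 1:graph.ndeg] == auxdata[:kappa_ex][i])
    # ... all other variables, the objective, and the constraints exactly as
    # in build_model above, using kappa_ex[i] where the data vector was used ...
    return model
end

# Each iteration: update the edge costs in place and re-solve.
function resolve!(model, new_kappa_ex)
    fix.(model[:kappa_ex], new_kappa_ex)
    optimize!(model)
    return model
end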