JuMP nonlinear optimization not converging for camera calibration code

I am using JuMP to compute distortion-correction parameters for camera images. This is a first step toward a more complete camera calibration package.

This is the first time I have used JuMP, and the optimization is not converging as I would expect. When started with the parameters bounded in small intervals that contain the optimal values, it iterates tens of thousands of times without appearing to get any closer to convergence.

To try to understand what was happening, I passed the optimal distortion parameter values as the starting point. This point has an objective function value of approximately 1e-10.

I expected the optimization to iterate a few times and then stop with a value very close to optimal. Instead, on the first iteration JuMP reports this objective value, but on the next iteration it moves to a different point with an objective value of roughly 0.04. After that the optimization never seems to stop iterating, always yielding points with objective values 6 to 7 orders of magnitude larger than the starting value.

Can someone give me insight into why JuMP might do this?

Unfortunately I don’t have a really simple example that exhibits this behavior. If anybody can spot an error in the way I am defining the JuMP model or using the JuMP API, I would really appreciate it.

Here is the definition of the JuMP model (`dataset` is a 2D array of distorted 2D points):

```julia
using JuMP, Ipopt

function jumpdistortionparameters(dataset)
    model = Model(with_optimizer(Ipopt.Optimizer))
    @variable(model, -0.2 <= K1 <= 0.2, start = 1.105763E-01)
    @variable(model, -0.2 <= K2 <= 0.2, start = 1.886214E-02)
    @variable(model, -0.02 <= K3 <= 0.02, start = 1.473832E-02)
    @variable(model, -0.02 <= P1 <= 0.02, start = -8.448460E-03)
    @variable(model, -0.02 <= P2 <= 0.02, start = -7.356744E-03)

    allvariables = all_variables(model)
    register(model, :objective, length(allvariables), objective, autodiff = true)
    @NLobjective(model, Min, objective(K1, K2, K3, P1, P2))

    optimize!(model)
    println("K1 = $(value(K1)) K2 = $(value(K2)) K3 = $(value(K3)) P1 = $(value(P1)) P2 = $(value(P2))")
end
```

and here is the definition of the objective function (`objectivedataset` is a 2D array of distorted 2D points, and `undistort` is a function that transforms distorted points to undistorted points given the distortion parameters):

```julia
function objective(x...) # x contains the distortion parameters
    undistortedpoints = undistort(objectivedataset, x)

    numrows = size(undistortedpoints)[1]
    totalresidual = 0.0

    for i in 1:numrows
        line = undistortedpoints[i, :]
        undistortedestimate, residual = bestfitvector(tuplesto2Darray(line))
        totalresidual += residual
    end

    return totalresidual
end
```

bestfitvector uses a Jacobi SVD routine I wrote to compute how much the distorted points deviate from a straight line. As the distortion parameters approach the correct values, the undistorted points come closer to lying on a line and the residual drops to zero. I have verified that this code can be differentiated by ForwardDiff.
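In case it is useful, the check was along these lines (a minimal sketch; it assumes `objective` and its dataset are already defined, and the starting values match the `start` values in the model above):

```julia
using ForwardDiff

# Differentiability check: evaluate the gradient of the splatted
# objective at the known-good starting parameters.
x0 = [1.105763e-01, 1.886214e-02, 1.473832e-02, -8.448460e-03, -7.356744e-03]
g = ForwardDiff.gradient(x -> objective(x...), x0)
println("gradient at the starting point: ", g)
```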

Here is the Ipopt output:

```
This is Ipopt version 3.12.10, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        5
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        5
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.6493575e-10 0.00e+00 1.46e+00 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 3.7608722e-02 0.00e+00 1.77e+00 -1.3 5.74e-02 - 9.77e-01 1.99e-01f 3
2 2.3663627e-02 0.00e+00 1.22e+00 -1.4 7.34e-03 - 1.00e+00 1.00e+00f 1
3 1.9094371e-02 0.00e+00 2.07e+00 -1.4 5.44e-01 - 4.00e-01 4.21e-02f 4
4 1.5318250e-02 0.00e+00 2.00e+00 -1.4 1.01e-02 - 1.00e+00 1.00e+00f 1
5 9.8206258e-03 0.00e+00 1.21e+00 -1.4 8.41e-03 - 1.00e+00 1.00e+00f 1
6 3.5490506e-03 0.00e+00 6.91e-01 -1.4 9.45e-03 - 1.00e+00 1.00e+00f 1
7 2.8064126e-03 0.00e+00 1.08e+00 -1.4 4.14e-03 - 1.00e+00 1.00e+00f 1
8 4.6360623e-03 0.00e+00 1.61e-01 -1.4 2.71e-03 - 1.00e+00 1.00e+00f 1
9 2.4130667e-03 0.00e+00 1.99e+00 -2.1 5.66e-03 - 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 2.1865776e-03 0.00e+00 8.41e-01 -2.1 4.93e-03 - 1.00e+00 1.00e+00f 1
11 1.6024980e-03 0.00e+00 9.13e-01 -2.1 1.70e-03 - 1.00e+00 1.00e+00f 1
12 1.4295781e-03 0.00e+00 1.48e-01 -2.1 1.01e-03 - 1.00e+00 5.00e-01f 2
13 1.3527779e-03 0.00e+00 1.06e+00 -2.1 8.07e-04 - 1.00e+00 5.00e-01f 2
14 1.1884357e-03 0.00e+00 5.38e-01 -2.1 1.02e-03 - 1.00e+00 1.00e+00f 1
15 1.1872245e-03 0.00e+00 2.11e-01 -2.1 1.36e-03 - 1.00e+00 5.00e-01f 2
16 1.1261261e-03 0.00e+00 1.60e-01 -2.1 6.94e-04 - 1.00e+00 1.00e+00f 1
17 9.1716085e-04 0.00e+00 7.19e-02 -2.1 9.77e-04 - 1.00e+00 1.00e+00f 1
18 6.9617556e-04 0.00e+00 5.54e-01 -3.1 1.52e-03 - 1.00e+00 1.00e+00f 1
```

I can’t see anything explicitly incorrect here, but it’s difficult to troubleshoot further without the ability to run the problem myself.

Is it possible for you to bundle a small example for us?

FWIW, I’ve had a lot of convergence issues when using MUMPS as my linear solver; switching to HSL has been a breath of fresh air (if you have access to their solvers).
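If you do have an HSL license, selecting the solver is a one-line change (a sketch; `ma57` is just one of the HSL options Ipopt accepts, and your Ipopt build must be linked against it):

```julia
# Pass the linear_solver option through the optimizer constructor.
model = Model(with_optimizer(Ipopt.Optimizer, linear_solver = "ma57"))
```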

Ipopt assumes your model is twice continuously differentiable, and on nonconvex problems it only guarantees convergence to a local optimum. Is that the case for your objective?

Also, Ipopt uses absolute tolerances, but it pre-scales the model. There might be some numerical issues associated with that. I would try re-scaling your problem by a few orders of magnitude and see if anything changes.
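For example (a sketch against the model above; the 1e8 factor is purely illustrative), you could register a scaled wrapper around the objective:

```julia
# Hypothetical rescaling: multiply the objective by a constant so its
# value near the optimum is O(1) rather than O(1e-10). Assumes model,
# K1..P2, and objective are in scope as in the function above.
scaledobjective(x...) = 1.0e8 * objective(x...)
register(model, :scaledobjective, 5, scaledobjective, autodiff = true)
@NLobjective(model, Min, scaledobjective(K1, K2, K3, P1, P2))
```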

Since you don’t appear to have any constraints (just variable bounds), you might be better served with a solver designed for bound-constrained problems.
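For instance, Optim.jl’s Fminbox wrapper handles simple variable bounds directly; a minimal sketch, assuming `objective` is defined as above:

```julia
using Optim

# Box-constrained quasi-Newton solve with forward-mode AD; the bounds
# and starting point mirror the JuMP model above.
lower = [-0.2, -0.2, -0.02, -0.02, -0.02]
upper = [ 0.2,  0.2,  0.02,  0.02,  0.02]
x0    = [1.105763e-01, 1.886214e-02, 1.473832e-02, -8.448460e-03, -7.356744e-03]

result = optimize(x -> objective(x...), lower, upper, x0,
                  Fminbox(LBFGS()); autodiff = :forward)
println(Optim.minimizer(result))
```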