I’m using IPNewton() from the Optim package for an inequality-constrained optimization problem. Strangely, I keep getting different results each time I run the same block of code (without changing anything). For example, the first time I ran the optimization, the result was this:
```
 * Status: success

 * Candidate solution
    Final objective value:     1.000671e-02

 * Found with
    Algorithm:     Interior Point Newton

 * Convergence measures
    |x - x'|               = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'|          = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 6.73e-01 ≰ 1.0e-08

 * Work counters
    Seconds run:   63  (vs limit Inf)
    Iterations:    132
    f(x) calls:    243
    ∇f(x) calls:   243
```
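(While I work on an MWE, see the update at the bottom, here is roughly the shape of my call. The objective, constraint, and starting point below are placeholders, not my actual problem; the structure follows the generic inequality-constrained IPNewton pattern from the Optim.jl docs.)

```julia
using Optim

# Placeholder objective with analytic gradient and Hessian (my real one differs).
f(x)     = exp(x[1]) + x[2]^2
g!(G, x) = (G[1] = exp(x[1]); G[2] = 2.0 * x[2]; G)
h!(H, x) = (H[1, 1] = exp(x[1]); H[1, 2] = 0.0; H[2, 1] = 0.0; H[2, 2] = 2.0; H)

# One placeholder inequality constraint: x₁² + x₂² ≤ 1.
con_c!(c, x)   = (c[1] = x[1]^2 + x[2]^2; c)
con_jac!(J, x) = (J[1, 1] = 2.0 * x[1]; J[1, 2] = 2.0 * x[2]; J)
function con_h!(H, x, λ)        # adds λ₁·∇²c₁ to the Lagrangian Hessian in place
    H[1, 1] += λ[1] * 2.0
    H[2, 2] += λ[1] * 2.0
end

x0  = [0.0, 0.0]
df  = TwiceDifferentiable(f, g!, h!, x0)
dfc = TwiceDifferentiableConstraints(con_c!, con_jac!, con_h!,
                                     Float64[], Float64[],   # no bounds on x
                                     [-Inf], [1.0])          # -Inf ≤ c(x) ≤ 1
res = optimize(df, dfc, x0, IPNewton())
```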
I then copied and pasted my code into a new Jupyter notebook cell and ran it, and got this wildly different answer:
```
 * Status: success

 * Candidate solution
    Final objective value:     1.706405e+00

 * Found with
    Algorithm:     Interior Point Newton

 * Convergence measures
    |x - x'|               = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'|          = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 1.48e+02 ≰ 1.0e-08

 * Work counters
    Seconds run:   147  (vs limit Inf)
    Iterations:    260
    f(x) calls:    457
    ∇f(x) calls:   457
```
The first run's final objective value was more than 100× smaller! I'm very confused, as I expected the two results to be identical. (And to be clear, I did not change the objective function between runs.) As the traces below show, the initial function value is the same in both runs, but the initial Lagrangian value, gradient norm, and μ all differ:
Run 1:

```
Iter     Lagrangian value    Function value    Gradient norm    |==constr.|     μ
   0     4.269013e+05        4.268759e+05      2.175804e+06     0.000000e+00    5.52e-01
```

Run 2:

```
Iter     Lagrangian value    Function value    Gradient norm    |==constr.|     μ
   0     4.440598e+05        4.268759e+05      2.177987e+06     0.000000e+00    3.72e+02
```
I’ve done a few more runs and I continue to get different results each time.
So my questions are:
- Why is this happening? Does the algorithm involve some randomization or stochasticity?
- Is there a way to make it give the same results each time? I would really like my results to be reproducible. (I'd happily pin an RNG seed if that fixes it; see the sketch after this list.)
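For instance, if IPNewton draws from Julia's global RNG somewhere, I would expect something like the sketch below to force identical runs. This is only a guess at a fix, demonstrated on a placeholder box-constrained quadratic, not my actual problem:

```julia
using Optim, Random

# Placeholder problem: a convex quadratic inside a box (not my real objective).
f(x)     = (x[1] - 1.0)^2 + (x[2] + 2.0)^2
g!(G, x) = (G[1] = 2.0 * (x[1] - 1.0); G[2] = 2.0 * (x[2] + 2.0); G)
h!(H, x) = (H[1, 1] = 2.0; H[1, 2] = 0.0; H[2, 1] = 0.0; H[2, 2] = 2.0; H)

x0 = [0.0, 0.0]

# Rebuild everything from scratch each time, mimicking a fresh notebook cell.
function run_once()
    df  = TwiceDifferentiable(f, g!, h!, copy(x0))
    dfc = TwiceDifferentiableConstraints([-5.0, -5.0], [5.0, 5.0])
    optimize(df, dfc, copy(x0), IPNewton())
end

Random.seed!(1234)   # pin the global RNG before the first run
r1 = run_once()
Random.seed!(1234)   # reset to the same seed before the second run
r2 = run_once()

Optim.minimum(r1) == Optim.minimum(r2)   # hoping this prints `true`
```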
Update 1/29: Sorry for not posting runnable code yet. I'm working on an MWE, though this is proving harder than expected. I tried copying and pasting the constrained-optimization-with-IPNewton example from the documentation (roughly the snippet below), but it did not exhibit the reproducibility issue: I got the same result every time I hit Ctrl+Enter. So now I'm trying to pare my code down to a simplified version that still shows the problem.
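From memory, the documentation example I tested was approximately this box-constrained Rosenbrock problem:

```julia
using Optim

# Rosenbrock objective with analytic gradient and Hessian.
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function g!(G, x)
    G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
    G[2] = 200.0 * (x[2] - x[1]^2)
end
function h!(H, x)
    H[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
    H[1, 2] = -400.0 * x[1]
    H[2, 1] = -400.0 * x[1]
    H[2, 2] = 200.0
end

x0  = [0.0, 0.0]
df  = TwiceDifferentiable(f, g!, h!, x0)

# Box constraints: -0.5 ≤ xᵢ ≤ 0.5.
dfc = TwiceDifferentiableConstraints([-0.5, -0.5], [0.5, 0.5])

res = optimize(df, dfc, x0, IPNewton())
```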