AmplNLWriter no longer works with JuMP

I was using AmplNLWriter with Ipopt_jll, but it stopped working when I installed MadNLP. I have removed all my packages and reinstalled them (excluding MadNLP), but I still cannot get AmplNLWriter to work. The error message suggests that the solver has not found a solution, but that does not seem to be the real issue, because the solver's log shows no iterations or progress at all. It seems as though the problem is never passed to the solver.

The same code works fine when I use Ipopt with JuMP directly (i.e., without AmplNLWriter), but it is incredibly slow. For a moderately sized problem, JuMP cannot even generate the model when left for a few hours, i.e., it never progresses to the solving stage. When the solver is used via AmplNLWriter, it is at least able to solve the problem. The issue therefore seems to be with AmplNLWriter rather than with the solver (I have tried quite a few of them).

Could anyone please help with this? An MWE and the error message are given below.


Result index of attribute MathOptInterface.VariablePrimal(1) out of bounds. There are currently 0 solution(s) in the model.
in eval at base\boot.jl:360 
in top-level scope at untitled:30
in nlp_model at untitled:19
in value at JuMP\klrjG\src\variables.jl:943 
in var"#value#30" at JuMP\klrjG\src\variables.jl:943
in get at JuMP\klrjG\src\JuMP.jl:1232
in _moi_get_result at JuMP\klrjG\src\JuMP.jl:1199
in get at MathOptInterface\YDdD3\src\Utilities\cachingoptimizer.jl:757
in get at MathOptInterface\YDdD3\src\Bridges\bridge_optimizer.jl:1039
in get at MathOptInterface\YDdD3\src\Utilities\cachingoptimizer.jl:757
in get at AmplNLWriter\7riG9\src\AmplNLWriter.jl:1176 
in check_result_index_bounds at MathOptInterface\YDdD3\src\attributes.jl:139 
using Ipopt, JuMP, AmplNLWriter, Ipopt_jll

function nlp_model(p, c1, c2,b, k, n)
    #model = Model(Ipopt.Optimizer)
    model = Model(() -> AmplNLWriter.Optimizer(Ipopt_jll.amplexe))
    @variable(model, 0 <= x[i = 1:n] <= 1)
    @variable(model, 0<=var1<=1)
    @variable(model, 0<=var2<=1)
    @variable(model, 0<=var3<=1)

    @objective(model,Max,var1-var2+var3)
    @NLconstraint(model,con3,sum(x[i]*p[i] for i in 1:n)-sum(b[j]/(1+var1)^j for j in 1:k) == 0)
    @NLconstraint(model,con5,sum(x[i]*p[i] for i in 1:n)-sum(c1[j,i] *x[i] / (1+var2)^j for i in 1:n, j in 1:k) == 0)
    @NLconstraint(model,con6,sum(x[i]*p[i] for i in 1:n)-sum(c2[j,i] *x[i] / (1+var3)^j for i in 1:n, j in 1:k) == 0)

    @constraint(model, con1, c1*x .>= b)
    JuMP.optimize!(model)
    println("")
    println("variable l = $(value(var1)) , variable 2 = $(value(var2)) , variable 3 = $(value(var3))")

end

function data()
    p = rand(400:700,5,1);   c1= (rand(100:200,4,5))'; c2 = 0.9 .* c1
    b =rand(150:250,5,1);  n= size(c1,2);  k = size(c1,1)
    return p, c1, c2, b, n, k
end

p, c1, c2, b, n, k = data()
nlp_model(p, c1, c2,b, k, n)

Package versions

  [7c4d4715] AmplNLWriter v0.7.2
  [c52e3926] Atom v0.12.35
  [336ed68f] CSV v0.9.10
  [9961bab8] Cbc v0.8.1
  [e2554f3b] Clp v0.8.4
  [a93c6f00] DataFrames v1.2.2
  [b6b21f68] Ipopt v0.7.0
  [4076af6c] JuMP v0.21.10
  [e5e0dc1b] Juno v0.8.4
  [29cba6d7] Bonmin_jll v100.800.800+0
  [9cc047cb] Ipopt_jll v3.13.4+2
  [ade2ca70] Dates
  [37e2e46d] LinearAlgebra

Checking raw_status(model) suggests that AmplNLWriter is unable to call the Ipopt_jll solver.

Error calling the solver. Failed with: MethodError(AmplNLWriter.call_solver, (AmplNLWriter._DefaultSolverCommand{typeof(Ipopt_jll.amplexe)}(Ipopt_jll.amplexe), "C:\\Users\\AppData\\Local\\Temp\\jl_ReON4y\\model.nl", Any[], Base.TTY(Base.Libc.WindowsRawSocket(0x0000000000000240) open, 0 bytes waiting), Base.TTY(Base.Libc.WindowsRawSocket(0x0000000000000248) open, 0 bytes waiting)), 0x0000000000007484)
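
For reference, guarding the value calls with the standard JuMP status queries would look roughly like the sketch below (inside nlp_model, after optimize!; this is not part of the original code):

    # Sketch: standard JuMP status checks before querying primal values.
    println("termination_status = ", termination_status(model))
    println("raw_status         = ", raw_status(model))
    if has_values(model)
        println("var1 = ", value(var1), ", var2 = ", value(var2), ", var3 = ", value(var3))
    else
        println("No solution available; see raw_status above.")
    end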

I can’t reproduce this on a clean install:

(amplnl) pkg> st
      Status `/private/tmp/amplnl/Project.toml`
  [7c4d4715] AmplNLWriter v0.7.2
  [4076af6c] JuMP v0.21.10
  [9cc047cb] Ipopt_jll v3.13.4+2

julia> using JuMP, AmplNLWriter, Ipopt_jll
[ Info: Precompiling AmplNLWriter [7c4d4715-977e-5154-bfe0-e096adeac482]

julia> function nlp_model(p, c1, c2,b, k, n)
           #model = Model(Ipopt.Optimizer)
           model = Model(() -> AmplNLWriter.Optimizer(Ipopt_jll.amplexe))
           @variable(model, 0 <= x[i = 1:n] <= 1)
           @variable(model, 0<=var1<=1)
           @variable(model, 0<=var2<=1)
           @variable(model, 0<=var3<=1)

           @objective(model,Max,var1-var2+var3)
           @NLconstraint(model,con3,sum(x[i]*p[i] for i in 1:n)-sum(b[j]/(1+var1)^j for j in 1:k) == 0)
           @NLconstraint(model,con5,sum(x[i]*p[i] for i in 1:n)-sum(c1[j,i] *x[i] / (1+var2)^j for i in 1:n, j in 1:k) == 0)
           @NLconstraint(model,con6,sum(x[i]*p[i] for i in 1:n)-sum(c2[j,i] *x[i] / (1+var3)^j for i in 1:n, j in 1:k) == 0)

           @constraint(model, con1, c1*x .>= b)
           JuMP.optimize!(model)
           println("")
           println("variable l = $(value(var1)) , variable 2 = $(value(var2)) , variable 3 = $(value(var3))")

       end
nlp_model (generic function with 1 method)

julia> function data()
           p = rand(400:700,5,1);   c1= (rand(100:200,4,5))'; c2 = 0.9 .* c1
           b =rand(150:250,5,1);  n= size(c1,2);  k = size(c1,1)
           return p, c1, c2, b, n, k
       end
data (generic function with 1 method)

julia> p, c1, c2, b, n, k = data()
([496; 611; … ; 556; 538], [148 132 147 168; 143 103 151 131; … ; 187 107 165 168; 136 116 113 115], [133.20000000000002 118.8 132.3 151.20000000000002; 128.70000000000002 92.7 135.9 117.9; … ; 168.3 96.3 148.5 151.20000000000002; 122.4 104.4 101.7 103.5], [214; 236; … ; 245; 230], 4, 5)

julia> nlp_model(p, c1, c2,b, k, n)
Ipopt 3.13.4: 

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:       15
Number of nonzeros in inequality constraint Jacobian.:       20
Number of nonzeros in Lagrangian Hessian.............:       11

Total number of variables............................:        7
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        7
                     variables with only upper bounds:        0
Total number of equality constraints.................:        3
Total number of inequality constraints...............:        5
        inequality constraints with only lower bounds:        5
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -9.9999900e-03 1.03e+03 1.02e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.9579652e-02 1.01e+03 3.78e+00  -1.0 3.68e+01    -  7.24e-03 2.26e-02h  1
   2  2.4164720e-02 9.78e+02 2.83e+00  -1.0 2.61e+01    -  1.48e-02 3.26e-02h  1
   3  2.4885037e-02 9.37e+02 2.06e+00  -1.0 6.60e+01    -  3.19e-02 4.28e-02h  1
   4  2.0337198e-02 6.46e+02 2.13e+01  -1.0 6.64e+01    -  6.08e-02 3.10e-01f  1
   5  3.0496669e-02 4.64e+02 1.68e+01  -1.0 4.90e+01    -  2.80e-01 2.85e-01h  1
   6  2.1298312e-02 6.86e+01 5.29e+01  -1.0 3.09e+01    -  3.76e-01 1.00e+00f  1
   7  1.4099605e-02 2.21e+00 6.81e+00  -1.0 2.21e+00    -  9.51e-01 1.00e+00h  1
   8  1.0793746e-02 6.61e-01 2.75e+00  -1.0 1.87e+01    -  7.32e-01 1.00e+00f  1
   9  2.3359836e-02 1.03e+00 1.14e-01  -1.0 9.52e+00    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  2.1091011e-02 3.66e-02 3.98e-01  -2.5 9.58e-01    -  9.11e-01 1.00e+00h  1
  11  2.3162260e-03 8.23e-01 1.80e-01  -2.5 1.23e+01    -  9.61e-01 7.05e-01H  1
  12 -2.6152134e-03 1.72e-01 2.93e-03  -2.5 2.94e+00    -  1.00e+00 1.00e+00h  1
  13 -2.2378732e-03 8.36e-04 1.71e-05  -2.5 8.70e-01    -  1.00e+00 1.00e+00h  1
  14 -9.3178524e-03 3.22e-01 1.34e-03  -3.8 2.97e+00    -  8.88e-01 9.31e-01h  1
  15 -1.1340947e-02 3.13e-02 7.85e-05  -3.8 2.76e+00    -  1.00e+00 1.00e+00h  1
  16 -1.1274751e-02 2.70e-05 4.40e-08  -3.8 1.32e-01    -  1.00e+00 1.00e+00h  1
  17 -1.1872683e-02 2.31e-03 1.21e-05  -5.7 3.33e-01    -  9.87e-01 9.84e-01h  1
  18 -1.1880387e-02 4.04e-07 1.02e-09  -5.7 3.21e-03    -  1.00e+00 1.00e+00h  1
  19 -1.1887776e-02 3.52e-07 7.52e-10  -8.6 3.77e-03    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  20 -1.1887775e-02 3.10e-13 2.51e-14  -8.6 2.26e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 20

                                   (scaled)                 (unscaled)
Objective...............:  -1.1887775027987380e-02   -1.1887775027987380e-02
Dual infeasibility......:   2.5059035596809423e-14    1.8425697038399239e-14
Constraint violation....:   1.5906112343410032e-13    3.1048731294336385e-13
Complementarity.........:   2.5059057582680507e-09    2.5059057582680507e-09
Overall NLP error.......:   2.5059057582680507e-09    2.5059057582680507e-09


Number of objective function evaluations             = 22
Number of objective gradient evaluations             = 21
Number of equality constraint evaluations            = 22
Number of inequality constraint evaluations          = 22
Number of equality constraint Jacobian evaluations   = 21
Number of inequality constraint Jacobian evaluations = 21
Number of Lagrangian Hessian evaluations             = 20
Total CPU secs in IPOPT (w/o function evaluations)   =      0.054
Total CPU secs in NLP function evaluations           =      0.002

EXIT: Optimal Solution Found.

variable l = 0.05424865812663058 , variable 2 = 0.13581364085638178 , variable 3 = 0.09345275775773859

What is the output of

] st -m

And what’s a moderately sized problem? Ipopt.jl shouldn’t take hours if the same problem can be solved with AmplNLWriter.

Thanks for looking into it. This is a strange one, as I have tried using environments as well and it still does not work. If I restart Julia, the MWE above works, but when I run it on the real data (on which it was working until yesterday) I get the error message above. If I then re-run the MWE, it gives the same error too.

] st -m gives:

  [1520ce14] AbstractTrees v0.3.4
  [7c4d4715] AmplNLWriter v0.7.2 `https://github.com/jump-dev/AmplNLWriter.jl#master`
  [bf4720bc] AssetRegistry v0.1.0
  [c52e3926] Atom v0.12.35
  [6e4b80f9] BenchmarkTools v1.2.0
  [b99e7846] BinaryProvider v0.5.10
  [fa961155] CEnum v0.4.1
  [00ebfdb7] CSTParser v2.5.0
  [336ed68f] CSV v0.9.10
  [49dc2e85] Calculus v0.5.1
  [9961bab8] Cbc v0.8.1
  [d360d2e6] ChainRulesCore v1.11.0
  [e2554f3b] Clp v0.8.4
  [53a63b46] CodeTools v0.7.1
  [da1fd8a2] CodeTracking v1.0.6
  [523fee87] CodecBzip2 v0.7.2
  [944b1d66] CodecZlib v0.7.0
  [3da002f7] ColorTypes v0.11.0
  [5ae59095] Colors v0.12.8
  [a80b9123] CommonMark v0.8.3
  [bbf7d656] CommonSubexpressions v0.3.0
  [34da2185] Compat v3.40.0
  [a8cc5b0e] Crayons v4.0.4
  [9a962f9c] DataAPI v1.9.0
  [a93c6f00] DataFrames v1.2.2
  [864edb3b] DataStructures v0.18.10
  [e2d170a0] DataValueInterfaces v1.0.0
  [44e31299] DayCounts v0.1.0
  [163ba53b] DiffResults v1.0.3
  [b552c78f] DiffRules v1.3.1
  [b4f34e82] Distances v0.10.6
  [33d173f1] DocSeeker v0.4.3
  [ffbed154] DocStringExtensions v0.8.6
  [e30172f5] Documenter v0.26.3
  [5789e2e9] FileIO v1.11.2
  [48062228] FilePathsBase v0.9.13
  [53c48c17] FixedPointNumbers v0.8.4
  [08572546] FlameGraphs v0.2.6
  [59287772] Formatting v0.4.2
  [f6369f11] ForwardDiff v0.10.21
  [de31a74c] FunctionalCollections v0.5.0
  [fb4132e2] FuzzyCompletions v0.4.3
  [cd3eb016] HTTP v0.9.16
  [9fb69e20] Hiccup v0.2.2
  [b5f81e59] IOCapture v0.1.1
  [9b13fd28] IndirectArrays v0.5.1
  [83e8ac13] IniFile v0.5.0
  [842dd82b] InlineStrings v1.0.1
  [3587e190] InverseFunctions v0.1.1
  [41ab1584] InvertedIndices v1.1.0
  [92d709cd] IrrationalConstants v0.1.1
  [82899510] IteratorInterfaceExtensions v1.0.0
  [692b3bcd] JLLWrappers v1.3.0
  [682c06a0] JSON v0.21.2
  [7d188eb4] JSONSchema v0.3.4
  [4076af6c] JuMP v0.21.10
  [98e50ef6] JuliaFormatter v0.13.7
  [aa1ae85d] JuliaInterpreter v0.8.21
  [e5e0dc1b] Juno v0.8.4
  [7c4cb9fa] LNR v0.2.1
  [50d2b5c4] Lazy v0.15.1
  [1d6d02ad] LeftChildRightSiblingTrees v0.1.2
  [2ab3a3ac] LogExpFunctions v0.3.4
  [1914dd2f] MacroTools v0.5.9
  [b8f27783] MathOptInterface v0.9.22
  [739be429] MbedTLS v1.0.3
  [e89f7d12] Media v0.5.0
  [e1d29d7a] Missings v1.0.2
  [d8a4904e] MutableArithmetics v0.2.22
  [77ba4419] NaNMath v0.3.5
  [510215fc] Observables v0.4.0
  [bac558e1] OrderedCollections v1.4.1
  [69de0a69] Parsers v2.1.1
  [fa939f87] Pidfile v1.2.0
  [2dfb63ee] PooledArrays v1.3.0
  [21216c6a] Preferences v1.2.2
  [08abe8d2] PrettyTables v1.2.3
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.1.3
  [91c51154] SentinelArrays v1.3.8
  [a2af1166] SortingAlgorithms v1.0.1
  [276daf66] SpecialFunctions v1.8.1
  [90137ffa] StaticArrays v1.2.13
  [82ae8749] StatsAPI v1.0.0
  [88034a9c] StringDistances v0.10.0
  [3783bdb8] TableTraits v1.0.1
  [bd369af6] Tables v1.6.0
  [0796e94c] Tokenize v0.5.21
  [3bb67fe8] TranscodingStreams v0.9.6
  [a2a6695c] TreeViews v0.3.0
  [30578b45] URIParser v0.4.1
  [5c2747f8] URIs v1.3.0
  [ea10d353] WeakRefStrings v1.4.1
  [0f1e0344] WebIO v0.8.16
  [104b5d7c] WebSockets v1.5.9
  [cc8bc4a8] Widgets v0.6.4
  [ae81ac8f] ASL_jll v0.1.2+0
  [29cba6d7] Bonmin_jll v100.800.800+0
  [6e34b625] Bzip2_jll v1.0.8+0
  [38041ee0] Cbc_jll v200.1000.500+1
  [3830e938] Cgl_jll v0.6000.200+0
  [06985876] Clp_jll v100.1700.600+0
  [be027038] CoinUtils_jll v2.11.4+2
  [f09e9e23] Couenne_jll v0.5.8+0
  [9cc047cb] Ipopt_jll v3.13.4+2
  [d00139f3] METIS_jll v5.1.0+5
  [d7ed1dd3] MUMPS_seq_jll v5.2.1+4
  [656ef2d0] OpenBLAS32_jll v0.3.12+1
  [efe28fd5] OpenSpecFun_jll v0.5.5+0
  [7da25872] Osi_jll v0.108.6+2
  [0dad84c5] ArgTools
  [56f22d72] Artifacts
  [2a0f44e3] Base64
  [ade2ca70] Dates
  [8bb1440f] DelimitedFiles
  [8ba89e20] Distributed
  [f43a241f] Downloads
  [7b1f6079] FileWatching
  [9fa8497b] Future
  [b77e0a4c] InteractiveUtils
  [b27032c2] LibCURL
  [76f85450] LibGit2
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [a63ad114] Mmap
  [ca575930] NetworkOptions
  [44cfe95a] Pkg
  [de0858da] Printf
  [9abbd945] Profile
  [3fa0cd96] REPL
  [9a3f8284] Random
  [ea8e919c] SHA
  [9e88b42a] Serialization
  [1a1011a3] SharedArrays
  [6462fe0b] Sockets
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [fa267f1f] TOML
  [a4e569a6] Tar
  [8dfed614] Test
  [cf7118a7] UUIDs
  [4ec0a83e] Unicode
  [e66e0078] CompilerSupportLibraries_jll
  [deac9b47] LibCURL_jll
  [29816b5a] LibSSH2_jll
  [c8ffd9c3] MbedTLS_jll
  [14a3606d] MozillaCACerts_jll
  [05823500] OpenLibm_jll
  [83775a58] Zlib_jll
  [8e850ede] nghttp2_jll
  [3f19e933] p7zip_jll

Around 8000 variables, with constraints of the type shown in the example. Changing the data() function to the code below should give an idea of the size of the problem. Note, however, that the matrix c1 is sparse in the real data but dense in the MWE below, so this example will take even longer to run (see the sparse sketch after the data() function below).

It is not just Ipopt that takes long; I tried KNITRO as well, and it never reached the stage of actually starting the optimisation. When used via AmplNLWriter, it worked.

Number of nonzeros in equality constraint Jacobian...:    24003
Number of nonzeros in inequality constraint Jacobian.:  9600000
Number of nonzeros in Lagrangian Hessian.............:    16003

Total number of variables............................:     8003
                     variables with only lower bounds:        0
                variables with lower and upper bounds:     8003
                     variables with only upper bounds:        0
Total number of equality constraints.................:        3
Total number of inequality constraints...............:     1200
        inequality constraints with only lower bounds:     1200
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

function data()
    p = rand(400:700,8000,1);   c1= (rand(100:200,8000,1200))'; c2 = 0.9 .* c1
    b =rand(150:250,1200,1);  n= size(c1,2);  k = size(c1,1)
    return p, c1, c2, b, n, k
end
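
A sparse variant of that data() function might look like the sketch below (the 1% density and the nonzeros trick are illustrative assumptions, not taken from the real data):

using SparseArrays

# Sketch of a sparse version of data(): c1 has ~1% structural nonzeros,
# with nonzero values drawn from the same 100:200 range as the dense MWE.
function sparse_data(n = 8000, k = 1200; density = 0.01)
    p  = rand(400:700, n, 1)
    c1 = sprand(k, n, density)                 # k-by-n sparse pattern
    nonzeros(c1) .= rand(100:200, nnz(c1))     # overwrite the stored values
    c2 = 0.9 .* c1                             # stays sparse
    b  = rand(150:250, k, 1)
    return p, c1, c2, b, n, k
end

Whether JuMP handles the sparse c1 * x product in @constraint efficiently is a separate question; the @NLconstraint sums could also be written over only the structural nonzeros of c1, since the zero terms contribute nothing.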

Are you setting any options?

I think I’ve found the culprit: https://github.com/jump-dev/AmplNLWriter.jl/issues/147

Can you try:

model = Model(() -> AmplNLWriter.Optimizer(Ipopt_jll.amplexe, ["print_level=5"]))

I’ll take a look at the JuMP issue. That seems like something we should fix. (Although there are 10^7 nonzeros in the Jacobian?!)

Ah. I see you’ve added the #master branch. Try https://github.com/jump-dev/AmplNLWriter.jl/pull/148

] add AmplNLWriter#odow-patch-1
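
(If you prefer the Pkg API over the REPL shortcut, the equivalent should be roughly:)

using Pkg
# Same effect as `] add AmplNLWriter#odow-patch-1`, callable from a script.
Pkg.add(PackageSpec(name = "AmplNLWriter", rev = "odow-patch-1"))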

I’ve also taken an initial look at the performance problem and nothing obvious sticks out.

I’ve opened an issue for once I get time to work on nonlinear stuff: Slow evaluation of nonlinear callbacks · Issue #2788 · jump-dev/JuMP.jl · GitHub

Thank you. No, I was not using any options.
Using model = Model(() -> AmplNLWriter.Optimizer(Ipopt_jll.amplexe, ["print_level=5"])) sorted the issue.

That log was from the MWE. For the real problem, part of the log is shown below. As you can see, Ipopt itself takes less than a minute, but the total time measured with @time is 328 seconds. Most of that time is spent before the log is displayed, so I am assuming it goes into automatic differentiation; reading the data and building the model together took less than 20 seconds when I timed them.

Retrieving results is also slow for the large problem: it takes roughly 30 seconds to retrieve the results after the log shows “EXIT: Optimal Solution Found”.
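
One thing that might be worth trying for the slow retrieval (a sketch, not from the thread): query the primal values in one broadcast call rather than variable by variable, e.g.

x_val   = value.(x)                      # all x[1:n] in a single call
all_val = value.(all_variables(model))   # every variable in the model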

The JuMP issue is not unique to Ipopt; for every nonlinear solver I tried, AmplNLWriter was faster. For a problem of the size shown below, Ipopt was not able to make any progress when called directly through JuMP.

Number of nonzeros in equality constraint Jacobian...:    32431
Number of nonzeros in inequality constraint Jacobian.:  1056594
Number of nonzeros in Lagrangian Hessian.............:    16214

Total number of variables............................:     8113
                     variables with only lower bounds:        0
                variables with lower and upper bounds:     8112
                     variables with only upper bounds:        1
Total number of equality constraints.................:        5
Total number of inequality constraints...............:      135
        inequality constraints with only lower bounds:      135
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0


Number of objective function evaluations             = 52
Number of objective gradient evaluations             = 52
Number of equality constraint evaluations            = 52
Number of inequality constraint evaluations          = 52
Number of equality constraint Jacobian evaluations   = 52
Number of inequality constraint Jacobian evaluations = 52
Number of Lagrangian Hessian evaluations             = 51
Total CPU secs in IPOPT (w/o function evaluations)   =     18.442
Total CPU secs in NLP function evaluations           =     39.131

328.187966 seconds (487.29 M allocations: 36.987 GiB, 24.39% gc time, 0.41% compilation time)

Can you post the real example in more detail? It’d be nice to know why this is happening.

It is hard to generate random data that is representative of the actual problem: it would need a similar number of variables and constraints, as smaller data will not help diagnose the issue. I will try, but it may take me a few days before I can post it. The code is pretty much the same as in the original post.

Even if you can’t share the data, it’d be good to see the code that you are running. (You can send me a private message if you don’t want to post publicly.)

Which part were you timing? What if you run it twice? Are there global variables?

I will send you the code in a private message. I need to remove a few things from it before I can share it.

I timed (i) how long it takes to read the data; (ii) the time to generate the model (before calling optimize!); and (iii) the time to solve the model. I just used print statements to report the time at each of these steps, which makes it run even slower. The 328 seconds is the total end-to-end time (without these extra print statements). Re-running the code didn't make much difference.
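
For reference, the same measurements with @elapsed instead of print statements would look roughly like this, using the data() and nlp_model() functions from the MWE (the real code differs):

# Sketch: the MWE's nlp_model builds and solves in one call, so the
# build and solve stages are lumped together here.
t_read = @elapsed begin
    p, c1, c2, b, n, k = data()
end
t_run = @elapsed nlp_model(p, c1, c2, b, k, n)
println("read data: $(round(t_read; digits = 2)) s, build + solve: $(round(t_run; digits = 2)) s")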

I have run the code on a smaller problem, and the run time on the second run was about 3-4 seconds (including the time to read the data).

No, there are no global variables.