Any possible reason an NLP model that solves in GAMS and C++ doesn't in JuMP?



Hello, we have a rather complex nonlinear model (~5,000 variables, ~1,300 constraints) that solves in ~100 iterations in both GAMS (CONOPT or IPOPT) and C++ (IPOPT/ADOL-C/ColPack stack).
We are now converting it to JuMP, but on the same Linux PC (hence with the same IPOPT version as the C++ stack) it doesn't solve, even if I scale it down (e.g. I simulate fewer regions).

Aside from the possibility of some residual typos (still my bet), could there be any reason why a problem would solve in GAMS/C++ but not in JuMP?


I’m not aware of such limitations. Did you get any errors?

You could also try taking the GAMS solution and setting it as the initial point in JuMP. This should help flush out typos…
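As a minimal sketch of what that could look like, assuming a recent JuMP API (`set_start_value`; older versions used `setvalue`) and a hypothetical `gams_solution` dictionary holding the values exported from GAMS:

```julia
using JuMP

# Hypothetical container for the solution exported from GAMS
# (here filled with placeholder values):
gams_solution = Dict((r, tp) => 1.0 for r in 1:3, tp in 1:2)

m = Model()
@variable(m, demand[r in 1:3, tp in 1:2] >= 0)

# Seed every variable with the GAMS value before solving:
for r in 1:3, tp in 1:2
    set_start_value(demand[r, tp], gams_solution[(r, tp)])
end
```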


Have you checked in the solver logs that the generated problem size is identical in GAMS and JuMP? It could be that GAMS is doing some kind of presolve that simplifies the problem, or takes better advantage of sparsity. In JuMP, I believe you have to do these things manually.


Hello, in the end it was my typos.
Still, the model now solves and returns the same results, but it takes many more iterations: ~800 instead of ~100 or less.

One question… when I change parameters and re-solve the model, are the initial starting points of the endogenous variables those from the previous optimisation, or those assigned when the variables were initially declared? And is it possible to assign new starting point(s)?


From the previous optimization.
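A small sketch of how to override that behaviour before re-solving, again assuming the current JuMP names (`set_start_value` / `start_value`):

```julia
using JuMP

m = Model()
@variable(m, x >= 0, start = 2.0)

# After optimize!(m), JuMP re-uses the last solution as the next start point.
# To re-solve from a fresh point instead, assign it explicitly:
set_start_value(x, 0.5)
```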


In particular, I noticed that the GAMS version accepts without problems expressions like this in the objective:

```
myPar[r,tp] * demand[r,tp]^((sigma[tp] + 1) / sigma[tp])
```

while in JuMP (but also in C++) I need to make sure the base of the power is non-negative, even if I bound the relevant variable below at 0 or 1e-7:

```julia
JuMP.register(m, :mymax, 2, max, autodiff=true)
myPar[r,tp] * mymax(0.0, demand[r,tp])^((sigma[tp] + 1) / sigma[tp])
```
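Note that `max` is not differentiable at the kink, which can itself slow IPOPT down. One common workaround (a sketch only, with an assumed smoothing constant of 1e-8) is to register a smooth approximation instead:

```julia
using JuMP

m = Model()
@variable(m, demand >= 1e-7)   # bound the variable slightly away from zero

# Smooth stand-in for max(0, x); 1e-8 is an assumed smoothing constant.
smoothmax(x) = 0.5 * (x + sqrt(x^2 + 1e-8))

register(m, :smoothmax, 1, smoothmax; autodiff = true)
@NLobjective(m, Min, smoothmax(demand)^1.5)
```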

Also, I tried to pass the option `scaling_factor="gradient-based"` to IPOPT, but it doesn't accept it…
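For what it's worth, the scaling option documented in the IPOPT manual is `nlp_scaling_method` (with `gradient-based` as its default value), not `scaling_factor`. A sketch of passing it from JuMP, assuming the Ipopt.jl wrapper is installed:

```julia
using JuMP, Ipopt

m = Model(Ipopt.Optimizer)
# IPOPT's scaling option is "nlp_scaling_method";
# "gradient-based" is already its default value.
set_optimizer_attribute(m, "nlp_scaling_method", "gradient-based")
```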