(sorry for the long post)
For the past 6 months or so, on and off, I’ve been trying to estimate a finite horizon dynamic discrete choice model. If you are not familiar with it, suffice it to say that at every step of the optimization, evaluating my likelihood function requires solving a large, complicated, highly nonlinear model that involves Monte Carlo integration.
Now, I’ve been using the
Optim package unsuccessfully. I’ve played around with LBFGS, Newton, automatic differentiation and finite differences, tweaking LineSearches and whatnot, and the estimation never converges. Of course, it could be a bug in my likelihood function, but I am 100% certain that the part that solves the model is correct, and there are two implementations of the same estimation procedure, one in Fortran and one in Python, that I’ve cross-checked line by line, so the likelihood computation seems to be correct.
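For reference, this is roughly how I’ve been calling Optim (with a trivial stand-in for the objective — `negloglik` here is just a placeholder, not my actual likelihood, which is far too long to post):

```julia
using Optim

# Hypothetical stand-in for the negative log-likelihood; the real one
# solves the dynamic model with Monte Carlo integration at every call.
negloglik(θ) = sum((θ .- 1.0) .^ 2)

θ0 = zeros(5)

# autodiff = :forward makes Optim use ForwardDiff for gradients
# instead of finite differences.
result = optimize(negloglik, θ0, LBFGS(); autodiff = :forward)
θ̂ = Optim.minimizer(result)
```

With a toy objective like this it converges fine, which is why I suspect the issue is the interaction between the optimizer and my real likelihood rather than the calling pattern itself.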
A colleague of mine said that maybe I should try a nonlinear solver like Ipopt, but this type of problem seems infeasible to write out in
JuMP, as it involves thousands of observations and around 30 parameters.
So, I’ve been looking into MathOptInterface, but the documentation is very dry. I am a newbie in programming in general, so the language in the documentation is quite abstract and difficult for me. I can implement the knapsack example, but when I look at the test files for the nonlinear optimization examples, I have no idea what is going on and can’t even run the code.
Is there a self-contained, runnable example of a nonlinear optimization problem that uses a user-defined objective function and automatic differentiation instead of explicit gradients?
If not, what would be the best way to really understand what is going on in MathOptInterface? The documentation does not explain how to use user-defined functions, for example. How can I learn about this?
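To make it concrete, the kind of self-contained example I’m after would look something like the sketch below, which, if I understand correctly, uses JuMP’s user-defined function mechanism (which I believe sits on top of MathOptInterface) with Ipopt. The objective `my_negloglik` is again just a hypothetical placeholder:

```julia
using JuMP, Ipopt

# Hypothetical placeholder objective taking the parameters as a splatted
# argument list; in practice this would solve the model and return the
# negative log-likelihood.
my_negloglik(θ...) = sum((t - 1.0)^2 for t in θ)

n = 30  # roughly the number of parameters in my problem

model = Model(Ipopt.Optimizer)
@variable(model, θ[1:n], start = 0.0)

# Register the user-defined function so JuMP differentiates it
# automatically (via ForwardDiff) instead of needing explicit gradients.
register(model, :my_negloglik, n, my_negloglik; autodiff = true)

@NLobjective(model, Min, my_negloglik(θ...))
optimize!(model)
θ̂ = value.(θ)
```

I’m not sure whether this pattern scales to an objective as expensive as mine, or whether going through raw MathOptInterface instead of JuMP would avoid the overhead — that’s exactly the sort of thing I’d like the documentation to explain.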