MAP optimization in Turing with LBFGS doesn't move at all

I have some further info.

By following the procedure in this thread: Recommended way to extract logprob and build gradients in Turing.jl?

I was able to confirm that it does calculate the log density and the gradient at the initial condition, and that the printing I see is probably due to that initial gradient evaluation.
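For reference, here is roughly how I checked that (a minimal sketch with a toy model; the exact API depends on the Turing/DynamicPPL version, and `demo`/`θ0` are just stand-ins for my actual model and starting point):

```julia
using Turing, DynamicPPL, LogDensityProblems, LogDensityProblemsAD, ForwardDiff

# toy model standing in for my actual model
@model function demo(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

model = demo(1.5)

# wrap the model as a log-density problem and attach a ForwardDiff gradient
ℓ  = DynamicPPL.LogDensityFunction(model)
∇ℓ = LogDensityProblemsAD.ADgradient(:ForwardDiff, ℓ)

θ0 = [0.0]  # initial condition in the parameter space being optimized
LogDensityProblems.logdensity(ℓ, θ0)                # log density at θ0
LogDensityProblems.logdensity_and_gradient(∇ℓ, θ0)  # (logp, gradient) at θ0
```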

It still doesn’t work.

However, I'm running an IPNewton() optimization now and it's taking much, much more computing time, so I have some hope that it will suddenly give me a meaningful answer! I will probably be crushed when it returns something meaningless, but I have already managed to debug a lot by using the BlackBoxOptim optimizers, thinking hard about the problem, and trying lots of things… so I'll post back here and let people know whether IPNewton worked.

OK, IPNewton took many minutes and then returned the initial conditions.
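For context, the MAP calls themselves look roughly like this (a sketch assuming Turing's Optim-based mode-estimation interface, where an Optim optimizer is passed directly; newer Turing versions expose this through `maximum_a_posteriori` instead):

```julia
using Turing, Optim

# MAP estimation through Turing's Optim integration; for my model both of
# these effectively return the starting point
map_lbfgs    = optimize(model, MAP(), LBFGS())
map_ipnewton = optimize(model, MAP(), IPNewton())
```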

Any suggestions for alternatives? I thought about trying NLopt algorithms, but every time I tried them with lower and upper bounds it complained that it wouldn't work.

Added info: now I can run NLopt algorithms even with lower and upper bounds. I'm not at all sure why it works now!
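This is roughly how I'm driving the NLopt algorithms against the model's log density (a sketch; `lb`, `ub`, and `θ0` are placeholder bounds and start values, and `ℓ`/`∇ℓ` are the log-density objects from the snippet above):

```julia
using NLopt, LogDensityProblems

dim = LogDensityProblems.dimension(ℓ)

opt = Opt(:LD_LBFGS, dim)  # swap in :LN_NELDERMEAD, :LD_TNEWTON, :LN_COBYLA, ...
opt.lower_bounds = lb      # placeholder bounds vectors
opt.upper_bounds = ub
opt.maxeval      = 10_000

# maximize the log posterior; fill in the gradient only for the LD_* algorithms
opt.max_objective = (θ, grad) -> begin
    if length(grad) > 0
        _, g = LogDensityProblems.logdensity_and_gradient(∇ℓ, θ)
        grad .= g
    end
    LogDensityProblems.logdensity(ℓ, θ)
end

maxf, maxθ, ret = NLopt.optimize(opt, θ0)
@show ret maxf
```

Results so far: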

  1. LD_LBFGS quickly aborts with a failure to converge, similar to LBFGS() from Optim.jl.
  2. LN_NELDERMEAD runs for a while and moves a couple of the axes, but in the end doesn't accomplish much.
  3. LD_TNEWTON fails.
  4. LN_COBYLA runs for a while; it doesn't do much, but it doesn't fail either.

I tried various others with very little success. I'm in the process of trying LN_BOBYQA at the moment.