MAP optimization in Turing with LBFGS doesn't move at all

Oh interesting. I’m using NLopt through the Turing interface, which uses Optimization.jl underneath.

I’m tearing my hair out a little… Yes, it’s taking a sample of my data and then fitting the model, so the sample can certainly change between runs. But with a “good” seed it seems not to matter which sample size I use, and since a different size means different random numbers are drawn anyway, it’s just confusing as heck… I guess random numbers can be that way. I’ll look into it again.

If your problem is stochastic, all bets are off. Those methods aren’t designed for stochastic objectives at all.

No, it’s not stochastic (in the sense that the objective changes during the optimization). It’s just that I take my big sample of data, subsample it, and then try to fit the model. So if the seed matters and the optimization method is deterministic, that suggests the chosen subsample determines whether the optimizer can solve the problem or just gives up at the initial conditions. This makes everything harder to figure out :sweat_smile:
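One way to isolate this is to draw the subsample once, up front, with an explicit RNG, so the objective the optimizer sees is fully deterministic and the only run-to-run variation is which rows were drawn. A minimal sketch of that pattern (the model, data names, and sizes here are hypothetical, not from the thread; depending on your Turing version the last call may instead be `optimize(model, MAP(), LBFGS())`):

```julia
using Turing, Optim, Random

# Hypothetical data: pretend big_y is the full dataset being subsampled.
big_y = randn(10_000) .+ 3.0

# Subsample ONCE with an explicit RNG. After this point nothing is random,
# so the seed only controls WHICH rows end up in the objective.
rng = MersenneTwister(42)
idx = randperm(rng, length(big_y))[1:500]
y   = big_y[idx]   # fixed subsample; does not change during optimization

@model function demo(y)
    μ ~ Normal(0, 10)
    σ ~ truncated(Normal(0, 5); lower=0)
    y .~ Normal(μ, σ)
end

# LBFGS is deterministic, so if two seeds give different results here,
# the difference is attributable entirely to the subsample drawn above.
map_est = maximum_a_posteriori(demo(y), LBFGS())
```

Rerunning with a few different seeds then makes it easy to see whether some subsamples genuinely put the optimizer in a region where it stalls at the initial conditions.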