I’m working on one of my Data Analysis tutorials. This one involves fitting a nonlinear function to an economic time series.
The basic idea is that I have a radial basis function + some annual periodicity + a step function that handles the recent COVID shock.
I’m first running an optimization to try to get a decent fit, and then using that as the starting point for Bayesian sampling…
ONCE I got a really reasonable fit… but every time since then, this is the kind of thing I see:
This is often after TENS OF MINUTES of optimizing (100k iterations), where red is the model and blue is the data.
I’ve tried various methods with Turing’s optimize, such as LBFGS() and ParticleSwarm(), and in general it does a terrible job.
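For concreteness, the pattern I’m using is roughly the following (a minimal sketch with a toy model standing in for my actual RBF + seasonality + shock model; the data and priors here are placeholders):

```julia
using Turing, Optim

# Toy stand-in for my actual RBF + seasonality + shock model.
@model function toy(y)
    μ ~ Normal(0, 10)
    σ ~ truncated(Normal(0, 5); lower = 0)
    y .~ Normal(μ, σ)
end

m = toy(randn(200) .+ 3)

# Mode estimation through Turing's Optim interface; swapping the
# optimizer (LBFGS, ParticleSwarm, NelderMead, ...) hasn't helped on my model.
map_lbfgs  = optimize(m, MAP(), LBFGS())
map_pswarm = optimize(m, MAP(), ParticleSwarm())
```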
Turing also doesn’t sample well at all; it usually gets stuck out in the weeds, just like the optimization.
I assume this is probably because of local optima. Is there a way to use BlackBoxOptim with Turing models? How about supplying an initial condition? If I supplied 0 for the RBF component, it’d be way closer to the optimum than even the result of these minutes of computing.
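To frame the question, I’m imagining something like this, assuming the log posterior can be pulled out via DynamicPPL’s LogDensityFunction (the names and signatures here are my guesses, and constrained parameters like σ would presumably need a transform or a restricted search range):

```julia
using Turing, DynamicPPL, LogDensityProblems, BlackBoxOptim

# Wrap the model's log posterior so BlackBoxOptim can minimize it.
ℓ = DynamicPPL.LogDensityFunction(m)                  # m = the Turing model above
neglogpost(θ) = -LogDensityProblems.logdensity(ℓ, θ)

res = bboptimize(neglogpost;
                 SearchRange   = (-10.0, 10.0),
                 NumDimensions = LogDensityProblems.dimension(ℓ),
                 MaxTime       = 30.0)
θ0 = best_candidate(res)   # candidate starting point to hand back to Turing
```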
Also, it’d be useful to take the result of one of these optimizations, perturb it, and use that as a new starting point for another run. Is there a way to specify an initial value?
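Something like this perturb-and-restart loop is what I’m after (a sketch only: I’m assuming optimize takes an initial vector positionally, that the returned ModeResult exposes its log probability as lp, and that sample accepts initial_params — it may be init_params on older Turing versions):

```julia
using Turing, Optim, Random

# Perturb-and-restart: jitter the incumbent mode and re-optimize, keeping
# whichever result has the higher log probability.
function refine(m, θ0; rounds = 20, scale = 0.1)
    best = optimize(m, MAP(), θ0, LBFGS())
    for _ in 1:rounds
        jitter = scale .* randn(length(θ0))
        cand = optimize(m, MAP(), best.values.array .+ jitter, LBFGS())
        cand.lp > best.lp && (best = cand)
    end
    return best
end

best = refine(m, θ0)                # θ0 e.g. from the BlackBoxOptim run above
chain = sample(m, NUTS(), 1_000; initial_params = best.values.array)
```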
EDIT: Note that the model is linear in every parameter except the location of the shock, and all the priors are normal except the prior on the standard deviation of the error, so it feels like it should be relatively easy to optimize. I don’t understand why it’s struggling so hard.
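(To illustrate why I expected this to be easy: conditioning on the shock location t0 reduces everything else to linear least squares, so a brute-force profile over t0 alone should in principle find the mode. X below is a placeholder for the design-matrix builder.)

```julia
using LinearAlgebra

# Hypothetical illustration: fixing the shock location t0 makes the model
# linear, so the remaining coefficients have a closed-form least-squares
# solution. X(t, t0) builds the design matrix from the RBF, seasonal, and
# step regressors (placeholder).
function profile_fit(t, y, X; t0grid = range(extrema(t)...; length = 200))
    best_t0, best_sse, best_β = first(t0grid), Inf, Float64[]
    for t0 in t0grid
        A = X(t, t0)          # design matrix given shock location t0
        β = A \ y             # linear least squares
        sse = sum(abs2, y .- A * β)
        sse < best_sse && ((best_t0, best_sse, best_β) = (t0, sse, β))
    end
    return best_t0, best_β
end
```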