Thanks for the tips! Using save_idxs=1
helped a lot to reduce allocations and increased performance by almost 40%. Now @time
gives
1.000711 seconds (84.00 k allocations: 485.321 MiB, 1.23% gc time)
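For reference, a minimal sketch of the kind of call I'm timing (the system here is a stand-in Lotka–Volterra model, not my actual one, and the parameter values are placeholders):

```julia
using DifferentialEquations

# Stand-in ODE system (hypothetical; my real model differs)
function f!(du, u, p, t)
    a, b, c, d = p
    du[1] =  a * u[1] - b * u[1] * u[2]
    du[2] = -c * u[2] + d * u[1] * u[2]
end

u0   = [1.0, 1.0]
p    = (1.5, 1.0, 3.0, 1.0)
prob = ODEProblem(f!, u0, (0.0, 10.0), p)

# save_idxs = 1 stores only the first state as a scalar at each save point,
# so sol.u is a Vector{Float64} instead of a vector of state vectors
sol = solve(prob, Tsit5();
            save_idxs = 1,
            saveat    = range(0.0, 10.0; length = 10_000))
```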
I plan to use MCMC to sample from the parameters' joint posterior to do full-scale UQ, so for now I'll stick with the ODE approach, at least to obtain reference results with as little bias as possible.

In the batch-likelihood approach, do I just evaluate the likelihood only at the batch points and otherwise do everything the same, or do I have to account for the batching in the likelihood somehow?
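To make the question concrete, here is roughly what I have in mind (the names are placeholders, and the rescaling factor is my guess at how the batch would have to be accounted for so the estimate stays unbiased):

```julia
using Random

# Sketch of a batch log-likelihood: draw a random subset of observation
# indices and sum i.i.d. Gaussian log-density terms only over that subset.
# obs, pred, σ are hypothetical stand-ins for my data, model output, noise scale.
function batch_loglik(obs, pred, σ, batchsize; rng = Random.default_rng())
    idxs = randperm(rng, length(obs))[1:batchsize]
    # rescale so the batch sum is an unbiased estimate of the full sum
    scale = length(obs) / batchsize
    return scale * sum(-0.5 * ((obs[i] - pred[i]) / σ)^2 - log(σ) - 0.5 * log(2π)
                       for i in idxs)
end
```

With `batchsize == length(obs)` this reduces to the ordinary full-data log-likelihood, which is the sanity check I'd use.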
How about the spline method? It seems that the method fits a curve to the smoothed data and adds an extra penalty term measuring disagreement with the ODE right-hand side. To me it sounds like the likelihood of the spline method would be quite different, or at least it would be difficult to express the posterior in the form P(obs | param)P(param), where the likelihood P(obs | param) is the same as in the ODE approach.
Nevertheless, solving this example system 1000 times at 10,000 points in one second is already a good result.