Which parameter estimation technique should I learn?

I would like to ask the Julia community which parameter estimation package I should learn/use for the following problem:

I have defined an ODE that has over 20 fixed parameters, and there are 4 I would like to optimise. I get 4 other variables as output. The time span runs from 0 to 120, and I have some data containing the actual solution for 2 of those output variables at every 30 time units (i.e. 0, 30, 60, 90, 120). I would like to fit my solution to this data. Which Julia package would be my best option?

DiffEqFlux seems interesting, but is having only 5 datapoints enough to estimate the parameters? Or should I look more into simpler techniques like lsoptim?

Certainly consider using Turing; given that you have very few datapoints, the uncertainty estimates will surely be helpful.

There's an example of how to combine the two in the Turing documentation. Using that code as a starting point, you can get going pretty quickly.
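For orientation, here is a condensed sketch in the spirit of that tutorial's fitlv model. The two-state ODE, the priors, and the names (f!, fit_ode) are hypothetical placeholders; your version would keep the ~20 known parameters fixed inside the ODE and put priors only on the 4 you want to estimate.

using DifferentialEquations, Turing, LinearAlgebra

# Hypothetical stand-in ODE with two states and two unknown parameters.
function f!(du, u, p, t)
    a, b = p
    du[1] = a * u[1] - u[1] * u[2]
    du[2] = b * u[1] * u[2] - u[2]
end

prob = ODEProblem(f!, [1.0, 1.0], (0.0, 120.0), [1.5, 1.0])

@model function fit_ode(data, prob)
    # Priors on the noise scale and the unknown ODE parameters.
    σ ~ InverseGamma(2, 3)
    a ~ truncated(Normal(1.5, 0.5); lower=0)
    b ~ truncated(Normal(1.0, 0.5); lower=0)

    # Solve only at the observation times 0, 30, ..., 120.
    predicted = solve(prob, Tsit5(); p=[a, b], saveat=30.0)

    # Likelihood: each observed column is the prediction plus Gaussian noise.
    for i in 1:length(predicted)
        data[:, i] ~ MvNormal(predicted[i], σ^2 * I)
    end
end

# data should be a 2×5 matrix: 2 observed variables at 5 time points.
chain = sample(fit_ode(data, prob), NUTS(), 1000)

With only 5 observation times per variable the posterior will be wide, but that width is exactly the uncertainty information a point estimate would not give you.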

So I have looked at some of the tutorials on the Turing page, but there are a few usages I do not fully understand.


In the fitlv model from that tutorial, what does σ represent? It is not part of the differential equation. Also, what is the use of the for loop within the fitlv function?
Finally, what does the capital I variable represent? I don't see where it was initialised.

Thanks in advance.


I comes from the LinearAlgebra standard library module (i.e., using LinearAlgebra brings I into scope), and in this context it represents the identity matrix.
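A quick REPL illustration (my own example, not from the tutorial):

using LinearAlgebra

I                      # UniformScaling: a size-agnostic identity
σ = 0.5
σ^2 * I                # a lazy diagonal covariance with σ² on the diagonal
Matrix(σ^2 * I, 2, 2)  # materialised: [0.25 0.0; 0.0 0.25]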

I've never used (or looked at) Turing.jl before, but based on my knowledge of statistics/estimation, it looks like

for i in 1:length(predicted)
    data[:, i] ~ MvNormal(predicted[i], σ^2 * I)
end

is adding i.i.d. (independent and identically distributed) Gaussian noise with standard deviation σ to each of the predicted values (hence the loop). It also looks like the problem treats the noise standard deviation as unknown, so it is estimated jointly with the other model parameters.

Probably someone with Turing.jl experience should confirm if what I stated is correct.
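In the meantime, one way to sanity-check that reading: with covariance σ^2 * I the components are independent, so the joint log-density should factorise into per-component univariate Normal log-densities. A small check with made-up values:

using Distributions, LinearAlgebra

μ = [1.0, 2.0]   # a predicted state
σ = 0.5          # noise standard deviation
x = [1.2, 1.7]   # an observed data column

# With σ²I the joint log-density equals the sum of independent
# univariate Normal log-densities, one per component:
logpdf(MvNormal(μ, σ^2 * I), x) ≈ sum(logpdf.(Normal.(μ, σ), x))  # true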


The for loop specifies how each entry in the data is modelled. The model says that the output of the Lotka-Volterra model, when using the current parameters p, is probably close to the real data, up to noise that is normally distributed with variance σ^2. The higher the variance needs to be to make predicted a likely outcome, the worse the parameters p are. If σ^2 can be very small while the model still accurately predicts the data, then our parameters are very good and very likely to be accurate.

Note that when I say "accurately predicting the data", I'm not saying that our predictions would change if we changed σ^2 ceteris paribus; it would just change how likely the prediction is compared to the data.
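To make that concrete with made-up numbers: the prediction is identical in both calls below, and only the likelihood assigned to the mismatch depends on σ.

using Distributions, LinearAlgebra

predicted = [1.0, 2.0]   # model output for some fixed parameters p
observed  = [1.3, 1.6]   # the data it is compared against

# The prediction does not depend on σ; the log-likelihood does:
logpdf(MvNormal(predicted, 0.1^2 * I), observed)  # ≈ -9.73: mismatch very unlikely
logpdf(MvNormal(predicted, 1.0^2 * I), observed)  # ≈ -1.96: mismatch plausible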

Take a look at our SIR model repository - there are four tutorials (including Turing.jl) for fitting ODEs. The 'best' approach is very model-dependent, so it is worth trying several.
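If you also want a plain least-squares baseline to compare against, a minimal sketch (my own, not one of the four tutorials; it reuses the hypothetical prob and 2×5 data from the sketch above) could look like:

using DifferentialEquations, Optimization, OptimizationOptimJL

# Sum of squared errors between the solution at the observation
# times and the data matrix.
function loss(p, _)
    sol = solve(prob, Tsit5(); p=p, saveat=30.0)
    return sum(abs2, Array(sol) .- data)
end

optprob = OptimizationProblem(OptimizationFunction(loss), [1.5, 1.0])
result = solve(optprob, NelderMead())
result.u  # point estimates only, with no uncertainty attached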