EDIT: There is now a PR to add this feature to Turing.
This might be a very silly question, but assume the following simple linear model:
using Turing
using DataFrames
using LinearAlgebra
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 0.6, 1, 1.4, 2, 2.8, 3, 3.3, 4, 4.6]
@model function lm(y, x)
    # Variance prior, truncated at zero so σ² stays positive.
    σ² ~ truncated(Normal(0, 100); lower=0)
    # Priors on the intercept and the slope.
    intercept ~ Normal(0, sqrt(3))
    coefficient ~ TDist(3)
    # Linear predictor for each observation.
    mu = intercept .+ x * coefficient
    y ~ MvNormal(mu, σ² * LinearAlgebra.I)
end
One can compute the MAP estimate using Optim.jl (an MLE() option exists as well):
using Optim
map_estimate = optimize(lm(y, x), MAP())
# ModeResult with maximized lp of -0.84
# [0.01558797630961074, 0.025930064026905057, 0.4986792139399884]
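For what it's worth, Turing's mode-estimation results are meant to interoperate with StatsBase, so something along these lines may already give standard errors and a coefficient table (a sketch; I haven't checked the exact output format):

```julia
using Turing, Optim, StatsBase, LinearAlgebra

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 0.6, 1, 1.4, 2, 2.8, 3, 3.3, 4, 4.6]

@model function lm(y, x)
    σ² ~ truncated(Normal(0, 100); lower=0)
    intercept ~ Normal(0, sqrt(3))
    coefficient ~ TDist(3)
    mu = intercept .+ x * coefficient
    y ~ MvNormal(mu, σ² * LinearAlgebra.I)
end

mle_estimate = optimize(lm(y, x), MLE())

# ModeResult implements parts of the StatsBase API, so these should work:
coeftable(mle_estimate)          # estimates, std. errors, z- and p-values
informationmatrix(mle_estimate)  # observed Fisher information
vcov(mle_estimate)               # asymptotic covariance of the estimates
```

The standard errors here come from the curvature of the log likelihood at the mode (Fisher information), i.e. the usual asymptotic ML theory rather than anything Bayesian.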
In the Turing examples, this is mentioned as a means of finding starting values for MCMC. However, I was wondering whether this feature could be used for other purposes, in particular to perform non-Bayesian parameter estimation.
Is it possible to compute, for this Turing model, other quantities traditionally associated with ML estimation, such as confidence intervals, p-values, etc.? Perhaps by implementing a bootstrapping procedure? Thanks for any thoughts!
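To sketch what I mean by bootstrapping: resample the (x, y) pairs with replacement, refit on each resample, and take percentile intervals of the refitted coefficients. Below I use the closed-form least-squares fit as a stand-in for re-running `optimize` on each resample, so the example is stdlib-only (just an illustration of the procedure, not the Turing-based version):

```julia
using Random, Statistics, LinearAlgebra

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 0.6, 1, 1.4, 2, 2.8, 3, 3.3, 4, 4.6]

# Closed-form least-squares fit; a Turing version would instead call
# optimize(lm(yb, xb), MLE()) on each resample.
fit(xb, yb) = [ones(length(xb)) xb] \ yb   # returns [intercept, slope]

rng = MersenneTwister(42)
n = length(x)
B = 2000
slopes = Vector{Float64}(undef, B)
for b in 1:B
    idx = rand(rng, 1:n, n)            # resample pairs with replacement
    slopes[b] = fit(x[idx], y[idx])[2]  # slope of the refitted model
end

# 95% percentile confidence interval for the slope.
ci = quantile(slopes, [0.025, 0.975])
println("bootstrap slope CI ≈ ", ci)
```

The same loop applied to the MAP/MLE call would give a (pairs) bootstrap distribution for every parameter of the Turing model, at the cost of B optimizations.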