Weighted linear regression with confidence interval fitted to error bars

As noted in the GLM.jl documentation quoted above, `glm` does not handle inverse-variance weighting.

For this purpose you can use LsqFit.jl instead, passing the inverse-variance weights directly to `curve_fit`:

```julia
using DataFrames, LsqFit, Printf, Plots; gr()

df = DataFrame(x = [0.0, 0.0669873, 0.25, 0.5, 0.75, 0.933013, 1.0],
               y = [0.223, 0.291, 0.393, 0.549, 0.73, 0.85, 0.896],
               u_y = [0.023, 0.024, 0.027, 0.031, 0.037, 0.041, 0.043])  # 1σ uncertainties on y

x, y = df.x, df.y
wt = 1 ./ df.u_y .^ 2              # inverse-variance weights

p0 = [0.5, 0.5]                    # initial guess for the parameters
m(x, p) = p[1] .+ p[2] * x         # linear model: p[1] = intercept, p[2] = slope
fit = curve_fit(m, x, y, wt, p0)

cf = coef(fit)
ci = confidence_interval(fit, 0.05)    # 95% confidence intervals (5% significance level)

# Report each parameter as estimate +/- half-width of its confidence interval
str = @sprintf("Y = (%.2f +/- %.2f) + (%.2f +/- %.2f)*X",
               cf[1], diff([ci[1]...])[1]/2, cf[2], diff([ci[2]...])[1]/2)

# Envelope of the fitted line over the parameter confidence intervals
tl, bl = ci[1][1] .+ ci[2][2]*x,   ci[1][2] .+ ci[2][1]*x
σp, σm = maximum([tl bl], dims=2) .- m(x, cf),   m(x, cf) .- minimum([tl bl], dims=2)

plot(x, cf[1] .+ cf[2]*x, color=:lightblue, ribbon=(σp, σm), label=str)
plot!(x, cf[1] .+ cf[2]*x, color=:blues, lw=1, label=false, xlabel="X", ylabel="Y")
scatter!(x, y, ms=3, label=false, mc=:blue, yerror=df.u_y, legend=:topleft)
```
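Since the model is linear in the parameters, the `curve_fit` result can be cross-checked against the closed-form weighted least-squares solution of the normal equations, `(XᵀWX)β = XᵀWy`. A minimal sketch (data repeated from above; the variable names `X`, `W`, `β` are mine, not from the post):

```julia
using LinearAlgebra

# Same data as in the post above
x  = [0.0, 0.0669873, 0.25, 0.5, 0.75, 0.933013, 1.0]
y  = [0.223, 0.291, 0.393, 0.549, 0.73, 0.85, 0.896]
wt = 1 ./ [0.023, 0.024, 0.027, 0.031, 0.037, 0.041, 0.043] .^ 2

X = [ones(length(x)) x]     # design matrix: column of ones (intercept) and x (slope)
W = Diagonal(wt)            # diagonal inverse-variance weight matrix

# Solve the weighted normal equations; β[1] is the intercept, β[2] the slope,
# and both should agree with coef(fit) from curve_fit.
β = (X' * W * X) \ (X' * W * y)
```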

[Plot: weighted linear fit with confidence ribbon and data error bars]
