It’s here: Home · GLM
coef(reg)[2] / stderror(reg)[2]
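For context, that ratio is just the usual t-statistic of the second coefficient on a fitted model; a minimal sketch with made-up data (the variable names and numbers below are illustrative):

using GLM, DataFrames
df = DataFrame(x = 1:10, y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1, 18.0, 20.2])
reg = lm(@formula(y ~ x), df)
tstat = coef(reg)[2] / stderror(reg)[2]   # slope estimate divided by its standard error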
The only remaining issue is the scale of the output. It looks like the Python values are multiplied by a constant factor > 1 (it seems the final BSADF values in Julia would have to be multiplied by 10 to match). I don't understand this.
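One way to check whether it really is a constant factor would be to compare the two BSADF series elementwise; a rough sketch, assuming both series are available as plain vectors (the names and values below are made up):

# hypothetical placeholders: substitute the actual BSADF series from each implementation
bsadf_jl = [0.12, 0.15, 0.18]
bsadf_py = [1.2, 1.5, 1.8]
ratio = bsadf_py ./ bsadf_jl
println(extrema(ratio))   # a true constant factor shows up as a very narrow range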
However, I wrote the OLS equations down off the top of my head, and they work fine in a toy example:
using LinearAlgebra, GLM
X = [
1 3 4
2 8 1
4 5 2
6 7 2
9 4 3
]
y = [5,4,6,2,3]
Xc = hcat(ones(size(X,1)), X)
# GLM fit (Normal family, identity link ≡ OLS)
regglm = glm(Xc, y, Normal(), IdentityLink())
# plain linear model fit
reglm = fit(LinearModel, Xc, y)
# manual OLS
cvmat = inv(Xc' * Xc)
cf = cvmat * Xc' * y
resid = norm(Xc * cf .- y)                   # sqrt of the residual sum of squares
dof = size(Xc, 1) - size(Xc, 2)              # n - p; equal to 1 in this toy example
se = sqrt.(diag(cvmat)) * resid / sqrt(dof)  # standard errors with the dof correction
println("GLM:\n", regglm)
println("\n\nLM:\n", reglm)
println("\n\nMAN:")
[cf se]
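As a quick sanity check on the Julia side, the three sets of estimates can also be compared directly (this assumes the script above has just been run):

# all three approaches should agree up to floating-point error
@assert coef(regglm) ≈ cf
@assert coef(reglm) ≈ cf
@assert stderror(reglm) ≈ se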
And the same in Python:
import numpy as np
import statsmodels.api as sm

X = np.array([[1, 3, 4], [2, 8, 1], [4, 5, 2], [6, 7, 2], [9, 4, 3]])
y = np.array([5, 4, 6, 2, 3])
reg = sm.OLS(y, sm.add_constant(X))
res = reg.fit()
print(res.params)  # coefficients
print(res.bse)     # standard errors
The coefficients and standard errors are all equal across GLM.jl, the plain LinearModel fit, the manual computation, and statsmodels (res.params / res.bse correspond to coef / stderror).