It isn’t, but I see why it looked that way, sorry. The more complete question should have been: “what is the statistic used to assess the mean of a coefficient in a linear model, from which I can then calculate the standard error?”. I took a stats course at university, but it didn’t cover linear regression. Anyway, I have probably found the answer, and it’s already too advanced for my curiosity-motivated study, so I think I’ll just trust the software library and use the results.
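For reference, what I found (if I understood correctly): the statistic is the t statistic, i.e. each coefficient estimate divided by its standard error, which under the model assumptions follows a t distribution with n − p degrees of freedom. A sketch in Julia with made-up toy data, which should reproduce GLM.jl’s `coeftable`:

```julia
using GLM, DataFrames, Distributions, LinearAlgebra

# Hypothetical toy data, for illustration only.
df = DataFrame(x = 1.0:10.0,
               y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 13.9, 16.2, 18.0, 20.1])
m = lm(@formula(y ~ x), df)

X    = modelmatrix(m)                       # design matrix [1 x]
b    = coef(m)                              # OLS coefficient estimates
n, p = size(X)
s2   = sum(abs2, residuals(m)) / (n - p)    # residual variance estimate
se   = sqrt.(s2 .* diag(inv(X' * X)))       # standard errors of the coefficients
t    = b ./ se                              # t statistics, n - p degrees of freedom
pvals = [2 * ccdf(TDist(n - p), abs(tj)) for tj in t]  # the Pr(>|t|) column

coeftable(m)  # the reported table should match t and pvals computed above
```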
If you move quickly away from Frequentist stats and towards Bayesian stats then the answer is always very simple: everything is derived from the posterior distribution.
The frequentist tests for regression stuff can mostly be seen as approximations to Bayes under some improper prior distribution.
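Concretely (sketching the standard result): with the improper prior $p(\beta, \log\sigma) \propto 1$ in the normal linear model, the marginal posterior of each coefficient is

$$\beta_j \mid y \;\sim\; \hat\beta_j + \mathrm{SE}(\hat\beta_j)\, t_{n-p},$$

a t distribution centered at the OLS estimate with the OLS standard error as its scale, so the 95% posterior credible interval coincides numerically with the frequentist 95% confidence interval.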
I just always do Bayes, but I sometimes do GLM-type stuff and interpret it as a convenient quick approximation of Bayes.
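For instance, a minimal sketch with Turing.jl (a real package; the priors and simulated data below are just illustrative choices):

```julia
using Turing, Random

# A minimal Bayesian linear regression; the priors are illustrative choices.
@model function bayes_lm(x, y)
    a ~ Normal(0, 10)                      # intercept
    b ~ Normal(0, 10)                      # slope
    s ~ truncated(Normal(0, 5), 0, Inf)    # noise scale
    for i in eachindex(y)
        y[i] ~ Normal(a + b * x[i], s)
    end
end

Random.seed!(1)
x = randn(50)
y = 2 .+ 3 .* x .+ 0.5 .* randn(50)        # simulated data, true slope 3

chain = sample(bayes_lm(x, y), NUTS(), 1_000)

# Everything is read off the posterior draws: the analogue of the standard
# error is the posterior standard deviation of b, and e.g.
# mean(chain[:b] .> 0) is the posterior probability that the slope is positive.
```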
To @pdeffebach: thanks, but my question was about the test being performed: are you suggesting that test because it is part of the standard OLS procedure?
To @dlakelan: I still haven’t grasped the difference between frequentist and Bayesian statistics. Is it important for conducting a multiple linear regression? Or can I “just trust Julia” (and, say, accept the parameter when Pr(>|t|) < 0.05, as usual)?
This accept-and-reject stuff is definitely what’s wrong with much of frequentist stats. For example, if you get Pr(>|t|) = 0.07, will you “accept that the slope really is zero”? That is a very poor way to do things. The proper interpretation is rather that you have insufficient information to determine even the sign of the slope. In the real world almost nothing is exactly 0, and a small sample size is no reason to conclude strongly that a parameter actually is 0. Similarly, if one dataset gives p < 0.05 and another gives p > 0.05, it is very wrong to conclude that in condition one the parameter is nonzero and approximately equal to the estimate, while in condition two the parameter is exactly 0, and therefore the estimated difference between the effects is such and such…
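To make that concrete, here is a quick simulation sketch (the sample size and true slope are arbitrary choices of mine): with a true slope of 0.3 and n = 15, p > 0.05 happens in a large share of runs, even though the parameter is never 0.

```julia
using GLM, DataFrames, Distributions, Random, Statistics

Random.seed!(2)
pvals = map(1:1_000) do _
    x = randn(15)
    y = 0.3 .* x .+ randn(15)            # true slope is 0.3, definitely not 0
    m = lm(@formula(y ~ x), DataFrame(x = x, y = y))
    t = coef(m)[2] / stderror(m)[2]      # t statistic for the slope
    2 * ccdf(TDist(dof_residual(m)), abs(t))
end
println("share of runs with p > 0.05: ", mean(pvals .> 0.05))
```

Concluding “the slope is exactly 0” in every one of those runs would be wrong every single time.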
It is worth the effort to avoid falling into the many, many logical fallacies committed by nonspecialists following the usual rituals of Null Hypothesis Significance Testing.
If you have not already had too much standard stats education, you are in a good position to avoid making these mistakes. Perhaps look into Kruschke’s “Doing Bayesian Data Analysis” or some other similarly introductory book, mainly to build up proper intuition for valid inferences rather than the many fallacies.
Thanks for the suggestion! Yes, I know that “not rejecting the null hypothesis” doesn’t mean “the null hypothesis is true”, but you’re right: knowing myself, I would have gotten distracted and assumed it anyway.
My question (a bit too pragmatic, I admit) was: “is this statistic sufficiently solid to trust the usual significance level (0.05) in a normal regression problem?”. The answer to which, I now see, is “it depends”.
One minor thing: what do you mean here?
Because I would have said: “since I have only a basic stats education, I am especially prone to error”.
I mean, it will be easier for you to unlearn the wrong thinking you were taught in one semester than the wrong thinking developed over several years of a stats master’s, etc.