7.3 Testing Individual Coefficients
Another standard test in regression analysis is to test whether a coefficient is significantly different from 0, which would indicate a significant relationship between the corresponding covariate and the response. (This does NOT indicate a causal relationship, only a significant association.)
Let’s test \(H_0: \beta_1 = 0\) versus \(H_a: \beta_1 \neq 0\). Note this is a two-tailed test. Recall that if the null hypothesis holds, then \[\frac{\hat{\beta}_1}{\text{se}\left({\hat\beta_1}\right)} \sim t_{n-2}\] where \(\text{se}\left(\hat\beta_1\right)\) is the standard error of \(\hat\beta_1\) and \(t_{n-2}\) denotes the \(t\)-distribution with \(n-2\) degrees of freedom.
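To make this concrete, here is a minimal R sketch that computes the t-ratio by hand and compares it with what R reports. The data frame `dat` and its columns are simulated stand-ins, not the lab's actual data; only the predictor name `Y.2012` is taken from the exercise.

```r
## Hypothetical data standing in for the lab's data set
set.seed(1)
dat <- data.frame(Y.2012 = rnorm(50))
dat$response <- 2 + 0.5 * dat$Y.2012 + rnorm(50)

fit <- lm(response ~ Y.2012, data = dat)

## Manual t-ratio: the estimate divided by its standard error
beta1_hat <- coef(fit)["Y.2012"]
se_beta1  <- coef(summary(fit))["Y.2012", "Std. Error"]
t_stat    <- beta1_hat / se_beta1

## Two-tailed p-value from the t distribution with n - 2 df
n     <- nrow(dat)
p_val <- 2 * pt(abs(t_stat), df = n - 2, lower.tail = FALSE)

## These should match the "t value" and "Pr(>|t|)" columns
## of coef(summary(fit))
c(t = unname(t_stat), p = unname(p_val))
```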
Looking at the regression summary output:

- What are the results of this test for our one predictor `Y.2012`?
- What is the relationship between the \(t\)-statistic for this coefficient and the \(F\)-statistic for the omnibus test? (See the sketch after this list for a numerical check.)
- Recalling our earlier discussion of how to interpret the value of the intercept, note the statistical test result for it. What null hypothesis is being tested for the intercept? What does the result indicate?
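One way to explore these questions numerically is to pull the relevant numbers out of the summary object, as in the sketch below (continuing with the hypothetical `fit` from above): the slope's \(t\)-statistic, the omnibus \(F\)-statistic, and the intercept's test can then be compared directly.

```r
## Continuing with the hypothetical fit from the previous sketch
s <- summary(fit)

## t-statistic for the single slope, and the omnibus F-statistic
t_slope <- coef(s)["Y.2012", "t value"]
F_stat  <- unname(s$fstatistic["value"])

## Compare t^2 with F to answer the second question above
c(t_squared = t_slope^2, F = F_stat)

## The intercept row: its "t value" and "Pr(>|t|)" test H0: beta_0 = 0
coef(s)["(Intercept)", ]
```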