If your group has any questions, feel free to stop by and talk to me!
Monday's "first contact" introduced vector and matrix notation, and a few important operations (e.g., the transpose and the inverse). But the details will be left to a linear algebra course!
Matrices will be important in other models we consider, so let's be glad we meet them early.
How would you adjust things to get the quadratic fit? Let's try!
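Here's a minimal sketch of that adjustment in code (the data below are made up just to have something to fit): going from the linear to the quadratic fit means nothing more than appending a column of $x^2$ values to the design matrix before solving the normal equations.

```python
import numpy as np

# Made-up data; any (x, y) arrays of equal length would do.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.3])

# Linear fit: design matrix with columns [1, x].
# Quadratic fit: just append a column of x**2.
X = np.column_stack([np.ones_like(x), x, x**2])

# Normal equations: (X^T X) beta = X^T y, so beta = (X^T X)^{-1} X^T y.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y

print("coefficients [a, b, c] of y = a + b*x + c*x^2:", beta)
```

(In practice one would call `np.linalg.lstsq(X, y)` rather than forming the inverse explicitly, but the inverse is the matrix that shows up again below when we compute standard errors.)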
There are three things I always consider about a model:
The standard errors of the parameters pop out of the inverse matrix we compute: multiply its diagonal entries by the mean SSE (the sum of squared residuals divided by its degrees of freedom, $n-p$ for $n$ data points and $p$ parameters) and take square roots. Once we have those, we have everything we need for confidence intervals.
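In symbols (writing $X$ for the design matrix, a name chosen here just for illustration):

$$\widehat{\operatorname{Var}}(\hat{\beta}) = s^2 (X^T X)^{-1}, \qquad s^2 = \frac{\mathrm{SSE}}{n-p}, \qquad \operatorname{SE}(\hat{\beta}_j) = \sqrt{s^2 \left[(X^T X)^{-1}\right]_{jj}}.$$

A confidence interval for $\beta_j$ is then $\hat{\beta}_j \pm t^{*} \operatorname{SE}(\hat{\beta}_j)$, where $t^{*}$ is the appropriate critical value of the $t$ distribution with $n-p$ degrees of freedom.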
We're often interested in whether we can exclude a certain value from a confidence interval -- e.g., can we conclude that the slope parameter $b$ in a linear regression $y(x)=a+bx$ is not 0? If the interval for $b$ excludes 0, we conclude that there is a non-zero slope, and the model suggests that $x$ drives $y$ up or down, depending on the sign of $b$.
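Here's a sketch of that check in code (again with made-up data), using the standard-error recipe above:

```python
import numpy as np
from scipy import stats

# Made-up data; swap in your own.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.4, 3.4, 3.3, 4.5, 4.8])

# Linear fit y = a + b*x via the normal equations.
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y              # beta = [a, b]

# Mean SSE: sum of squared residuals over degrees of freedom n - p.
resid = y - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)

# Standard errors: square roots of the diagonal of s^2 * (X^T X)^{-1}.
se = np.sqrt(s2 * np.diag(XtX_inv))

# 95% confidence interval for the slope b.
t_star = stats.t.ppf(0.975, df=n - p)
lo, hi = beta[1] - t_star * se[1], beta[1] + t_star * se[1]
print(f"slope b = {beta[1]:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
print("0 excluded from the CI?", not (lo <= 0 <= hi))
```

If the interval excludes 0, the data support a genuinely non-zero slope at that confidence level.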
Alexander also illustrates that one cannot simply invert the regression equation $y=a+bx$ to get the regression equation $x=\frac{y-a}{b}$: regressing $y$ on $x$ minimizes vertical residuals, while regressing $x$ on $y$ minimizes horizontal residuals, so the two fits generally give different lines (they agree only when the data are perfectly correlated). So it really matters in linear regression which variable is considered "independent" and which is considered "dependent".
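A quick numerical illustration (a sketch with simulated data): fit $y$ on $x$, fit $x$ on $y$, and compare. The two slopes are not reciprocals of one another; their product is $r^2$, so the two lines coincide only when the correlation is perfect.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # true line plus noise

# Regress y on x (least squares minimizes *vertical* residuals).
b_yx, a_yx = np.polyfit(x, y, 1)

# Regress x on y (least squares minimizes *horizontal* residuals).
b_xy, a_xy = np.polyfit(y, x, 1)

print("slope of y-on-x fit:      ", b_yx)
print("1 / (slope of x-on-y fit):", 1 / b_xy)  # a different line!

# The two slopes are linked by the correlation coefficient r:
r = np.corrcoef(x, y)[0, 1]
print("b_yx * b_xy =", b_yx * b_xy, "  r^2 =", r**2)  # equal up to rounding
```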