In the original problem I said that you should be able to reproduce any mark on the stick -- now's your chance to show that you can.
Again, the main advantage of this approach is that it can be easily generalized to models containing many parameters.
Then we'll check the Keeling results by hand, using some linear algebra in Mathematica, to see that we get the same fit to Stewart's Keeling data as we did using the built-in LinearModelFit.
How would you adjust things to get the quadratic fit? Let's try!
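Here's a minimal sketch of both computations. The variable names and the stand-in data values are mine, not from the course files -- the values are approximate Mauna Loa annual means, just enough to run the comparison.

```mathematica
(* A minimal sketch, assuming the data sits in a list of {year, CO2} pairs.
   The values below are approximate Mauna Loa annual means, stand-ins for
   the Keeling data used in class. *)
keeling = {{1960, 316.9}, {1970, 325.7}, {1980, 338.8},
           {1990, 354.4}, {2000, 369.5}};

xs = keeling[[All, 1]] - 1980;  (* center the years for conditioning *)
ys = keeling[[All, 2]];
data = Transpose[{xs, ys}];

(* Linear fit by the normal equations: beta = (X^T X)^-1 X^T y *)
X = Transpose[{ConstantArray[1, Length[xs]], xs}];
beta = Inverse[Transpose[X] . X] . Transpose[X] . ys  (* {a, b} *)

(* The built-in routine should agree *)
lm = LinearModelFit[data, x, x];
lm["BestFitParameters"]

(* Quadratic fit: just add an x^2 column to the design matrix *)
Xq = Transpose[{ConstantArray[1, Length[xs]], xs, xs^2}];
betaq = Inverse[Transpose[Xq] . Xq] . Transpose[Xq] . ys
LinearModelFit[data, {x, x^2}, x]["BestFitParameters"]
```

Notice that the quadratic fit requires no new machinery at all: we just add a column of $x^2$ values to the design matrix (or the basis function $x^2$ to LinearModelFit's list), which is exactly the sense in which this approach generalizes to many parameters.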
There are several things I always consider when creating a regression model, the first being whether the data shows any pattern at all: if I see no pattern, then I quit; otherwise, I choose a model and fit it.
The standard errors of the parameters pop out of the inverse matrix we compute: the variance of each estimate is the corresponding diagonal entry of $(X^TX)^{-1}$, multiplied by the mean squared error $s^2 = \mathrm{SSE}/(n-p)$, and the standard error is the square root of that. Once we have those, we have everything we need for confidence intervals.
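Continuing the sketch above, here's that recipe in Mathematica; LinearModelFit exposes the same numbers through its "ParameterErrors" property, so we can check ourselves.

```mathematica
(* Standard errors, continuing from the linear fit above:
   Var(beta_i) = s^2 * [(X^T X)^-1]_ii, with s^2 = SSE/(n - p). *)
n = Length[ys]; p = 2;                       (* 5 points, 2 parameters *)
resid = ys - X . beta;                       (* residuals *)
s2 = (resid . resid)/(n - p);                (* mean squared error *)
se = Sqrt[s2*Diagonal[Inverse[Transpose[X] . X]]]

lm["ParameterErrors"]                        (* built-in check *)
```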
We're often interested in whether we can exclude a certain value from a confidence interval -- e.g., can we conclude that the slope parameter $b$ in a linear regression $y(x)=a+bx$ is not 0? If the interval for $b$ excludes 0, we conclude that the slope is non-zero, and the model suggests that $x$ drives $y$ up or down, depending on the sign of $b$.
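As a sketch of that check, continuing from the code above: build the 95% interval for $b$ from the t-quantile with $n-p$ degrees of freedom and see whether 0 falls inside. LinearModelFit's "ParameterConfidenceIntervals" property gives the same intervals.

```mathematica
(* 95% confidence interval for the slope b, continuing from above.
   If the interval excludes 0, we reject b = 0 at the 5% level. *)
tstar = Quantile[StudentTDistribution[n - p], 0.975];
{beta[[2]] - tstar*se[[2]], beta[[2]] + tstar*se[[2]]}

lm["ParameterConfidenceIntervals"]           (* built-in check *)
```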