Last time: | Next time: |
Today:
Last time we showed that we need around 600 points ($\left\lceil\sqrt{\frac{e}{8 \times 10^{-6}}}\right\rceil=583$ subintervals) on the interval $[0,1]$ to compute $e^x$ with an error of at most $10^{-6}$.
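A quick sanity check of that count in Python (a sketch; the piecewise-linear interpolation error bound $h^2 M/8$, with $M$ a bound on $|f''|$, is the assumption behind the formula):

```python
import math

# Piecewise-linear interpolation error on [0, 1] with n equal subintervals:
#   |f(x) - p(x)| <= h^2 * M / 8,  where h = 1/n and M = max |f''|.
# For f(x) = e^x on [0, 1], M = e.  Solve h^2 * M / 8 <= 1e-6 for n = 1/h.
tol = 1e-6
M = math.e
n = math.ceil(math.sqrt(M / (8 * tol)))
print(n)  # 583
```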
Now, we've got something cool: a really good approximation of $e^x$ for all $x$ on [0,1]. Let's do some more:
Here's another picture of how we might build one of these.
Question: if we don't have a function, e.g. just data, how might we proceed?
I can think of several ways of approaching the fitting of this data with the Hermite interpolant. How might you go about it?
You might not be surprised to learn that about the easiest formulation is Newton's, only we're going to repeat some abscissae:
Since we'll need this over and over, we should go ahead and simplify those divided difference coefficients, using the assumption that $f'$ is known (use the recurrence relation at the top of p. 215 to do so).
This will have the effect of fitting the two endpoints, and the derivatives of $f$ at the two endpoints.
I'd like to discuss my implementation of this in Mathematica.
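My implementation is in Mathematica; here is a minimal sketch of the same construction in Python (the function name `hermite_cubic` and its interface are my own). Each abscissa is repeated, and the first-order divided difference at a repeated node is seeded with $f'$ there:

```python
def hermite_cubic(x1, x2, f1, f2, df1, df2):
    """Newton-form cubic matching f and f' at x1 and x2.

    Uses divided differences over the repeated abscissae [x1, x1, x2, x2];
    at a repeated node the first-order divided difference is defined as f'.
    """
    z = [x1, x1, x2, x2]                       # repeated abscissae
    # First-order divided differences: f' at repeated nodes, quotient otherwise.
    d1 = [df1, (f2 - f1) / (x2 - x1), df2]
    d2 = [(d1[1] - d1[0]) / (x2 - x1), (d1[2] - d1[1]) / (x2 - x1)]
    d3 = (d2[1] - d2[0]) / (x2 - x1)
    coeffs = [f1, d1[0], d2[0], d3]            # Newton coefficients

    def p(x):
        # Horner-style evaluation of the Newton form.
        result = coeffs[3]
        for c, zi in zip(coeffs[2::-1], z[2::-1]):
            result = result * (x - zi) + c
        return result
    return p

# Example: cubic Hermite interpolant of e^x on [0, 1].
import math
p = hermite_cubic(0.0, 1.0, 1.0, math.e, 1.0, math.e)
print(p(0.5))  # close to e^0.5, about 1.6487
```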
What error are we making when we use Hermite interpolation? Proposition 5.6 gives us the answer (p. 219):
which is equal to a fourth derivative times the product of the node factors:
$$f(x) - H(x) = \frac{f^{(4)}(\xi)}{4!}\,(x-x_1)^2(x-x_2)^2,$$
where $\xi$ is some unknown number between the minimum and maximum of $\{x_1,x_2,x\}$.
While my Hermite spline does the job (approximating sine for all real numbers), one could do the same with Taylor polynomials.
You won't be surprised to learn that the Taylor Series is the key to analyzing some of the most obvious choices.
Our objective is "adaptive quadrature", section 7.5. If you understand how to do that, I'll be happy! It's very cool.
The good news is that some of this will already be familiar; the analysis part will probably be new, however.
The authors define the truncation error as the difference between the approximation and the true value; for the forward-difference approximation of $f'(a)$,
$$\tau(h) = \frac{f(a+h)-f(a)}{h} - f'(a),$$
and the power of $h$ in this error will be called the order of accuracy.
The text illustrates this in Figure 7.1, p. 256, as the slope of a secant line approximation to the slope of the tangent line (the derivative at $x=a$).
We explore the consequences using the Taylor series expansion
$$f(a+h) = f(a) + h f'(a) + \frac{h^2}{2} f''(\xi).$$
What is the order of accuracy of this method?
We call this a forward difference, because we think of $h$ as a positive thing. It doesn't have to be, however; we define the backward-difference formula (obviously closely related) as
$$f'(a) \approx \frac{f(a)-f(a-h)}{h},$$
which comes out of
$$f'(a) = \lim_{h \to 0} \frac{f(a)-f(a-h)}{h},$$
which we can again attack using Taylor series polynomials. What is the order of accuracy of this method?
We can also derive the centered-difference formula
$$f'(a) \approx \frac{f(a+h)-f(a-h)}{2h}$$
as an average of the two lop-sided schemes! We might then average their error terms to get its error term. However, what do you notice about that error term?
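A quick numerical comparison of the forward, backward, and centered (averaged) differences (a sketch; the test function $\sin$ and base point $a=1$ are my choices), halving $h$ to expose the orders of accuracy:

```python
import math

def forward(f, a, h):  return (f(a + h) - f(a)) / h
def backward(f, a, h): return (f(a) - f(a - h)) / h
def centered(f, a, h): return (f(a + h) - f(a - h)) / (2 * h)

f, a, exact = math.sin, 1.0, math.cos(1.0)
for h in [0.1, 0.05, 0.025]:
    errs = [abs(s(f, a, h) - exact) for s in (forward, backward, centered)]
    print(f"h={h:<6} fwd={errs[0]:.2e}  bwd={errs[1]:.2e}  ctr={errs[2]:.2e}")
# Halving h roughly halves the forward/backward errors (order 1)
# but roughly quarters the centered error (order 2).
```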
Let's think about computing with error.
It will turn out that the total error of a scheme like these will look like
$$E(h) = \frac{Mh}{2} + \frac{2\epsilon}{h},$$
where $M>0$ is a bound on the derivative of interest on the interval of interest (the second derivative in this case), and $\epsilon>0$ is a bound on the size of the round-off error in each function evaluation.
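For the forward difference, the standard bound $E(h) = Mh/2 + 2\epsilon/h$ is minimized at $h^* = 2\sqrt{\epsilon/M}$. A sketch of the trade-off (the test function and base point are my choices; $\epsilon$ is taken as double-precision unit round-off):

```python
import math

def forward(f, a, h):
    return (f(a + h) - f(a)) / h

# Total error bound for the forward difference with round-off eps:
#   E(h) = M*h/2 + 2*eps/h,  minimized at h* = 2*sqrt(eps/M).
eps = 2.2e-16          # double-precision unit round-off (assumption)
M = 1.0                # bound on |f''| for f = sin near a = 1
h_star = 2 * math.sqrt(eps / M)   # about 3e-8

for h in [1e-4, h_star, 1e-12]:
    err = abs(forward(math.sin, 1.0, h) - math.cos(1.0))
    print(f"h={h:.1e}  error={err:.1e}")
# Too-large h: truncation error dominates; too-small h: round-off dominates;
# the error is smallest near h*.
```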