Bit of a teaser/thought for the day: we have to be careful not to confound "numerical analysis" and "programming".
You have various "implementations" of bisection, but the idea behind bisection is well-defined on its own: one can conduct bisection as a "thought experiment", without any code at all.
How many iterations of bisection are required to know which root bisection will converge to for $f(x)=(x+1)x(x-1)$ on the interval $[-\sqrt{2}, \pi/2]$?
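Here is a minimal sketch of that thought experiment in code (the function and the loop are my own, not any particular implementation from class); it tracks which of the roots $-1, 0, 1$ remain inside the bracket at each step.

```python
# A minimal bisection trace (a sketch, not any particular implementation):
# watch how many steps it takes before the bracket contains exactly one root.
import math

def f(x):
    return (x + 1) * x * (x - 1)

a, b = -math.sqrt(2), math.pi / 2
for step in range(1, 6):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:   # sign change on [a, m]: keep the left half
        b = m
    else:                  # otherwise the sign change is on [m, b]
        a = m
    inside = [r for r in (-1, 0, 1) if a < r < b]
    print(step, round(a, 4), round(b, 4), "roots bracketed:", inside)
# After step 1 the bracket is about [0.0783, 1.5708], which contains only
# x = 1, so we already know which root bisection will converge to.
```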
Notice that I've also taken care of the problem we saw last time, through a log identity: $\ln x = -\ln(1/x)$. I observed that my fake $\ln$ seemed well behaved on $(0,1]$. Can I calculate $\ln$ well on the interval $(0,1]$ and use that identity to get values of $\ln(x)$ when $x>1$?
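A sketch of that idea, with a stand-in for the "fake $\ln$" (I'm using a truncated Mercator series here, not the actual routine from last time):

```python
# Extending a log routine from (0, 1] to all x > 0 via ln(x) = -ln(1/x).
# "fake_ln" is a stand-in (a truncated Mercator series), NOT the actual
# fake ln from last time.
import math

def fake_ln(x, terms=200):
    """ln(1 + t) = t - t^2/2 + t^3/3 - ... with t = x - 1; fine on (0, 1],
    though the series converges slowly as x -> 0."""
    t = x - 1.0
    return sum((-1) ** (k + 1) * t ** k / k for k in range(1, terms + 1))

def my_ln(x):
    if x <= 0:
        raise ValueError("ln is undefined for x <= 0")
    return fake_ln(x) if x <= 1 else -fake_ln(1.0 / x)  # the log identity

for x in (0.5, 2.0, 10.0):
    print(x, my_ln(x), math.log(x))  # compare against the library ln
```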
You might notice how very few cubics we need to do a really great job approximating $e^x$, compared to the number of linear functions.
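A back-of-the-envelope count supports this. Assuming the standard piecewise-linear interpolation bound $Mh^2/8$ (not derived in these notes) and the Hermite bound $Mh^4/384$ derived just below, with $M = e$ bounding the relevant derivatives of $e^x$ on $[0,1]$:

```python
# Piece counts on [0, 1] for error <= 1e-6 (a sketch; the bounds are the
# standard M h^2/8 for piecewise-linear and M h^4/384 for cubic Hermite).
import math

M, tol = math.e, 1e-6                 # e bounds every derivative of e^x on [0, 1]
h_lin = math.sqrt(8 * tol / M)        # from M h^2 / 8 <= tol
h_cub = (384 * tol / M) ** 0.25       # from M h^4 / 384 <= tol
print(math.ceil(1 / h_lin), "linear pieces vs", math.ceil(1 / h_cub), "cubics")
# roughly 600 linear pieces vs about 10 cubics
```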
What error are we making when we use Hermite interpolation? Proposition 5.6 gives us the answer (p. 219): the error $f(x) - H_3(x)$ is equal to a derivative times some stuff,
$$ f(x) - H_3(x) = \frac{f^{(4)}(\xi)}{4!}\,(x-x_1)^2(x-x_2)^2, $$
where $\xi$ is some unknown number between the minimum and maximum of $\{x_1,x_2,x\}$. The factor $(x-x_1)^2(x-x_2)^2$ will be maximized in the middle, as we have seen, where it equals $(h/2)^2(h/2)^2 = h^4/16$ for nodes spaced $h$ apart; so a bound on the error for the Hermite spline approximation to $e^x$ on a piece $[x_1,x_2]$ is
$$ |e^x - H_3(x)| \le \frac{e^{x_2}\,h^4}{384}. $$
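A quick numerical check of this bound on one piece (a sketch; the node spacing and test interval are my own choices):

```python
# Checking the Hermite error bound e^{x2} h^4 / 384 on one piece (a sketch).
import math

def hermite_cubic(x1, x2, f, df, x):
    """Two-point cubic Hermite interpolant of f on [x1, x2], evaluated at x."""
    h = x2 - x1
    t = (x - x1) / h
    h00 = 2*t**3 - 3*t**2 + 1   # standard cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return f(x1)*h00 + h*df(x1)*h10 + f(x2)*h01 + h*df(x2)*h11

x1, x2 = 0.0, 0.5               # one piece with h = 1/2
h = x2 - x1
worst = max(abs(math.exp(x) - hermite_cubic(x1, x2, math.exp, math.exp, x))
            for x in (x1 + h*k/1000 for k in range(1001)))
print(worst, "<=", math.exp(x2) * h**4 / 384)   # observed error vs the bound
```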
While my Hermite spline does the job (approximating $e^x$ for all real numbers), one might do the same job with Taylor polynomials, for example.
You won't be surprised to learn that the Taylor series is the key to analyzing some of the most obvious choices of difference scheme.
Our objective is "adaptive quadrature", section 7.5. If you understand how to do that, I'll be happy! It's way cool.
The good news is that some methods will already be familiar; the error analysis part will probably be new, however.
The authors define the truncation error as the difference of the approximation and the true value: for the approximation
$$ f'(a) \approx \frac{f(a+h)-f(a)}{h}, $$
the truncation error is
$$ \tau(h) = \frac{f(a+h)-f(a)}{h} - f'(a), $$
and the power of $h$ in this error will be called the order of accuracy.
The text illustrates this in Figure 7.1, p. 256, as the slope of a secant line approximation to the slope of the tangent line (the derivative at $x=a$).
We explore the consequences using the Taylor series expansion
$$ f(a+h) = f(a) + h\,f'(a) + \frac{h^2}{2}\,f''(\xi) $$
for some $\xi$ between $a$ and $a+h$, which gives
$$ \frac{f(a+h)-f(a)}{h} - f'(a) = \frac{h}{2}\,f''(\xi). $$
What is the order of accuracy of this method?
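A quick numerical check (test function and point are my own choices): the ratio error$/h$ settles down to a constant, which is first-order behaviour.

```python
# Forward-difference error for sin at a = 1: err/h -> |f''(a)|/2, i.e. O(h).
import math

f, dfdx, a = math.sin, math.cos, 1.0
for h in (0.1, 0.01, 0.001, 0.0001):
    err = abs((f(a + h) - f(a)) / h - dfdx(a))
    print(h, err, err / h)   # err/h approaches sin(1)/2, about 0.4207
```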
We call this a forward difference, because we think of $h$ as a positive thing. It doesn't have to be, however; we define the backward-difference formula (obviously closely related) as
$$ f'(a) \approx \frac{f(a) - f(a-h)}{h}, $$
which comes out of replacing $h$ by $-h$ in the forward difference:
$$ \frac{f(a+(-h)) - f(a)}{-h} = \frac{f(a) - f(a-h)}{h}, $$
which we can again attack using Taylor series polynomials. What is the order of accuracy of this method?
The centered difference
$$ f'(a) \approx \frac{f(a+h) - f(a-h)}{2h} $$
can also be derived as an average of the two lop-sided schemes! We might then average their error terms to get an error term for it. However, what do you notice about that error term?
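Here is the averaging in action (same test function as above, my own choice):

```python
# Averaging the two lop-sided schemes gives the centered difference,
# and the error suddenly drops by an extra order (a sketch).
import math

f, dfdx, a = math.sin, math.cos, 1.0
for h in (0.1, 0.01, 0.001):
    fwd = (f(a + h) - f(a)) / h           # O(h)
    bwd = (f(a) - f(a - h)) / h           # O(h)
    ctr = (fwd + bwd) / 2                 # = (f(a+h) - f(a-h)) / (2h), O(h^2)
    print(h, abs(fwd - dfdx(a)), abs(bwd - dfdx(a)), abs(ctr - dfdx(a)))
```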
Let's think about computing with error.
It will turn out that the total error (truncation plus round-off) of a scheme like these will look like
$$ E(h) = \frac{M h}{2} + \frac{2\epsilon}{h} $$
for the forward difference, say, where $M>0$ is a bound on the derivative of interest on the interval of interest (second in this case), and $\epsilon>0$ is a bound on the size of a truncation or round-off error in the computed values of $f$.
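You can watch the two competing terms in $E(h)$ fight it out (a sketch; the test function is my own choice):

```python
# The truncation/round-off trade-off for the forward difference (a sketch).
# E(h) = M*h/2 + 2*eps/h is minimized near h = 2*sqrt(eps/M), about 1e-8 here.
import math

f, dfdx, a = math.exp, math.exp, 1.0
for k in range(1, 17):
    h = 10.0 ** (-k)
    err = abs((f(a + h) - f(a)) / h - dfdx(a))
    print(f"h = 1e-{k:02d}   error = {err:.3e}")
# The error falls like M*h/2 until about h = 1e-8, then round-off takes
# over and it grows like 2*eps/h.
```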