The authors define the truncation error as the difference between the approximation and the true value; for the forward-difference approximation of the derivative, that is
$$\tau(h) = \frac{f(a+h) - f(a)}{h} - f'(a),$$
and the power of $h$ in this error will be called the order of accuracy.
The text illustrates this in Figure 7.1, p. 256, as the slope of a secant line approximation to the slope of the tangent line (the derivative at $x=a$).
We explore the consequences using the Taylor series expansion
$$f(a+h) = f(a) + f'(a)\,h + \frac{f''(\xi)}{2}\,h^2 \qquad \text{for some } \xi \in (a, a+h).$$
What is the order of accuracy of this method?
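We can check the order of accuracy numerically. Here is a minimal sketch (the function and evaluation point are my own choices, not from the text) showing how the forward-difference error behaves as $h$ is halved:

```python
import math

def forward_diff(f, a, h):
    """Forward-difference approximation of f'(a)."""
    return (f(a + h) - f(a)) / h

# Error for f(x) = sin(x) at a = 1, where f'(1) = cos(1) exactly.
a = 1.0
for h in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(math.sin, a, h) - math.cos(a))
    print(f"h = {h:<6} error = {err:.2e}")
# Halving h roughly halves the error: first-order accuracy, O(h).
```

The error ratio of about 2 per halving of $h$ is the numerical signature of a first-order method.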
We call this a forward difference, because we think of $h$ as a positive thing. It doesn't have to be, however; replacing $h$ by $-h$ gives the (obviously closely related) backward-difference formula
$$f'(a) \approx \frac{f(a) - f(a-h)}{h},$$
which comes out of the expansion
$$f(a-h) = f(a) - f'(a)\,h + \frac{f''(\xi)}{2}\,h^2,$$
which we can again attack using Taylor series polynomials. What is the order of accuracy of this method?
We can also derive the central-difference formula
$$f'(a) \approx \frac{f(a+h) - f(a-h)}{2h}$$
as an average of the two lop-sided schemes! We might then average their error terms to get an error term for it. However, what do you notice about that error term?
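The punchline is that the first-order error terms of the forward and backward schemes cancel in the average. A quick numerical sketch (again with my own choice of test function) makes the cancellation visible:

```python
import math

def central_diff(f, a, h):
    """Central difference: average of the forward and backward differences."""
    return (f(a + h) - f(a - h)) / (2 * h)

a = 1.0
for h in [0.1, 0.05, 0.025]:
    err = abs(central_diff(math.sin, a, h) - math.cos(a))
    print(f"h = {h:<6} error = {err:.2e}")
# Halving h quarters the error: the O(h) terms cancel, leaving O(h^2).
```

An error ratio of about 4 per halving of $h$ is the signature of a second-order method, even though each lop-sided scheme alone is only first order.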
Let's think about computing with error.
It will turn out that the total error of a scheme like these will look like
$$E(h) = \frac{M h}{2} + \frac{2\epsilon}{h},$$
where $M>0$ is a bound on the derivative of interest on the interval of interest (the second derivative, in this case), and $\epsilon>0$ is a bound on the round-off error in each evaluation of $f$.
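This error model predicts a tension: shrinking $h$ reduces truncation error but amplifies round-off. A sketch (my own test function; the qualitative behavior is what matters) shows the forward-difference error first shrinking, then growing, as $h$ decreases:

```python
import math

def forward_diff(f, a, h):
    return (f(a + h) - f(a)) / h

a = 1.0
for h in [1e-1, 1e-4, 1e-8, 1e-12, 1e-16]:
    err = abs(forward_diff(math.sin, a, h) - math.cos(a))
    print(f"h = {h:.0e}   error = {err:.2e}")
# The error first decreases (the truncation term M*h/2 dominates), then
# grows again as the round-off term ~ 2*eps/h takes over; by h = 1e-16,
# a + h rounds to a itself and the computed difference quotient is 0.
```

Minimizing $E(h) = Mh/2 + 2\epsilon/h$ over $h$ gives an optimal step near $h^* = 2\sqrt{\epsilon/M}$, which for double precision ($\epsilon \approx 10^{-16}$) lands around $h \approx 10^{-8}$, consistent with the table above.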
We want to evaluate
$$\int_a^b f(x)\,dx.$$
You encountered the following rules back in calculus class, as you may recall:

- trapezoidal rule: $\displaystyle \int_a^b f(x)\,dx \approx \frac{b-a}{2}\bigl[f(a) + f(b)\bigr]$
- midpoint rule: $\displaystyle \int_a^b f(x)\,dx \approx (b-a)\,f\!\left(\frac{a+b}{2}\right)$
- Simpson's rule: $\displaystyle \int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right]$
We actually usually start with the left- and right-rectangle rules, and then consider the trapezoidal rule as their average. Simpson's rule can be considered a weighted average of the trapezoidal and midpoint rules.
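These averaging relationships are exact algebraic identities, not approximations, so they can be checked directly. A minimal sketch (the integrand is my own choice):

```python
import math

def left(f, a, b):      return (b - a) * f(a)
def right(f, a, b):     return (b - a) * f(b)
def midpoint(f, a, b):  return (b - a) * f((a + b) / 2)
def trapezoid(f, a, b): return (b - a) * (f(a) + f(b)) / 2
def simpson(f, a, b):   return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

f, a, b = math.exp, 0.0, 1.0
# Trapezoidal is the plain average of the two rectangle rules:
print(trapezoid(f, a, b), (left(f, a, b) + right(f, a, b)) / 2)
# Simpson's is the weighted average (2*midpoint + trapezoid)/3:
print(simpson(f, a, b), (2 * midpoint(f, a, b) + trapezoid(f, a, b)) / 3)
```

Note the weights in Simpson's rule: the midpoint rule gets weight $2/3$ and the trapezoidal rule $1/3$, reflecting the "twice as good" error relationship discussed later in the section.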
As the authors point out, the first two methods (trapezoidal and midpoint) can be considered integrals of linear functions.
The trapezoidal method is the integral of a linear interpolator of two endpoints, and midpoint is the integral of the constant function passing through -- well, the midpoint of the interval!
This illustrates one important difference in how the two methods are applied: the trapezoidal rule requires values of $f$ at the endpoints of the interval, while the midpoint rule does not.
These methods are both examples of what are called "Newton-Cotes methods" -- trapezoidal is a "closed" method, and midpoint is "open".
In the end we paste these methods together on a "partition" of the interval of integration -- that is, multiple sub-intervals, on each one of which we apply the simple rule. This pasted up version is called a "composite" rule.
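A composite rule on an equally spaced partition is a short loop. Here is a sketch of the composite trapezoidal rule (function names and the test integral are my own choices):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2          # endpoints counted once
    for i in range(1, n):
        total += f(a + i * h)          # interior nodes counted twice, hence weight 1
    return h * total

# The integral of sin on [0, pi] is exactly 2.
for n in [4, 8, 16]:
    approx = composite_trapezoid(math.sin, 0.0, math.pi, n)
    print(f"n = {n:<3} approx = {approx:.6f}   error = {abs(approx - 2):.2e}")
# Doubling n cuts the error by about 4: the composite rule is O(h^2).
```

Note how the simple rules share endpoints when pasted together, so each interior node is evaluated only once.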
We're going to start by assuming a partition of the interval $[a,b]$ that is equally spaced. But in the end, we should let the behavior of $f$ help us determine the partition of the interval:
*(Figure: a sequence of successively finer partitions of the interval.)* The maximum subinterval width $\|P\|$ goes to zero: the partition gets finer and finer as the number of subintervals $n$ goes to $\infty$. The approximations get better and better, of course, as $n \longrightarrow \infty$. Notice that the larger subintervals occur where the function isn't changing as rapidly. Such a scheme would be called "adaptive", adapting itself to the function being integrated. That's what we're going to shoot for.
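One common way to realize that adaptive goal is recursive bisection: estimate the integral on an interval, compare against the two half-interval estimates, and subdivide only where they disagree. This sketch uses Simpson's rule and the standard factor-of-15 error estimate; it is an illustration of the idea, not the text's own algorithm:

```python
import math

def simpson(f, a, b):
    return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

def adaptive_simpson(f, a, b, tol):
    """Recursively bisect until the half-interval Simpson estimates
    agree with the whole-interval estimate to within tol."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    if abs(halves - whole) < 15 * tol:         # standard Simpson error estimate
        return halves + (halves - whole) / 15  # cheap Richardson correction
    return (adaptive_simpson(f, a, m, tol / 2) +
            adaptive_simpson(f, m, b, tol / 2))

print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))  # close to 2
```

Intervals where $f$ is nearly parabolic are accepted immediately, while rapidly varying stretches get subdivided further, which is exactly the uneven-partition behavior pictured above.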
Of particular interest in this section is the authors' very beautiful demonstration that Simpson's rule is the weighted average of the trapezoidal and midpoint methods.
In particular, the midpoint method is twice as good as the trapezoidal method (its error tends to be about half the size), and the errors tend to be of opposite sign. This analysis is based on an approximating parabola (see Figure 7.10, p. 271).
In that figure, the bottom line is the tangent line at the midpoint (as suggested in the paragraph above). For a parabola, the slope of that tangent line is actually equal to the slope of the secant line through the endpoints. So the difference between those two slopes indicates the extent to which a parabolic model for $f$ fails on that section.
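The "twice as good, opposite sign" claim is easy to observe numerically for a function whose second derivative keeps one sign. A sketch (my own test integrand, with known exact integral $e - 1$):

```python
import math

def midpoint(f, a, b):  return (b - a) * f((a + b) / 2)
def trapezoid(f, a, b): return (b - a) * (f(a) + f(b)) / 2

exact = math.e - 1  # integral of exp on [0, 1]
e_mid = midpoint(math.exp, 0.0, 1.0) - exact
e_trap = trapezoid(math.exp, 0.0, 1.0) - exact
print(f"midpoint error  = {e_mid:+.5f}")
print(f"trapezoid error = {e_trap:+.5f}")
# Opposite signs, and |e_trap| is roughly 2*|e_mid| -- which is why the
# weighted average (2*midpoint + trapezoid)/3, i.e. Simpson's rule,
# cancels the leading error terms.
```

For a convex integrand like this one, the trapezoidal rule overshoots (its chord lies above the curve) while the midpoint rule undershoots, which is the geometric content of Figure 7.10.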