Today:
The good news is that Bézier curves are "native" to Mathematica. I want you to understand them, however.
If you read your short reading assignment (this description of Bézier splines), then you'll know the role of each of these points, and the form of the curve:
Notice how the cubic functions of $t$ "get out of the way" at the appropriate moments.
You won't be surprised to learn that the Taylor Series is the key to analyzing some of the most obvious choices.
Our objective is "adaptive quadrature", section 7.5. If you understand how to do that, I'll be happy! It's very cool.
The good news is that some of this will already be familiar; the analysis part will probably be new, however.
The authors define the truncation error as the difference of the approximation and the true value, and the power of $h$ in this error will be called the order of accuracy.
The text illustrates this in Figure 7.1, p. 256, as the slope of a secant line approximation to the slope of the tangent line (the derivative at $x$).
We explore the consequences using the Taylor series expansion
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi),$$
which gives
$$\frac{f(x+h) - f(x)}{h} = f'(x) + \frac{h}{2} f''(\xi).$$
What is the order of accuracy of this method?
We call this a forward difference, because we think of $h$ as a positive step. It doesn't have to be, however; we define the backward-difference formula (obviously closely related) as
$$f'(x) \approx \frac{f(x) - f(x-h)}{h},$$
which comes out of
$$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(\xi),$$
which we can again attack using Taylor series polynomials. What is the order of accuracy of this method?
We can also derive the centered difference as an average of the two lop-sided schemes! We might then average their error terms to get its error term.
Question: what do you notice about that error term so derived?
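A quick numerical check (a Python sketch of my own; the text works in Mathematica): the forward difference converges like $h$, while the centered difference, the average of the two lop-sided schemes, converges like $h^2$.

```python
import math

# Forward difference: order of accuracy 1.
def forward(f, x, h):
    return (f(x + h) - f(x)) / h

# Backward difference: also order 1.
def backward(f, x, h):
    return (f(x) - f(x - h)) / h

# Centered difference as the average of the two lop-sided schemes: order 2.
def centered(f, x, h):
    return (forward(f, x, h) + backward(f, x, h)) / 2

x = 1.0
exact = math.cos(x)   # d/dx sin(x) at x = 1
for h in (0.1, 0.01, 0.001):
    print(h, abs(forward(math.sin, x, h) - exact),
             abs(centered(math.sin, x, h) - exact))
```

Shrinking $h$ by a factor of 10 shrinks the forward-difference error by about 10, but the centered-difference error by about 100.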
We might focus on these three Taylor expansions:
$$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(\xi_1)$$
$$f(x) = f(x)$$
$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(\xi_2)$$
Now: we want a linear combination of the points on the left-hand side that "equals" (approximates) $f'(x)$.
Throw away (for the moment) the error terms. Then the linear combination becomes
$$a\, f(x-h) + b\, f(x) + c\, f(x+h) \approx f'(x).$$
Matching the coefficients of $f(x)$, $f'(x)$, and $f''(x)$ creates a linear system of three equations in the three unknowns $a$, $b$, and $c$.
Then, once we know $a$, $b$, and $c$, we can go back and determine the error from the linear system (using the error terms we threw away).
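As a concrete check, here is a sketch of my own (in Python, with my variable names $a$, $b$, $c$ for the three coefficients): matching the coefficients of $f(x)$, $f'(x)$, and $f''(x)$, with $h = 1$ for simplicity, and solving the resulting 3x3 system.

```python
from fractions import Fraction

def solve3(A, rhs):
    """Plain Gaussian elimination with partial pivoting on a 3x3 system."""
    A = [row[:] + [r] for row, r in zip(A, rhs)]   # augmented matrix
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

F = Fraction
# With h = 1:  a + b + c = 0   (no f(x) term)
#             -a     + c = 1   (coefficient of f'(x) is 1)
#              a     + c = 0   (no f''(x) term)
A = [[F(1), F(1), F(1)],
     [F(-1), F(0), F(1)],
     [F(1), F(0), F(1)]]
a, b, c = solve3(A, [F(0), F(1), F(0)])
print(a, b, c)   # -1/2 0 1/2: the centered difference (f(x+h)-f(x-h))/(2h)
```

The system hands us back the centered-difference weights.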
Let's think about computing with error.
It will turn out that the total (truncation plus round-off) error of a scheme like these (in this case for the forward difference) will look like
$$E(h) \approx \frac{M h}{2} + \frac{2\varepsilon}{h},$$
where $M$ is a bound on the derivative of interest on the interval of interest (the second derivative in this case), and $\varepsilon$ is a bound on the size of a truncation or round-off error.
We can illustrate this by adding artificial error.
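A minimal sketch (my own, in Python) of that illustration: perturb $\sin$ by artificial noise of size $\varepsilon$ and watch the forward-difference error stop improving, and then worsen, as $h$ shrinks, just as the trade-off $Mh/2 + 2\varepsilon/h$ predicts.

```python
import math, random

random.seed(0)
eps = 1e-8   # size of the artificial "round-off" error

def noisy_sin(x):
    # sin(x) plus artificial noise of magnitude at most eps
    return math.sin(x) + random.uniform(-eps, eps)

x, exact = 1.0, math.cos(1.0)
for k in range(1, 9):
    h = 10.0 ** (-k)
    approx = (noisy_sin(x + h) - noisy_sin(x)) / h
    print(f"h = 1e-{k}:  error = {abs(approx - exact):.2e}")
```

The error falls with $h$ at first (truncation dominates), then rises again once the $2\varepsilon/h$ noise term takes over, with the best $h$ near $2\sqrt{\varepsilon/M}$.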
Question: How would things change if we were using the centered difference?
We want to evaluate the definite integral $\int_a^b f(x)\,dx$.
You encountered the following rules back in calculus class, as you may recall:
We actually usually start with left- and right-rectangle rules, and then consider the trapezoidal rule as the average of these. Simpson's can be considered an average of the trapezoidal and the midpoint rules.
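A sketch in Python (my function names, not the text's) of the simple rules on a single interval, showing these averaging relationships:

```python
# Simple quadrature rules on one interval [a, b].
def left(f, a, b):     return (b - a) * f(a)
def right(f, a, b):    return (b - a) * f(b)
def midpoint(f, a, b): return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    # the average of the left- and right-rectangle rules
    return (left(f, a, b) + right(f, a, b)) / 2

def simpson(f, a, b):
    # the weighted average (2*midpoint + trapezoid)/3
    return (2 * midpoint(f, a, b) + trapezoid(f, a, b)) / 3

# Simpson's rule is exact for cubics: integral of x^3 on [0,1] is 1/4.
print(simpson(lambda x: x**3, 0.0, 1.0))   # 0.25
```

Note that Simpson's weights the midpoint rule twice as heavily as the trapezoidal rule, for reasons taken up below.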
As the authors point out, the first two methods (trapezoidal and midpoint) can be considered integrals of linear functions.
The trapezoidal method is the integral of a linear interpolator of the two endpoints, and midpoint is the integral of the constant function passing through the value of $f$ at -- well, the midpoint of the interval!
This illustrates one important difference in the application of these two methods: the midpoint rule never evaluates $f$ at the endpoints, which matters when $f$ is undefined or badly behaved there.
These methods are both examples of what are called "Newton-Cotes methods" -- trapezoidal is a "closed" method, and midpoint is "open".
In the end we paste these methods together on a "partition" of the interval of integration -- that is, multiple sub-intervals, on each one of which we apply the simple rule. This pasted up version is called a "composite" rule.
We're going to start by assuming a partition of the interval that is equally spaced. But in the end, we should let the behavior of $f$ help us to determine the partition of the interval:
The maximum subinterval width ($\max_i \Delta x_i$) goes to zero: the partition gets finer and finer as the number of subintervals $n$ goes to $\infty$, and the approximations get better and better, of course. Notice that the larger subintervals occur where the function isn't changing as rapidly. This scheme would be called "adaptive", adapting itself to the function being integrated. That's what we're going to shoot for.
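As a preview of where we're headed, here is a minimal adaptive sketch (my own, in Python, built on the trapezoidal rule; section 7.5 develops the real thing): refine a subinterval only where the simple rule disagrees with its own refinement, so small subintervals appear only where $f$ changes rapidly.

```python
import math

def trap(f, a, b):
    # simple trapezoidal rule on [a, b]
    return (b - a) * (f(a) + f(b)) / 2

def adaptive_trap(f, a, b, tol):
    m = (a + b) / 2
    whole = trap(f, a, b)
    halves = trap(f, a, m) + trap(f, m, b)
    # the two-level disagreement estimates (about 3x) the error in `halves`
    if abs(whole - halves) < 3 * tol:
        return halves
    # otherwise split, spending half the tolerance on each side
    return adaptive_trap(f, a, m, tol / 2) + adaptive_trap(f, m, b, tol / 2)

print(adaptive_trap(math.sin, 0.0, math.pi, 1e-6))   # close to 2.0
```

The recursion bottoms out quickly where $f$ is flat and digs deeper where it bends.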
Of particular interest in this section is the authors' very beautiful demonstration that Simpson's rule is the weighted average of the trapezoidal and midpoint methods.
In particular, that the midpoint method is twice as good as the trapezoidal method, and that the errors tend to be of opposite sign. This analysis is based on an approximating parabola (see Figure 7.10, p. 271).
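We can check the claim numerically; a small sketch (mine, not the authors') comparing the two errors for a smooth integrand:

```python
import math

def midpoint(f, a, b):  return (b - a) * f((a + b) / 2)
def trapezoid(f, a, b): return (b - a) * (f(a) + f(b)) / 2

a, b = 0.0, 0.5
exact = math.exp(b) - math.exp(a)          # integral of e^x on [a, b]
err_mid  = midpoint(math.exp, a, b) - exact
err_trap = trapezoid(math.exp, a, b) - exact
print(err_trap / err_mid)                  # close to -2
```

The ratio near $-2$ is exactly why the weighted average $(2M + T)/3$ (Simpson's rule) nearly cancels both errors.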
In that figure, the bottom line is the tangent line at the midpoint (as suggested in the paragraph above). For a parabola, the slope of that tangent line is actually equal to the slope of the secant line there. So the difference between those two slopes indicates the extent to which a parabolic model for $f$ fails on that section.