Today:
I've given you a short reading assignment (this description of Bezier splines); I hope that you'll check it out before I go over the project next time.
This was used to give meaning to a novel divided difference, $f[x_i, x_i]$, which is actually undefined if we think of it as a ratio of differences -- but this formula allows us to think of it as the appropriate derivative evaluated at a point $\xi$ between the nodes, so we define it so:
\[ f[x_i, x_i] \equiv f'(x_i), \]
since $\xi$ must lie between the minimum and maximum of the (here coincident) nodes.
This is a locally determined spline -- we simply stitch each cubic together with its neighbors, and we have a smooth curve running through the function's values.
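Here is a small Python sketch of one cubic segment of such a spline (not from the text; the helper name hermite_segment is made up), built from the repeated-node divided-difference table. A full spline just repeats this on each subinterval:

import math

def hermite_segment(x0, x1, f0, f1, df0, df1):
    """Cubic Hermite interpolant on [x0, x1], Newton form with nodes x0, x0, x1, x1."""
    d00 = df0                      # f[x0, x0] = f'(x0), the repeated-node definition
    d01 = (f1 - f0) / (x1 - x0)    # f[x0, x1]
    d11 = df1                      # f[x1, x1] = f'(x1)
    c2 = (d01 - d00) / (x1 - x0)                      # f[x0, x0, x1]
    c3 = ((d11 - d01) / (x1 - x0) - c2) / (x1 - x0)   # f[x0, x0, x1, x1]
    def p(x):
        return f0 + d00*(x - x0) + c2*(x - x0)**2 + c3*(x - x0)**2*(x - x1)
    return p

# One segment of a Hermite spline for sin on [0, pi/4]:
p = hermite_segment(0.0, math.pi/4, math.sin(0.0), math.sin(math.pi/4),
                    math.cos(0.0), math.cos(math.pi/4))
print(p(math.pi/8), math.sin(math.pi/8))   # agree to about three decimal places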
What error are we making when we use Hermite interpolation? Proposition 5.6 gives us the answer:
for the Hermite cubic $H$ on $[x_0, x_1]$, the error is a derivative times some stuff:
\[ f(x) - H(x) = \frac{f^{(4)}(\xi)}{4!}\,(x - x_0)^2 (x - x_1)^2, \]
where $\xi$ is some unknown number between the minimum and maximum of $x_0$, $x_1$, and $x$.
What, then, is a bound on the error that we're making when we use my Hermite spline for sine, given 5 equal subdivisions of the interval?
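Here is a worked bound (assuming, for illustration, that the interval is $[0, \pi]$, which isn't pinned down here, so that $h = \pi/5$). On each subinterval of width $h$ the factor $(x - x_i)^2 (x - x_{i+1})^2$ is at most $(h/2)^4 = h^4/16$, and every derivative of sine is bounded by 1, so on the whole interval
\[ |f(x) - s(x)| \le \frac{1}{4!} \cdot \frac{h^4}{16} = \frac{h^4}{384} = \frac{(\pi/5)^4}{384} \approx 4.1 \times 10^{-4}. \]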
You won't be surprised to learn that the Taylor Series is the key to analyzing some of the most obvious choices.
Our objective is "adaptive quadrature", section 7.5. If you understand how to do that, I'll be happy! It's very cool.
The good news is that some of this will already be familiar; the analysis part will probably be new, however.
The authors define the truncation error as the difference of the approximation and the true value; for the forward-difference approximation of the derivative this is
\[ \frac{f(x+h) - f(x)}{h} - f'(x), \]
and the power of $h$ in this error will be called the order of accuracy.
The text illustrates this in Figure 7.1, p. 256, as the slope of a secant line approximating the slope of the tangent line (the derivative at $x$).
We explore the consequences using the Taylor series expansion
\[ f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi), \qquad \xi \in (x, x+h). \]
What is the order of accuracy of this method?
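A quick numerical check (a Python sketch, not from the text), showing the forward-difference error shrinking like $h$:

import math

f, df = math.sin, math.cos   # test function and its exact derivative
x = 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    forward = (f(x + h) - f(x)) / h     # forward-difference approximation to f'(x)
    err = abs(forward - df(x))          # truncation error
    print(f"h = {h:<7} error = {err:.2e}   error/h = {err/h:.3f}")
# The ratio error/h settles near |f''(x)|/2 = sin(1)/2 ≈ 0.42: the error is O(h),
# i.e. the scheme is first-order accurate.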
We call this a forward difference, because we think of $h$ as a positive thing. It doesn't have to be, however; we define the backwards-difference formula (obviously closely related) as
\[ \frac{f(x) - f(x-h)}{h}, \]
which comes out of replacing $h$ by $-h$ in the forward formula, and which we can again attack using Taylor series polynomials. What is the order of accuracy of this method?
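Carrying out that attack (a sketch, parallel to the forward case): the expansion $f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(\xi)$ gives
\[ \frac{f(x) - f(x-h)}{h} - f'(x) = -\frac{h}{2} f''(\xi), \qquad \xi \in (x-h, x), \]
so the backwards difference is also first-order accurate.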
We can also derive the centered-difference formula as an average of the two lop-sided schemes! We might then average their error terms to get an error term for it. However, what do you notice about that error term?
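A side-by-side check in Python (again a sketch, not from the text): halving $h$ roughly halves the forward-difference error but quarters the centered-difference error, because the $O(h)$ terms of the two lop-sided schemes cancel in the average.

import math

f, df = math.sin, math.cos
x = 1.0
for h in [0.1, 0.05, 0.025]:
    forward  = (f(x + h) - f(x)) / h            # one-sided scheme: O(h)
    centered = (f(x + h) - f(x - h)) / (2 * h)  # average of forward and backward: O(h^2)
    print(f"h = {h:<6} forward err = {abs(forward - df(x)):.2e}   "
          f"centered err = {abs(centered - df(x)):.2e}")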
Let's think about computing with error.
It will turn out that the total error (truncation plus round-off) of a scheme like these will look like
\[ E(h) \approx \frac{Mh}{2} + \frac{2\epsilon}{h}, \]
where $M$ is a bound on the derivative of interest on the interval of interest (the second derivative, in this case), and $\epsilon$ is a bound on the size of the round-off errors in the computed values of $f$.
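A short follow-up computation, using the bound above: minimizing $E(h)$ over $h$,
\[ E'(h) = \frac{M}{2} - \frac{2\epsilon}{h^2} = 0 \quad\Longrightarrow\quad h^* = 2\sqrt{\epsilon/M}, \qquad E(h^*) = 2\sqrt{M\epsilon}, \]
so there is a best step size: shrinking $h$ below $h^*$ lets the round-off term dominate, and the computed derivative gets worse, not better.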