Today:
I'll give you a new assignment on Thursday, but for the moment I'll leave you alone so that you can work on the programming assignment.
Muller's method is discussed at the beginning of this section. It's an example of an interpolation problem: fitting three points exactly. When you think of interpolation, think of fitting something exactly -- nailing it.
Here's an example of another important problem, one where approximation and estimation are perhaps more important, and which I am working on right now:
These are ice thicknesses taken by multiple submarine probes of the ice near the North Pole:
I'm attempting to reproduce this result. The probes of the ice are only approximations (they contain errors themselves), so we don't worry about fitting them exactly: we're more interested in finding a simple model to estimate ice thickness over the entire ice field.
The two models the authors have drawn are cubic polynomial fits (to part and all of the data, orange and green respectively).
We can extrapolate from the data by predicting future ice thickness using these models. As it is, we're approximating the average ice thickness for each year. We are not intentionally interpolating any values.
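To make the idea concrete (this is my own toy sketch in Python, not the authors' data or code; the year/thickness numbers are made up for illustration), a cubic least-squares fit and an extrapolation might look like this:

import numpy as np

# Hypothetical yearly average thicknesses (meters) -- made-up numbers, for illustration only.
years = np.array([1958, 1970, 1976, 1987, 1993, 1997, 2000], dtype=float)
thickness = np.array([3.1, 2.9, 2.8, 2.5, 2.2, 2.0, 1.9])

# Work with years since the first measurement to keep the fit well conditioned.
t = years - years[0]

# Least-squares cubic: we approximate the trend; we make no attempt to interpolate the data.
cubic = np.poly1d(np.polyfit(t, thickness, 3))

# Extrapolation: predict thickness in 2010 (always risky with polynomial models).
print(cubic(2010 - years[0]))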
While several classes of interpolation functions -- interpolants -- are described (e.g. polynomials, sines and cosines (Fourier series), exponentials, rational functions), our authors end by saying that we'll focus on polynomials, and we start with Taylor series polynomials.
In what sense is a Taylor series polynomial an interpolant? Ordinarily we've been thinking of it as an approximation to a function.
On the other hand, we are often interpolating data -- just not several points, as in the case of Muller, but rather a point, a slope, and higher derivatives at a single point.
So we might interpolate the point (x_0, f(x_0)), with slope f'(x_0), and second derivative f''(x_0), by the quadratic function p_2(x) = f(x_0) + f'(x_0)(x - x_0) + (f''(x_0)/2)(x - x_0)^2.
This function fits all that data exactly (and passes through exactly one known point). This is the Taylor series polynomial of degree 2 subject to those constraints.
A tangent line is the Taylor series polynomial of degree 1, which fits a point and gets the slope right.
You have to be careful that you're not deceived by the form of the expression to the right. It's not saying that every function is a polynomial -- all the interesting stuff is buried in that Greek letter "xi".
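The expression itself isn't reproduced here, but it is presumably the standard Taylor formula with Lagrange remainder,

\[
f(x) = f(x_0) + f'(x_0)(x - x_0) + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1},
\]

for some \xi between x_0 and x. The polynomial part is what we compute; the final remainder term, with its unknown \xi, is where all of a function's non-polynomial behavior hides.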
An important observation is made about the computation of these polynomials: the introduction of Horner's rule (or Horner's method).
This is generally how all polynomials should be evaluated.
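As a quick sketch (mine, not the text's code), here is Horner's rule in Python: evaluating p(x) = a_n x^n + ... + a_1 x + a_0 by nested multiplication costs n multiplications and n additions, rather than forming each power of x separately.

def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's rule.
    coeffs = [a_n, a_{n-1}, ..., a_1, a_0] (highest degree first)."""
    result = 0.0
    for a in coeffs:
        result = result * x + a   # nested multiplication
    return result

# Example: p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))   # 5.0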
Muller's method is a generalization of the Secant method (which is an approximation to Newton's method).
What if we go straight from Newton's method to a quadratic method?
Given a function f, suppose we know both f' and f''.
Given an initial guess x_0 to the true root r, what quadratic would we use to seek the next approximation x_1?
We might start with the quadratic in its "classic" form, q(x) = a x^2 + b x + c, and see what conditions a, b, and c must satisfy. We'll re-derive the Taylor series polynomial of degree 2.
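Here's a minimal sketch (my illustration, not the text's algorithm) of the resulting iteration in Python: matching the quadratic to f, f', and f'' at the current guess reproduces the degree-2 Taylor polynomial, and solving that quadratic for the step h gives the next approximation. Choosing the root of smaller magnitude (to stay near the current guess) and the fallback to an ordinary Newton step are my assumptions.

import math

def quadratic_newton_step(f, df, d2f, x):
    """One step of a Newton-like method built on the degree-2 Taylor polynomial:
    f(x) + f'(x) h + (f''(x)/2) h^2 = 0, solved for the step h."""
    a, b, c = d2f(x) / 2.0, df(x), f(x)
    if a == 0:                      # quadratic term vanishes: ordinary Newton step
        return x - c / b
    disc = b * b - 4 * a * c
    if disc < 0:                    # quadratic model has no real root: fall back to Newton
        return x - c / b
    # Take the root of smaller magnitude, using the numerically stable form.
    denom = b + math.copysign(math.sqrt(disc), b)
    return x - 2 * c / denom

# Example: find sqrt(2) as a root of f(x) = x^2 - 2.
x = 1.0
for _ in range(4):
    x = quadratic_newton_step(lambda t: t*t - 2, lambda t: 2*t, lambda t: 2.0, x)
print(x)   # approaches 1.41421356...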
A hint for part b: logs of products
Even though the order of convergence is lower, the routine may run in about the same time as Newton's method (it depends on the problem, of course).
Many of you may have used this notion to find the coefficients of your Muller quadratic. The problem is one of solving three equations in three unknowns; and since the equations are linear, we have a linear system to solve. More generally, fitting a polynomial of degree n through n+1 points leads to a linear system of n+1 equations in the n+1 unknown coefficients.
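For instance, here is a sketch of one way to set that system up (not necessarily how you did it), assuming numpy: requiring y = a x^2 + b x + c to pass through three points gives three linear equations in a, b, and c, which we hand to a linear solver.

import numpy as np

# Three points that the quadratic y = a*x^2 + b*x + c must fit exactly.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 0.0, 3.0])

# Each point gives one linear equation a*x^2 + b*x + c = y.
A = np.column_stack([xs**2, xs, np.ones_like(xs)])
a, b, c = np.linalg.solve(A, ys)
print(a, b, c)   # the quadratic 2x^2 - 3x + 1 fits these three points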
It's related to the concepts of divided differences (section 5.6.2, p. 212) and Horner's rule (just before section 5.6.2).
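For comparison, a small sketch (mine, not the book's code) of the divided-difference approach: the Newton-form coefficients come from a divided-difference table, and the resulting polynomial is evaluated by a Horner-like nested scheme.

def divided_differences(xs, ys):
    """Return Newton divided-difference coefficients f[x0], f[x0,x1], ...
    for the interpolating polynomial through the points (xs[i], ys[i])."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton-form interpolant at x by a Horner-like nested scheme."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

# Same three points as above: this recovers the quadratic 2x^2 - 3x + 1.
xs, ys = [0.0, 1.0, 2.0], [1.0, 0.0, 3.0]
c = divided_differences(xs, ys)
print(newton_eval(c, xs, 1.0))   # 0.0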
We'll examine my solution to the homework in Mathematica.