Perhaps I'm so enthralled with Muller's method because I recall a line from my first numerical analysis textbook (Conte and de Boor), in which they had this to say about Muller's:
"A method of recent vintage, expounded by Muller, has been used on computers with remarkable success. This method may be used to find any prescribed number of zeros, real or complex, of an arbitrary function. The method is iterative, converges almost quadratically in the vicinity of a root, does not require the evaluation of the derivative of the function, and obtains both real and complex roots even when these roots are not simple."
Now the third edition of C&deB is from 1980, but the first edition was written in 1965 -- before we landed on the moon -- so by "recent vintage" they presumably meant something around that time. In fact, Muller wrote this up in 1956! Perhaps using "recent vintage" in a textbook is a mistake.... :)
Question: How would you write the Lagrange interpolator to two points?
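One possible answer, as a quick Python sketch (the function name `lagrange_two_point` is mine, not from the notes): the two-point Lagrange form is just a weighted sum of the two basis lines.

```python
def lagrange_two_point(x0, y0, x1, y1):
    """Lagrange interpolating polynomial through (x0, y0) and (x1, y1).

    p(x) = y0 * (x - x1)/(x0 - x1) + y1 * (x - x0)/(x1 - x0)
    Each basis factor is 1 at its own node and 0 at the other.
    """
    def p(x):
        return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)
    return p

# The line through (1, 2) and (3, 6):
line = lagrange_two_point(1.0, 2.0, 3.0, 6.0)
print(line(1.0))  # 2.0 -- reproduces the first data point
print(line(2.0))  # 4.0 -- halfway along the line
```

Of course, with two points the interpolant is just the secant line through them.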
In what sense is a Taylor series polynomial an interpolant? Ordinarily we've been thinking of it as an approximation to a function.
On the other hand, we are often interpolating data (though not just points, as in the case of Muller, but rather a point together with slopes and higher derivatives, all at a single location).
So we might interpolate the point (0,0), with slope 0 and second derivative 2, by the quadratic function $f(x)=x^2$.
This function fits all that data exactly (and passes through exactly one known point). This is the Taylor series polynomial of degree 2 subject to those constraints at $x=0$.
A tangent line is the Taylor series polynomial of degree 1, which fits a point and gets the slope right.
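To make this concrete, here is a small Python sketch (the helper name `taylor2` is mine) that builds the degree-2 Taylor polynomial from a value, a slope, and a second derivative prescribed at $x=0$:

```python
import math

def taylor2(f0, df0, d2f0):
    """Degree-2 Taylor polynomial about x = 0:
    p(x) = f(0) + f'(0) x + f''(0)/2! x^2."""
    return lambda x: f0 + df0 * x + (d2f0 / 2.0) * x ** 2

# The example from the notes: value 0, slope 0, second derivative 2 gives x^2.
p = taylor2(0.0, 0.0, 2.0)
print(p(3.0))  # 9.0

# Fitting cos at 0 (value 1, slope 0, second derivative -1) gives 1 - x^2/2,
# which is close to cos(0.1) ~ 0.995004 near the expansion point.
q = taylor2(math.cos(0.0), -math.sin(0.0), -math.cos(0.0))
print(q(0.1))  # ~0.995
```

Dropping the quadratic term recovers the tangent line, the degree-1 case mentioned above.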
You have to be careful not to be deceived by the form of the expression to the right: it is not saying that every function is a polynomial. All the interesting stuff is buried in that Greek letter $\xi$.
An important observation is made about the computation of these polynomials: the introduction of Horner's rule (or method).
This is generally how all polynomials should be evaluated.
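For instance, a Horner evaluation in Python (a sketch, not the notes' own code) -- the nested form avoids computing explicit powers of $x$ and uses only $n$ multiplications and $n$ additions for a degree-$n$ polynomial:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs are listed highest degree first: a_n, a_{n-1}, ..., a_0,
    so the polynomial is evaluated as (((a_n x + a_{n-1}) x + ...) x + a_0).
    """
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:  54 - 54 + 6 - 1 = 5
print(horner([2, -6, 2, -1], 3.0))  # 5.0
```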
A hint for part b: logs of products
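If I'm reading the hint right, the relevant identity is that the log of a product is the sum of the logs:

$$\log \prod_{i=1}^{n} a_i \;=\; \sum_{i=1}^{n} \log a_i,$$

which can turn a product prone to overflow or underflow into a numerically tamer sum.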
Even though the order of convergence is lower, the routine may run in about the same time as Newton's (it depends on the problem, of course).
Notice the increasing powers of $x$. Clearly there's a risk that the coefficients will be of vastly different magnitudes, which could cause severe roundoff errors. This is one of the problems with this formulation.
Many of you may have used this notion to find the coefficients of your Muller quadratic. The problem is one of solving three equations in three unknowns; and since the equations are linear, we have a linear system to solve. More generally:
We'll finish this off by having a look at the relationship between the divided differences and the derivatives of $f$.
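A small Python sketch of that relationship (the in-place divided-difference table below is the standard construction, not code from the notes): since $f[x_0,\dots,x_n] = f^{(n)}(\xi)/n!$ for some $\xi$ between the nodes, the third divided difference of $f(x)=x^3$ is $6/3! = 1$ for any choice of nodes.

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    coeffs = list(ys)
    n = len(xs)
    for level in range(1, n):
        # Update in place from the bottom up so lower levels are still available.
        for i in range(n - 1, level - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - level])
    return coeffs

# For f(x) = x^3, f[x0,x1,x2,x3] = f'''(xi)/3! = 6/6 = 1, whatever the nodes:
xs = [0.0, 0.5, 1.3, 2.0]
ys = [x ** 3 for x in xs]
print(divided_differences(xs, ys)[-1])  # ~1.0, up to roundoff
```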
n = 4
results = Bisection[f, n*Pi + Pi/4.0 - .1, n*Pi + Pi/4.0 + .1]
n = 3
results = Bisection[f, n*Pi + Pi/4.0 - .1, n*Pi + Pi/4.0 + .1]
n = 30
results = Bisection[f, n*Pi + Pi/4.0 - .1, n*Pi + Pi/4.0 + .1]
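The `Bisection` routine itself isn't shown; a Python sketch of such a routine might look like the following. (The `f` in the calls above also isn't shown -- $\tan x - 1$, whose roots are exactly $n\pi + \pi/4$, is one guess, used here purely for illustration.)

```python
import math

def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b] by repeated halving; assumes f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:        # root lies in the left half
            b, fb = m, fm
        else:                  # root lies in the right half
            a, fa = m, fm
    return (a + b) / 2.0

# Mirroring the n = 4 call above, bracketing the root near 4*pi + pi/4:
n = 4
root = bisection(lambda x: math.tan(x) - 1.0,
                 n * math.pi + math.pi / 4.0 - 0.1,
                 n * math.pi + math.pi / 4.0 + 0.1)
print(root)  # ~13.3518, i.e. 4*pi + pi/4
```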