Newton's Method
Summary
So... you may be wondering what the big deal is about Fixed Point Iteration (FPI): I mean, there are all these different iteration functions that you can use, but we don't seem to know how to tell in advance which will be good and which bad: some work, others are disasters. If only we could find a function that we could be sure of....
That danged Newton! He seemed to find all the cool stuff, and he did it once again this time. It wasn't enough that he discovered the law of Universal Gravitation, the theory of colors, integral and differential calculus, the binomial theorem, Newton's law of cooling, the solution to the brachistochrone problem: he had to discover Newton's method too.... Who better, however? What's the chance that the discoverer would have the name Newton too? ;)
Make sure that you're at this year's Sehnert lecture, where Fred Rickey will talk about Newton, the man: October 24th. Math majors and minors should be at the banquet beforehand. Watch for the sign-up sheet in the department.
The method is best approached from the direction of our old friend, the Taylor series expansion, and the root problem (the dual problem of the FP problem): write
\[
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(\xi)}{2}(x - x_0)^2
\]
or
\[
f(x) \approx f(x_0) + f'(x_0)(x - x_0).
\]
When x=p, f(p)=0: so, starting from x_0, perhaps a better estimate of p will be obtained by solving the following equation for x_1 such that f(x_1) is approximately 0:
\[
0 = f(x_0) + f'(x_0)(x_1 - x_0)
\]
or
\[
x_1 = x_0 - \frac{f(x_0)}{f'(x_0)},
\]
and, more generally,
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
\]
(provided f'(x_n) is non-zero). This is our scheme, illustrated in Figure 2.7, and encapsulated in the fixed point function
\[
g(x) = x - \frac{f(x)}{f'(x)}.
\]
Notice that g(p)=p (provided f'(p) is non-zero).
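To see the scheme in action, here is a minimal sketch in Python (the function, derivative, starting point, and tolerance below are illustrative choices, not anything fixed by the method):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:        # close enough to a root: stop
            return x
        fpx = fprime(x)
        if fpx == 0:             # the scheme requires f'(x_n) != 0
            raise ZeroDivisionError("f'(x_n) = 0: Newton step undefined")
        x = x - fx / fpx         # the Newton step
    return x

# Example: the root of f(x) = x^2 - 2 is sqrt(2)
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))  # 1.4142135623730951
```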
The really neat thing about this FPI function g is that g'(p)=0, which means that convergence of the FPI scheme will be quite fast:
\[
g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)f''(x)}{f'(x)^2}.
\]
If f(p)=0, then
\[
g'(p) = \frac{f(p)f''(p)}{f'(p)^2} = 0.
\]
This is the feature that makes Newton's method so very grand and wonderful, and which justifies our interest in the method.
In fact, Newton's method is said to converge quadratically, by contrast with FPI, which is generally said to converge linearly (linearly: each error is roughly a constant multiple of the previous error; quadratically: each error is roughly a constant multiple of the square of the previous error). Expanding f about x_n and evaluating at the root p,
\[
0 = f(p) = f(x_n) + f'(x_n)(p - x_n) + \frac{f''(\xi_n)}{2}(p - x_n)^2
\]
or
\[
p - \left( x_n - \frac{f(x_n)}{f'(x_n)} \right) = -\frac{f''(\xi_n)}{2 f'(x_n)}(p - x_n)^2,
\]
so that
\[
|p - x_{n+1}| \approx \left| \frac{f''(p)}{2 f'(p)} \right| \, |p - x_n|^2
\]
when x_n gets into close proximity to p. When will we be assured of ``contracting''? When
\[
\left| \frac{f''(\xi_n)}{2 f'(x_n)} \right| \, |p - x_n| < 1.
\]
Obviously, as long as \(\left| \frac{f''}{2 f'} \right|\) is bounded, there is a neighborhood of p in which this will happen.
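A quick numerical experiment makes the ``quadratic'' claim vivid (a sketch: the choice f(x) = x^2 - 2, with root p = sqrt(2), and the start x_0 = 1 are merely illustrative):

```python
import math

# Watch the error |p - x_n| as Newton's method hunts sqrt(2).
p = math.sqrt(2)
x = 1.0
for n in range(5):
    print(n, abs(p - x))
    x = x - (x**2 - 2) / (2 * x)   # Newton step for f(x) = x^2 - 2
# The error is roughly squared each time (digits of accuracy double):
# 0 4.1e-01, 1 8.6e-02, 2 2.5e-03, 3 2.1e-06, 4 1.6e-12
```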
Problems in paradise....
What makes it not quite as good as sliced bread? For one thing, every step requires the derivative f'(x_n), which may be expensive to compute or simply unavailable; the step is undefined wherever f'(x_n)=0; and convergence is only guaranteed once x_n lands in that neighborhood of p, so a poor starting guess can send the iteration wandering.
One cheap fix for the derivative problem is to use the discrete approximation to the derivative: that is,
\[
f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}.
\]
Hence, the iteration function becomes
\[
x_{n+1} = x_n - f(x_n)\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})},
\]
which is the heart of the secant method. The secant method requires two approximations to start (x_0 and x_1), and doesn't converge quadratically (although it does converge super-linearly, with exponent \(\frac{1+\sqrt{5}}{2} \approx 1.618\)). Hence, it is still better than bisection in terms of convergence; it may, however, fail to bracket a root (it will lose track of the interval on which a solution exists...).
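Here is a minimal sketch of the secant iteration in Python (the function names and the stopping rule are illustrative assumptions):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton with f'(x_n) replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        # x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Two starting approximations are required:
print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.4142135623730951
```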
The method of False Position (Regula Falsi) is just a modified bisection that should approach the root faster (because it uses secant lines rather than midpoints), but it has the unfortunate property that it doesn't necessarily produce a diminishing interval squeezing the root (as illustrated in Figure 2.10). While an interesting twist on the bisection method, it is not recommended.
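For comparison, a sketch of false position (again, the names and tolerance are illustrative). Note how it keeps a bracketing interval like bisection but splits it where the secant line crosses zero, and how one endpoint can get stuck, so the bracket need not shrink to zero width:

```python
def false_position(f, a, b, tol=1e-12, max_iter=100):
    """Regula Falsi: bisection's bracket, split at the secant-line root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)  # the secant line's zero crossing
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:       # root lies in [a, c]
            b, fb = c, fc
        else:                 # root lies in [c, b]; one endpoint may never move
            a, fa = c, fc
    return c

print(false_position(lambda x: x**2 - 2, 0.0, 2.0))  # ~1.41421356...
```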