MAT360 Section Summary: 2.3

Newton's Method

Summary

So... you may be wondering what the big deal is about Fixed Point Iteration (FPI): I mean, there are all these different iteration functions you can use, but we don't seem to have any way of telling in advance which ones are good and which are bad: some work, others are disasters. If only we could find a function that we could be sure of....

That danged Newton! He seemed to find all the cool stuff. He did it this time, once again. It wasn't enough that he discovered the law of Universal Gravitation, the theory of colors, integral and differential calculus, the binomial theorem, Newton's law of cooling, and the solution to the brachistochrone problem: he had to discover Newton's method too.... Who better, however? What are the chances that the discoverer would have the name Newton too? ;)

Make sure that you're at this year's Sehnert lecture, where Fred Rickey will talk about Newton, the man: October 24th. Math majors and minors should be at the banquet beforehand. Watch for the sign-up sheet in the department.

The method is best approached from the direction of our old friend, the Taylor series expansion, and the root problem (the dual problem of the FP problem): write

\[ f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(\xi)}{2}(x - x_0)^2 \]

or

\[ f(x) \approx f(x_0) + f'(x_0)(x - x_0) \]

When $x = p$, $f(p) = 0$: so, starting from $x_0$, perhaps a better estimate of $p$ will be obtained by solving the following equation for $x_1$, chosen so that $f(x_1) \approx 0$:

\[ 0 = f(x_0) + f'(x_0)(x_1 - x_0) \]

or

\[ x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} \]

and, more generally,

\[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots \]

(provided $f'(x_n) \neq 0$). This is our scheme, illustrated in Figure 2.7, and encapsulated in the fixed point function

\[ g(x) = x - \frac{f(x)}{f'(x)} \]

Notice that $g(p) = p$ (provided $f'(p) \neq 0$).
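
To make the scheme concrete, here is a minimal sketch of the iteration in Python (the names newton, f, fprime, the tolerance, and the stopping rule are all illustrative choices, not from the text):

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton's method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n)."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            fpx = fprime(x)
            if fpx == 0:
                raise ZeroDivisionError("f'(x_n) = 0: Newton step undefined")
            x = x - fx / fpx
        return x  # best estimate after max_iter steps

    # Example: the root sqrt(2) of f(x) = x^2 - 2
    root = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
    print(root)  # 1.4142135623730951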

The really neat thing about this FPI function g is that g'(p)=0, which means that convergence of the FPI scheme will be quite fast:

\[ g'(x) = 1 - \frac{f'(x)^2 - f(x) f''(x)}{f'(x)^2} = \frac{f(x) f''(x)}{f'(x)^2} \]

If f(p)=0, then

\[ g'(p) = \frac{f(p) f''(p)}{f'(p)^2} = 0 \]

This is the feature that makes Newton's method so very grand and wonderful, and which justifies our interest in the method.
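
If you'd rather not grind through the quotient rule by hand, a quick symbolic check (a sketch using the sympy library, with $f(x) = x^2 - 2$ as an arbitrary test function) confirms that $g'$ vanishes at the root:

    import sympy as sp

    x = sp.symbols('x')
    f = x**2 - 2                      # any smooth f with a simple root will do
    g = x - f / sp.diff(f, x)         # Newton's fixed point function
    gp = sp.simplify(sp.diff(g, x))   # equals f(x)*f''(x)/f'(x)**2
    print(gp)                         # a form of (x**2 - 2)/(2*x**2)
    print(gp.subs(x, sp.sqrt(2)))     # 0 at the root p = sqrt(2)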

In fact, Newton's method is said to converge quadratically, by contrast with FPI, which is generally said to converge only linearly. Expanding $f$ about $p_n$, evaluating at the root $p$, and dividing through by $f'(p_n)$ gives

\[ p - p_{n+1} = -\frac{f''(\xi_n)}{2 f'(p_n)} (p - p_n)^2 \]

or

\[ |p - p_{n+1}| = \left| \frac{f''(\xi_n)}{2 f'(p_n)} \right| |p - p_n|^2 \]

so that

\[ \frac{|p - p_{n+1}|}{|p - p_n|} = \left| \frac{f''(\xi_n)}{2 f'(p_n)} \right| |p - p_n| \]

when $p_n$ gets into close proximity to $p$. When will we be assured of ``contracting''? When

\[ \left| \frac{f''(\xi_n)}{2 f'(p_n)} \right| |p - p_n| < 1 \]

Obviously, as long as $\left| f''(\xi_n)/(2 f'(p_n)) \right|$ is bounded, there is a neighborhood of $p$ in which this will happen.
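
You can watch the quadratic convergence happen numerically: with the known root $p = \sqrt{2}$ of $f(x) = x^2 - 2$ (the same illustrative example as above), the error is roughly squared at each step, so the number of correct digits roughly doubles:

    import math

    p = math.sqrt(2)                 # known root of f(x) = x^2 - 2
    x = 1.0
    for n in range(1, 6):
        x = x - (x**2 - 2) / (2*x)   # one Newton step
        print(n, abs(x - p))
    # errors: ~8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12, then 0 to machine precision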

Problems in paradise....

What makes it not quite as good as sliced bread? Newton's method needs the derivative $f'$ at every step (which may be unavailable, or expensive to compute), it breaks down when $f'(x_n) = 0$, and a poor starting guess $x_0$ can send the iterates far from the root.

One cheap fix for the derivative problem is to use the discrete approximation to the derivative: that is,

\[ f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}} \]

Hence, the iteration function becomes

\[ x_{n+1} = x_n - f(x_n) \, \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} \]

which is the heart of the secant method, which requires two approximations to start ($x_0$ and $x_1$) and doesn't converge quadratically (although it does converge super-linearly, with exponent $\frac{1+\sqrt{5}}{2} \approx 1.618$). Hence, it is still better than bisection in terms of convergence; it may, however, fail to bracket a root (it loses track of an interval on which a solution is guaranteed to exist...).
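
Here is the corresponding sketch of the secant iteration (again in Python, with illustrative names and tolerance; note the two starting approximations):

    def secant(f, x0, x1, tol=1e-12, max_iter=50):
        """Secant method: replace f'(x_n) by the difference quotient
        (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 - f0 == 0:
                raise ZeroDivisionError("flat secant: division by zero")
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1

    # Example: sqrt(2) again, from the two required starting values
    print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # 1.414213562...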

The method of False Position (Regula Falsi) is just a modified bisection that should approach the root faster (because it uses secant lines rather than midpoints), but it has the unfortunate property that it doesn't necessarily produce a diminishing interval squeezing the root (as illustrated in Figure 2.10). While an interesting twist on the bisection method, it is not recommended.
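
For comparison, a sketch of Regula Falsi (assuming, as bisection does, that f(a) and f(b) have opposite signs): it cuts the bracket at the secant-line crossing instead of the midpoint, but one endpoint can get stuck, which is exactly why the bracket need not shrink to zero:

    def false_position(f, a, b, tol=1e-12, max_iter=100):
        """Regula Falsi: keep a sign-changing bracket [a, b], cutting it
        at the secant-line crossing rather than the midpoint."""
        fa, fb = f(a), f(b)
        assert fa * fb < 0, "need a sign change on [a, b]"
        for _ in range(max_iter):
            c = b - fb * (b - a) / (fb - fa)   # secant crossing in [a, b]
            fc = f(c)
            if abs(fc) < tol:
                return c
            if fa * fc < 0:        # sign change persists on [a, c]
                b, fb = c, fc
            else:                  # sign change persists on [c, b]
                a, fa = c, fc
        return c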



