Error Analysis for Iterative Methods
Summary
Alex asked if there's a good way of getting a handle on the number of iterations needed in Newton's method (in problem #5 of section 2.3 he discovered that the answers in the back of the text were given to more accuracy than required). That's the subject of this section.
We learned a bit previously: in section 2.2 we obtained useful bounds for fixed-point methods, e.g.

$$|p_n - p| \le \frac{k^n}{1-k}\,|p_1 - p_0|, \qquad\qquad (1)$$

where $p_n = g(p_{n-1})$ and $|g'(x)| \le k < 1$ on $[a,b]$, which brackets the fixed point $p$. You can use this for Newton's method, but perhaps we can do better, since the convergence is better (I've asserted that it's ``quadratic'', rather than linear).
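Bound (1) turns directly into an iteration count. Here's a minimal sketch in Common Lisp (the function name and argument order are my own, not from the text): given $k$, $\delta = |p_1 - p_0|$, and a tolerance, it returns the smallest $n$ with $k^n\delta/(1-k) \le$ tol.

    (defun fp-iteration-bound (k delta tol)
      "Smallest n with k^n * delta / (1-k) <= tol, where delta = |p1 - p0|
       and 0 < k < 1.  Solve k^n <= tol*(1-k)/delta by taking logs
       (log k < 0, so the inequality flips)."
      (ceiling (/ (log (/ (* tol (- 1 k)) delta))
                  (log k))))

For example, (fp-iteration-bound 0.5 0.5 1.0e-4) returns 14: with $k = 1/2$ the error bound merely halves each step, so it takes a while to squeeze below $10^{-4}$.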
Theorem 2.5 (from section 2.3): Let $f \in C^2[a,b]$. If $p \in [a,b]$ is such that $f(p)=0$ and $f'(p) \neq 0$, then there exists a $\delta > 0$ such that Newton's method generates a sequence $\{p_n\}_{n=1}^{\infty}$ converging to $p$ for any initial approximation $p_0 \in [p-\delta,\, p+\delta]$.
This result is ``obvious'' (I claimed, in 2.2), since Newton's method is just fixed-point iteration on

$$g(x) = x - \frac{f(x)}{f'(x)}, \qquad \text{with} \qquad g'(x) = \frac{f(x)\,f''(x)}{[f'(x)]^2} \to 0$$

when $x$ gets into close proximity (i.e. a $\delta$-neighborhood) of $p$, because $f(p) = 0$ while $f'(p) \neq 0$. We can be assured of ``contracting'' as long as the magnitude of $g'$ is bounded (e.g. $|g'(x)| \le k$) in that neighborhood, so long as $k < 1$. That's obviously arrangeable, since $g'(p) = 0$: we simply choose $\delta$ small enough that $|g'(x)| \le k < 1$ on $[p-\delta, p+\delta]$, and we're assured that we'll converge by the Fixed-Point Theorem (2.3).
Definition 2.6: Suppose that $\{p_n\}_{n=0}^{\infty}$ is a sequence that converges to $p$, with $p_n \neq p$ for all $n$. If positive constants $\lambda$ and $\alpha$ exist with

$$\lim_{n \to \infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \lambda,$$

then the sequence converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.
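One way to get a feel for Definition 2.6: if $e_n = |p_n - p|$ and $e_{n+1} \approx \lambda e_n^{\alpha}$, then $\alpha \approx \log(e_{n+1}/e_n)/\log(e_n/e_{n-1})$. A minimal sketch in Common Lisp (the function name is mine, and it assumes you already know the limit $p$):

    (defun estimate-order (p seq)
      "Estimate the order of convergence alpha from three consecutive
       iterates in the list SEQ, using the known limit P:
       alpha ~ log(e2/e1) / log(e1/e0)."
      (let ((e0 (abs (- (first seq) p)))
            (e1 (abs (- (second seq) p)))
            (e2 (abs (- (third seq) p))))
        (/ (log (/ e2 e1))
           (log (/ e1 e0)))))

Feeding it three consecutive late iterates from Newton's method should return a value near 2.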
Q: What does asymptotic mean?
Q: Is bisection linearly convergent? Contrast this with Exercise #9, for your homework.
Theorem 2.7: Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$ for all $x \in [a,b]$. Suppose, in addition, that $g'$ is continuous on $(a,b)$ and a positive constant $k<1$ exists with

$$|g'(x)| \le k, \quad \text{for all } x \in (a,b).$$

If $g'(p) \neq 0$, then for any number $p_0 \neq p$ in $[a,b]$, the sequence of iterates

$$p_n = g(p_{n-1}), \quad \text{for } n \ge 1,$$

converges only linearly to the unique fixed point $p$ in $[a,b]$.
Proof (by the MVT): $p_{n+1} - p = g(p_n) - g(p) = g'(\xi_n)(p_n - p)$ for some $\xi_n$ between $p_n$ and $p$; since $p_n \to p$ squeezes $\xi_n \to p$, the ratio $|p_{n+1} - p|/|p_n - p| \to |g'(p)| \neq 0$, which is exactly linear convergence with asymptotic error constant $|g'(p)|$.
Theorem 2.8: Let $p$ be a solution of the equation $x=g(x)$. Suppose that $g'(p)=0$ and $g''$ is continuous and strictly bounded by $M$ on an open interval $I$ containing $p$. Then there exists a $\delta > 0$ such that, for $p_0 \in [p-\delta,\, p+\delta]$, the sequence defined by $p_n = g(p_{n-1})$, $n \ge 1$, converges at least quadratically to $p$. Moreover, for sufficiently large values of $n$,

$$|p_{n+1} - p| < \frac{M}{2}\,|p_n - p|^2.$$
(Hence, Newton's method is quadratic.)
Proof (by Taylor series, and Fixed-Point theorem): expanding $g$ about $p$, and using $g'(p) = 0$, gives $p_{n+1} = g(p_n) = p + \frac{g''(\xi_n)}{2}(p_n - p)^2$ for some $\xi_n$ between $p_n$ and $p$; the bound $M$ on $g''$ does the rest.
Example: Here's where we can make use of the quadratic convergence to address Alex's question. For problem #5b, for example, with

$$f(x) = x^3 + 3x^2 - 1 = 0$$

and a solution $p \in [-3,-2]$, we use

$$g(x) = x - \frac{f(x)}{f'(x)} = x - \frac{x^3 + 3x^2 - 1}{3x^2 + 6x}$$

and then compute the first and second derivatives of $g$. We note that by Theorem 2.2 there is a unique fixed point in the interval $[-3,-2.74]$: $g$ maps the interval into itself (its maximum value there is $g(p) = p \approx -2.879 < -2.74$), and $g'$ has a maximum magnitude of $< 0.27$ on the interval. So we could use Equation (1) above to make our estimate (it gives 8 iterations).
We can do better, of course! Here it is in lisp:
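(A minimal sketch of such a listing in Common Lisp; the function names, the stopping test, and the starting point $p_0 = -3$ are my assumptions:)

    (defun newton (f fprime p0 tol max-iter)
      "Newton's method: iterate p <- p - f(p)/f'(p) until successive
       iterates agree to within TOL, or MAX-ITER steps elapse.
       Returns the approximation and the iteration count."
      (loop for n from 1 to max-iter
            for p = (- p0 (/ (funcall f p0) (funcall fprime p0)))
            do (when (< (abs (- p p0)) tol)
                 (return (values p n)))
               (setf p0 p)
            finally (return (values p0 max-iter))))

    ;; Problem #5b: f(x) = x^3 + 3x^2 - 1, f'(x) = 3x^2 + 6x
    (newton (lambda (x) (- (* x x (+ x 3)) 1))
            (lambda (x) (* 3 x (+ x 2)))
            -3.0 1.0e-4 20)

With these choices it returns $p \approx -2.8794$ after just 3 iterations, rather than the 8 that the linear bound (1) predicted.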
Theorem: the secant method is of order the golden mean, $\alpha = (1+\sqrt{5})/2 \approx 1.62$ (so better than linear, though not quite quadratic).
Motivation: #12
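To see that order empirically, here is a minimal secant sketch in Common Lisp (again, the names, starting values, and stopping test are my assumptions); running it on #5b's $f$ and feeding late iterates to estimate-order above should produce values drifting toward 1.6:

    (defun secant (f p0 p1 tol max-iter)
      "Secant method: Newton's method with f'(p) replaced by the
       slope of the chord through the last two iterates."
      (loop for n from 1 to max-iter
            for p = (- p1 (/ (* (funcall f p1) (- p1 p0))
                             (- (funcall f p1) (funcall f p0))))
            do (when (< (abs (- p p1)) tol)
                 (return (values p n)))
               (setf p0 p1 p1 p)
            finally (return (values p1 max-iter))))

    ;; e.g. (secant (lambda (x) (- (* x x (+ x 3)) 1)) -3.0 -2.74 1.0e-8 30)

In principle each secant step needs only one new evaluation of $f$ and no derivative at all (this sketch lazily re-evaluates, for clarity), which is the method's practical appeal.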
It's possible to create methods that are of higher order than Newton's, but one does so at the expense of more constraints on $f$ (e.g. requiring $f \in C^3$), and greater computational complexity:
Example: #11
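One classical instance (my example here, not necessarily the one exercise #11 has in mind) is Halley's method, which uses $f''$ as well and converges cubically to a simple zero:

    (defun halley (f fprime f2prime p0 tol max-iter)
      "Halley's method: p <- p - 2 f f' / (2 f'^2 - f f'').
       Cubic convergence at a simple zero, at the price of needing f''."
      (loop for n from 1 to max-iter
            for fv  = (funcall f p0)
            for f1v = (funcall fprime p0)
            for f2v = (funcall f2prime p0)
            for p = (- p0 (/ (* 2 fv f1v)
                             (- (* 2 f1v f1v) (* fv f2v))))
            do (when (< (abs (- p p0)) tol)
                 (return (values p n)))
               (setf p0 p)
            finally (return (values p0 max-iter))))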
Definition 2.9: A solution $p$ of $f(x)=0$ is a zero of multiplicity $m$ of $f$ if, for $x \neq p$, we can write $f(x) = (x-p)^m q(x)$, where $\lim_{x \to p} q(x) \neq 0$.
Theorem 2.10: $f \in C^1[a,b]$ has a simple zero at $p \in (a,b)$ if and only if $f(p) = 0$, but $f'(p) \neq 0$.
Theorem 2.11: $f \in C^m[a,b]$ has a zero of multiplicity $m$ at $p \in (a,b)$ if and only if $0 = f(p) = f'(p) = \cdots = f^{(m-1)}(p)$, but $f^{(m)}(p) \neq 0$.
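Why multiplicity matters here: at a multiple zero the hypothesis $f'(p) \neq 0$ of Theorem 2.5 fails, and Newton's method drops from quadratic to merely linear convergence. A quick experiment (my choice of example): for $f(x) = x^2$, which has a zero of multiplicity 2 at $p = 0$, Newton gives $p - f(p)/f'(p) = p/2$, so the error only halves each step:

    ;; Newton on f(x) = x^2 (double zero at 0): linear convergence
    ;; with asymptotic error constant 1/2, not quadratic.
    (newton (lambda (x) (* x x))
            (lambda (x) (* 2 x))
            1.0 1.0e-4 50)   ; => about 6.1e-5, after 14 iterations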