If you have homework to submit up to section 2.2,
please do so now.
Section 2.3 homework is due Friday.
Questions on anything?
Let's recall what we did last time:
getting something for nothing? That's what we seem to be
doing in this section. Given a linearly convergent sequence,
we can accelerate its convergence using only its values.
Key points:
FPI (fixed-point iteration) is linearly convergent unless
\(f'(\hat{x})=0\) at the fixed point \(\hat{x}\), and its error
shrinks like powers of \(M\), a bound on the derivative near the
fixed point.
FPI is quadratic if \(f'(\hat{x})=0\) at the fixed
point \(\hat{x}\).
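To see where both rates come from, here is the standard Taylor-expansion
argument (a refresher, not something new): writing \(e_n = x_n - \hat{x}\)
for the error,
\[
e_{n+1} = f(x_n) - f(\hat{x}) = f'(\hat{x})\,e_n + \tfrac{1}{2}f''(\xi)\,e_n^2,
\]
so when \(f'(\hat{x}) \ne 0\) each step multiplies the error by roughly
\(f'(\hat{x})\), giving \(|e_n| \lesssim M^n |e_0|\); when
\(f'(\hat{x}) = 0\) the quadratic term dominates and convergence is quadratic.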
Even though FPI may be linear, we can accelerate
it with just a little cleverness (Aitken's)!
Aitken's delta-squared method uses the starting
value and two successive iterates to generate
an improved estimate. Steffensen's strategy was
then to use that improved estimate as the next
starting value, and do it again:
\[
\begin{array}{c}
{x_1=f(x_0)}\cr
{x_2=f(x_1)}\cr
{\hat{x}\approx a = x_2 - \frac{(x_2-x_1)^2}{x_0-2x_1+x_2}}\cr
{x_0=a}
\end{array}
\]
and iterate....
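A minimal Python sketch of that loop (my own illustration, not course code;
the names `steffensen`, `tol`, and `max_iters` are choices made here):
```python
import math

def steffensen(f, x0, tol=1e-12, max_iters=50):
    """Steffensen's acceleration of the fixed-point iteration x = f(x)."""
    for _ in range(max_iters):
        x1 = f(x0)                       # two ordinary FPI steps
        x2 = f(x1)
        denom = x0 - 2.0 * x1 + x2
        if denom == 0.0:                 # Aitken correction undefined; iterates agree
            return x2
        a = x2 - (x2 - x1) ** 2 / denom  # Aitken's improved estimate
        if abs(a - x0) < tol:
            return a
        x0 = a                           # restart FPI from the improved estimate
    return x0

# Example: x = cos(x) converges only linearly under plain FPI,
# but the accelerated loop reaches machine precision in a few passes.
print(steffensen(math.cos, 1.0))         # ~0.7390851332151607
```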
Today:
We discuss Newton's method, which we can think of first
graphically, where it makes utter sense, and then as a fixed-point
method with a particularly clever choice of iteration function.
Let's start with an example, and then derive the method
graphically. The example illustrates why Newton's method is so
clever: the iteration function has zero derivative at
the fixed point.
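As a preview of that claim (a standard computation; here \(F\) denotes the
function whose root we seek, a name chosen for this note, and \(f\) is the
iteration function as above): Newton's iteration function is
\[
f(x) = x - \frac{F(x)}{F'(x)}, \qquad
f'(x) = \frac{F(x)\,F''(x)}{F'(x)^2},
\]
so \(f'(\hat{x}) = 0\) at a simple root, where \(F(\hat{x}) = 0\) and
\(F'(\hat{x}) \ne 0\).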
Then we'll see how we can spare ourselves a derivative calculation
at the cost of slower convergence (with \(\alpha = \phi
\approx 1.618\), the golden mean!).
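For reference, the derivative-free variant being alluded to is presumably the
secant method, which replaces \(F'(x_n)\) with a difference quotient (same
\(F\) naming as above):
\[
x_{n+1} = x_n - F(x_n)\,\frac{x_n - x_{n-1}}{F(x_n) - F(x_{n-1})},
\]
whose order of convergence at a simple root is the golden mean
\(\phi = \tfrac{1+\sqrt{5}}{2} \approx 1.618\).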