Today:
Homework pp. 93--, #1-5, 6, 8, 11, 13, 14, 16, 22 (use your Muller code), 23
I'm not going to spend time on Section 2.7: it's basically about "bracketing" the iteration, and it makes a lot of sense -- but it's not really novel or particularly interesting. We cobble together some ideas (e.g. bisection + Newton) to prevent failure of convergence (which is very important, of course). Once you've got a root in a box, you really don't want to let it out.
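If you want the flavor of the idea in code, here's a little sketch of my own (not the book's Section 2.7 algorithm): take a Newton step, but if it would jump out of the current bracket, fall back on a bisection step instead.

    # Keep the root "in a box": Newton when the step stays inside the bracket,
    # bisection otherwise.
    def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=100):
        fa = f(a)
        assert fa * f(b) <= 0, "need a sign change on [a,b]"
        x = 0.5 * (a + b)
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol or (b - a) / 2 < tol:
                return x
            if fa * fx < 0:        # root is between a and x
                b = x
            else:                  # root is between x and b
                a, fa = x, fx
            dfx = df(x)
            newton_ok = dfx != 0 and a < x - fx / dfx < b
            x = x - fx / dfx if newton_ok else 0.5 * (a + b)
        return x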
That one I've got to let him know about! :)
\[ t(\cdots t(t(a_{n}t+a_{n-1})+a_{n-2})+\cdots+a_{1})+a_{0} \] becomes \[ a_{0}+t(a_{1}+t(a_{2}+t(a_{3}+\cdots+t(a_{n-1}+t\,a_{n})\cdots))) \]
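Just to see what nested multiplication buys you, here's a quick sketch in Python (my own, not anything from the book): evaluating a degree-n polynomial this way costs only n multiplications and n additions.

    # Evaluate a_0 + a_1*t + ... + a_n*t^n by nested multiplication.
    # coeffs = [a_0, a_1, ..., a_n] (constant term first).
    def nested_eval(coeffs, t):
        result = 0.0
        for a in reversed(coeffs):   # start with a_n and work outward
            result = result * t + a
        return result

    print(nested_eval([1, 2, 3], 2.0))   # 1 + 2t + 3t^2 at t=2 gives 17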
Let's work through the example on p. 87.
(In the exercises you'll extend this idea to a cubic!)
I'll gripe a little about our author's choice of how he represents the algorithm on pp. 91-92 (he's taking some abuse in this section! :). My feeling is that this would be a good time to introduce divided differences. If you look at the formulas on p. 91, you'll see that the same quantities get re-used -- and any time you re-use a quantity, you should avoid re-computing it.
Divided differences are what we use to approximate derivatives, say: \[ DD(x_1,x_0) \equiv \frac{f(x_1)-f(x_0)}{x_1-x_0} \]
Of course, once you have divided differences, you can take divided differences of them (essentially higher-order derivatives, up to constant factors):
\[ DD(x_2,x_1,x_0) \equiv \frac{DD(x_2,x_1)-DD(x_1,x_0)}{x_2-x_0} \]
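In code these are one-liners; here's a throwaway sketch (my names, not the book's):

    # First and second divided differences, exactly as defined above.
    def DD1(f, x1, x0):
        return (f(x1) - f(x0)) / (x1 - x0)

    def DD2(f, x2, x1, x0):
        return (DD1(f, x2, x1) - DD1(f, x1, x0)) / (x2 - x0)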
This allows us to write \[ a=DD(p_1,p_2,p_0)=\frac{DD(p_1,p_2)-DD(p_2,p_0)}{p_1-p_0} \] and \[ b=DD(p_1,p_2)-(p_1-p_2)a. \]
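So in your Muller code you might compute the coefficients like this (a sketch only, re-using the DD1 helper above; I'm assuming the usual quadratic P(x) = a(x-p_2)^2 + b(x-p_2) + c, so that c = f(p_2)) -- the point being that DD(p_1,p_2) is computed once and re-used:

    # Coefficients of the Muller quadratic, built from divided differences.
    # Note that dd12 is computed once and re-used in both a and b.
    def muller_coefficients(f, p0, p1, p2):
        dd12 = DD1(f, p1, p2)
        dd20 = DD1(f, p2, p0)
        a = (dd12 - dd20) / (p1 - p0)    # = DD(p1, p2, p0)
        b = dd12 - (p1 - p2) * a
        c = f(p2)                        # assuming P(x) = a(x-p2)^2 + b(x-p2) + c
        return a, b, c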