Today:
I'm struggling with what to do -- seems like an honest mistake, and yet....
What do you think?
Here's what I propose: those of you who took "the easy way out" will do an additional homework problem: Carry out problem #1, only replace the thing to compute by .
Get that to me by Tuesday after break. I'll read off a list of names, and if you're on it, you owe me. Let me know if I've falsely accused you!
That's the interesting part! That's the analysis.
The very boring part is doing bisections, which is why we spent the time in class working up a very nice bisection algorithm for Mathematica -- why not use it? I can do a quadrillion bisections before breakfast....
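The bisection code we wrote in class was in Mathematica; as a rough sketch of the same idea (in Python here, purely for illustration, not the class version):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2          # midpoint of the current bracket
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:          # root lies in the left half
            b, fb = m, fm
        else:                    # root lies in the right half
            a, fa = m, fm
    return (a + b) / 2

# Example: the root of x^2 - 2 on [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each pass halves the bracket, so the error after n steps is at most (b - a)/2^(n+1) -- which is exactly why the computer, not your pencil, should do the quadrillion bisections.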
For example, notice the Fibonacci numbers popping out in Ben's Mathematica code.
Notice how quickly it converges (try instead of in Ben's example).
With the code in hand, you can experiment; you can play. And it's in play that you really learn things, not in drudgery.
That being said, it really does pay to do a few iterations of any iterative scheme by hand. That, too, can be instructive. My question to you thus becomes, how many iterations by hand is enough before you'll head for the computer?
Defeng agreed, and mentioned "monotonically increasing" on each interval, which was great -- but that's not enough. Why not?
The answer is this: "Generally? Yes; always? No."
Newton's method can be fooled, and I'll illustrate two ways using an example.
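The example from lecture isn't reproduced in these notes, but one classic way Newton's method gets fooled is by cycling: for f(x) = x^3 - 2x + 2, starting at x0 = 0, the iterates bounce between 0 and 1 forever and never find the real root near x = -1.77. A small sketch (in Python for illustration):

```python
def newton(f, fprime, x0, steps):
    """Run a fixed number of Newton iterations, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))  # Newton step: x - f(x)/f'(x)
    return xs

# f(x) = x^3 - 2x + 2, f'(x) = 3x^2 - 2.  Starting at 0:
#   from 0: next = 0 - 2/(-2) = 1;  from 1: next = 1 - 1/1 = 0.  A cycle.
traj = newton(lambda x: x**3 - 2 * x + 2, lambda x: 3 * x**2 - 2, 0.0, 6)
# traj is [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0] -- no convergence in sight.
```

The other standard failure mode is divergence when f'(x) vanishes or the starting guess is far from the root; either way, the moral is the same: Newton is fast near a root, but it comes with no global guarantee.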
That discussion leads us to
Let's think about the difference in the context of a dataset of points, .
Our authors begin by distinguishing between extrapolation and interpolation. What is the difference, in their minds?
You might notice that Muller's method is discussed in the beginning of this section. (I hope that you noticed!)
The authors make a rather bold claim: that "it might be fair to say that interpolation is the most important topic in this book...."
While several classes of interpolation functions -- interpolants -- are described (e.g. polynomials, sines and cosines (Fourier series), exponentials, rational functions), our authors end by saying that we'll focus on polynomials, and we start with Taylor series polynomials.
You have to be careful that you're not deceived by the form of the expression to the right. It is not saying that every function is a polynomial -- all the interesting stuff is buried in that Greek letter "xi".
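The expression itself didn't survive into these notes, but the standard form of Taylor's theorem with its xi-remainder is:

```latex
f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^{k}
     \;+\; \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x-a)^{n+1},
\qquad \xi \text{ between } a \text{ and } x.
```

The sum is an honest polynomial; the last term -- the remainder, with its unknown xi -- is where the function's non-polynomial behavior hides.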
An important observation is made about the computation of these polynomials. It is the introduction of Horner's rule (or method).
This is generally how all polynomials should be evaluated.
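Horner's rule rewrites a polynomial as nested multiplications -- e.g. 2x^3 - 6x^2 + 2x - 1 = ((2x - 6)x + 2)x - 1 -- so a degree-n polynomial takes only n multiplications and n additions. A minimal sketch (in Python for illustration):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.
    coeffs are ordered highest degree first: [a_n, ..., a_1, a_0]."""
    result = 0.0
    for c in coeffs:
        result = result * x + c  # fold in one coefficient per step
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3:
#   2*27 - 6*9 + 2*3 - 1 = 54 - 54 + 6 - 1 = 5
value = horner([2, -6, 2, -1], 3.0)
```

Compare that with the naive approach of computing each power of x separately, which roughly doubles the multiplication count and can be less accurate in floating point.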
Even though the order of convergence is lower, the routine may run in about the same time as Newton's (it depends on the problem, of course).