Today:
Hopefully you received my email about my own version of the global adaptive quadrature scheme.
Let's talk about how it's gone, and any discoveries we've made about the two different approaches, Mathematica, tips for tricky functions (without specifics, please), etc.
Your projects are due by midnight tonight! And remember to polish them up a bit....
\[ U(t_{n+1}) = U(t_{n})+ h U'(t_{n}) + \frac{h^2}{2}U''(t_{n})+ \frac{h^3}{3!}U'''(\xi_{n}) \]
The Runge-Kutta idea is to eliminate the derivatives by using clever choices of intermediate observations, which we can think of as being obtained by simpler methods (e.g. trusting Euler only half way...). We can use the multivariate Taylor series expansion both to obtain these methods and to verify that they're good. So last time we saw that Runge's midpoint method agrees with Taylor-2 up to the quadratic term (at which point their error terms differ, but are of similar order in $h$).
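As a quick check (just a sketch of the Taylor-series verification, writing $U' = f(t,U)$ and expanding the midpoint step in $h$):
\[ U_{n+1} = U_n + h\, f\Big(t_n + \tfrac{h}{2},\; U_n + \tfrac{h}{2} f(t_n, U_n)\Big) = U_n + h f + \frac{h^2}{2}\big(f_t + f\, f_u\big) + O(h^3), \]
and since $U'' = f_t + f\, f_u$, this reproduces the Taylor-2 step through the $h^2$ term; the two methods first disagree at $O(h^3)$.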
Euler's | |
Taylor-2 | |
Runge Midpoint | |
Runge Trapezoidal | 1.583 |
RK-4 | |
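Here is a minimal Python sketch (the course tool is Mathematica, but the structure is the same) of the one-step methods in the table above; the test problem $u' = u$, $u(0) = 1$ on $[0,1]$ and the 10 steps are my own choices for illustration.

```python
import math

def euler_step(f, t, u, h):
    return u + h * f(t, u)

def midpoint_step(f, t, u, h):           # Runge's midpoint rule
    k1 = f(t, u)
    return u + h * f(t + h/2, u + (h/2) * k1)

def trapezoid_step(f, t, u, h):          # Runge's trapezoidal (Heun) rule
    k1 = f(t, u)
    k2 = f(t + h, u + h * k1)
    return u + (h/2) * (k1 + k2)

def rk4_step(f, t, u, h):                # classical RK-4
    k1 = f(t, u)
    k2 = f(t + h/2, u + (h/2) * k1)
    k3 = f(t + h/2, u + (h/2) * k2)
    k4 = f(t + h, u + h * k3)
    return u + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

def solve(step, f, t0, u0, T, n):
    """March from t0 to T in n equal steps with the given one-step method."""
    h, t, u = (T - t0) / n, t0, u0
    for _ in range(n):
        u = step(f, t, u, h)
        t += h
    return u

f = lambda t, u: u                       # u' = u, so u(1) = e
for name, step in [("Euler", euler_step), ("Midpoint", midpoint_step),
                   ("Trapezoidal", trapezoid_step), ("RK-4", rk4_step)]:
    approx = solve(step, f, 0.0, 1.0, 1.0, 10)
    print(f"{name:12s} {approx:.8f}   error {abs(approx - math.e):.2e}")
```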
By the way, evidently Exp is implemented in Mathematica via its Taylor series expansion. So I'm using a cannon to kill a fly -- which sounds pretty merciless....:)
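If you want to see the cannon fire, here is a toy Python version of summing the Taylor series for $e^x$ until the terms stop mattering -- this is only an illustration, not Mathematica's actual implementation.

```python
def exp_taylor(x, tol=1e-16):
    """Sum 1 + x + x^2/2! + ... until the next term is negligibly small."""
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / n          # builds x^n / n! incrementally
        total += term
    return total

print(exp_taylor(1.0))         # compare with e = 2.718281828459045...
```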
These are among the things we learned to do in this course.
(This is my favorite figure of the text, if you haven't guessed!)
One problem is that computers speak base 2 and we speak base 10, and a number that terminates in one base only rarely ends up being a machine number in the other. So one or both of us is in error.
Rounding or chopping occurs, and we need to be careful in how we do that. Rounding-to-even is a way of avoiding bias in our calculations.
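A quick illustration in Python (any IEEE double-precision arithmetic behaves the same way): the decimal 0.1 is not a base-2 machine number, and Python's built-in round happens to use round-to-even for ties.

```python
# 0.1 has no finite base-2 expansion, so the stored machine number is slightly off.
print(f"{0.1:.20f}")        # 0.10000000000000000555...
print(0.1 + 0.2 == 0.3)     # False: the two rounding errors don't cancel

# Round-to-even ("banker's rounding") breaks ties toward the even digit,
# so ties round up about as often as they round down -- no systematic bias.
print([round(x) for x in (0.5, 1.5, 2.5, 3.5)])   # [0, 2, 2, 4]
```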
We've seen that, even if our division button isn't working, we can use Newton's method or bisection to compute quotients anyway!
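For instance (a sketch of the standard trick, with the starting guess and tolerance my own choices): to divide by $a > 0$ without dividing, apply Newton's method to $f(x) = 1/x - a$; the update $x_{n+1} = x_n(2 - a x_n)$ needs only multiplication and subtraction.

```python
import math

def reciprocal(a, tol=1e-15, max_iter=60):
    """Approximate 1/a for a > 0 via Newton's method on f(x) = 1/x - a (no division)."""
    m, e = math.frexp(a)                # a = m * 2**e with 0.5 <= m < 1
    x = math.ldexp(1.0, -e)             # starting guess 2**(-e) guarantees 0 < a*x < 2
    for _ in range(max_iter):
        x_new = x * (2.0 - a * x)       # Newton update uses only * and -
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x

print(7 * reciprocal(7.0))              # very close to 1.0
```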
It turned out that Taylor series were useful tools in many places across the semester. One thing that Taylor's theorem does is allow us to bound errors.
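For example, the remainder term in the expansion at the top of these notes bounds the local error of a Taylor-2 step, assuming $U'''$ stays bounded on the interval:
\[ \Big| U(t_{n+1}) - \Big( U(t_n) + h U'(t_n) + \tfrac{h^2}{2} U''(t_n) \Big) \Big| \;\le\; \frac{h^3}{3!} \max_t \big| U'''(t) \big| . \]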
Errors also compete: one of my favorite parts of the course is discovering that the step-size for integration should be made small, but only so small: if you make it too small, the round-off errors from all the additions you do start to add up and overwhelm the advantage achieved by reducing the truncation error.
There's an optimal value, given the form of the error terms and the machine you're using.
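As a rough model (my own back-of-the-envelope version, with $C$ the truncation-error constant, $p$ the order of the rule, and $\epsilon$ the round-off level), the total error over a fixed interval behaves like
\[ E(h) \;\approx\; C h^{p} + \frac{\epsilon}{h}, \qquad E'(h^{*}) = 0 \;\Longrightarrow\; h^{*} = \Big( \frac{\epsilon}{p\, C} \Big)^{1/(p+1)}, \]
so shrinking $h$ below $h^{*}$ makes the total error grow again.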
Sometimes errors can even cooperate! This is the lesson of Simpson's rule: wherever you have two estimates, you have a third; and if you're clever, you can add them so that their errors cancel:
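In the familiar single-interval version, with midpoint estimate $M$ and trapezoid estimate $T$ (whose leading $f''$ error terms have opposite signs and a 2 : 1 ratio), the cancelling combination is Simpson's rule:
\[ M = (b-a)\, f\Big(\tfrac{a+b}{2}\Big), \quad T = (b-a)\, \frac{f(a)+f(b)}{2}, \quad S = \frac{2M + T}{3} = \frac{b-a}{6}\Big( f(a) + 4 f\Big(\tfrac{a+b}{2}\Big) + f(b) \Big). \]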
Another recurring move is to swap one method for a relative that avoids derivatives we can't (or don't want to) compute:
Newton's method is replaced by the secant method (and then Muller's; sketched below), or
higher-order Taylor methods for ODEs are replaced by RK methods,
and we hope that we have improved.
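A minimal sketch of that first swap (the test function, starting points, and tolerance are my own choices): the secant method replaces Newton's $f'(x_n)$ with the slope through the last two iterates.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f using the secant method (no derivative needed)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                               # flat secant line: give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)       # Newton step with f' ~ (f1 - f0)/(x1 - x0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x*x - 2.0, 1.0, 2.0))       # ~ 1.41421356...
```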
Newton's method is an example of one of these "fixed point" schemes, where you know you're done when the output equals the input: $x = g(x)$, so successive iterates stop changing.
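A sketch of the generic loop (stopping when successive iterates agree to a tolerance, my stand-in for "the output equals the input"); Newton's method is the special case $g(x) = x - f(x)/f'(x)$.

```python
def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x <- g(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:     # x is (numerically) a fixed point: g(x) = x
            return x_new
        x = x_new
    return x

# Newton's method for f(x) = x^2 - 2, written as a fixed-point scheme:
g = lambda x: x - (x*x - 2.0) / (2.0*x)
print(fixed_point(g, 1.0))           # ~ sqrt(2) = 1.41421356...
```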
We have to listen to our functions! (Did you ever think that you'd be listening to your functions?) Sometimes they'll tell you lies, however; and sometimes you'll leap to false conclusions. "Beware the Jabberwock, my son (and daughter)!"
Best of luck to you all!
It's not quite what you're asked to do in this algorithm, however, since you're to keep track of the error estimates for all the pieces, and keep poking at the largest one:
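For the flavor of it, here is a minimal sketch along those lines -- it is not the assignment's algorithm; the Simpson-based error estimate, the priority queue, the tolerance, and the names are all my own choices.

```python
import heapq
import math

def simpson(f, a, b):
    """Simpson's rule on a single interval."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive_quad(f, a, b, tol=1e-10, max_splits=10_000):
    """Global adaptive quadrature: always subdivide the piece with the
    largest estimated error (a priority queue keeps that piece on top)."""
    def piece(lo, hi):
        whole = simpson(f, lo, hi)
        mid = (lo + hi) / 2.0
        halves = simpson(f, lo, mid) + simpson(f, mid, hi)
        err = abs(halves - whole)              # error estimate for this piece
        return (-err, lo, hi, halves)          # negate: heapq is a min-heap

    heap = [piece(a, b)]
    for _ in range(max_splits):
        total_err = -sum(item[0] for item in heap)
        if total_err <= tol:
            break
        neg_err, lo, hi, _ = heapq.heappop(heap)   # the worst piece
        mid = (lo + hi) / 2.0
        heapq.heappush(heap, piece(lo, mid))
        heapq.heappush(heap, piece(mid, hi))
    return sum(item[3] for item in heap)

print(adaptive_quad(math.sin, 0.0, math.pi))   # exact value is 2
```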