- The primary fact of life is that calculators are almost always
wrong; but they're so close that we don't care. "All models are wrong;
some models are useful."
- We're going to make errors: there are those we can control, and
those we can't. We should do everything to minimize those we can
control, and to remain mindful of those we can't control.
- What does your calculator do? It
- performs the basic arithmetic operations,
- graphs functions,
- computes derivatives,
- computes integrals,
- solves equations (e.g. finds roots), and
- may even solve ODEs.
These are among the things we learned to do in this course.
- An approximation without an estimate of the error made is a poor
thing.
- Start with the representation of numbers. Numbers in the computer are
approximations, not exact. Very few (relatively) of the reals are in
there, and the gaps between neighboring machine numbers tell us what
kind of error we make in representing any real number.
One problem is that computers speak base 2, and we speak base 10; a
number with a finite representation in one base only rarely has one in
the other. So one or both of us is in error.
Rounding or chopping occurs, and we need to be careful in how we do
that.
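A quick Python illustration (any IEEE-754 double-precision machine behaves this way):

```python
# 1/10 has no finite base-2 expansion, so the machine stores the
# nearest double instead; the error surfaces as soon as we compare.
print(0.1 + 0.2 == 0.3)   # False
print(f"{0.1:.20f}")      # 0.10000000000000000555...

from decimal import Decimal
print(Decimal(0.1))       # the exact binary value actually stored
```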
- Algorithms for even some of the most common operations you've
always known may not be ideal ("ill-conditioned", or inefficient);
sketches of both fixes appear below.
- Quadratic formula (the textbook form loses accuracy to subtractive
cancellation when b^2 >> 4ac)
- Evaluation of a polynomial should be carried out via Horner's method.
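Minimal sketches of both fixes (the function names and signatures are mine, not the course's):

```python
import math

def quadratic_roots(a, b, c):
    """Avoid the cancellation in -b + sqrt(b^2 - 4ac) when b^2 >> 4ac:
    compute the numerically safe root first, then recover the other
    from the product of the roots, c/a. Assumes real roots."""
    disc = math.sqrt(b*b - 4*a*c)
    q = -(b + math.copysign(disc, b)) / 2   # same signs: no cancellation
    return q / a, c / q

def horner(coeffs, x):
    """Evaluate coeffs[0]*x^n + ... + coeffs[n] in n multiplies and
    n additions, instead of computing each power separately."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(quadratic_roots(1.0, -1e8, 1.0))  # naive formula returns 0.0 for the small root
print(horner([2, -3, 0, 5], 2.0))       # 2x^3 - 3x^2 + 5 at x = 2 -> 9.0
```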
- Representation of functions is similar: sine is not in your
calculator; a (truncated) Taylor series for sine is. This is so important because we replace complicated
functions with the operations of addition, subtraction, multiplication,
and division -- those operations which we can program computers to do.
It turned out that Taylor series were useful tools in many places
across the semester. One thing that Taylor's theorem does is allow us
to bound errors.
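A minimal sketch (mine, not the course's code): sine from its Taylor series, using the first omitted term as the error bound (valid here because the terms alternate and decrease for modest |x|):

```python
import math

def taylor_sin(x, n_terms=8):
    """Partial sum of sin(x) = x - x^3/3! + x^5/5! - ..., plus an
    error bound from the first omitted term (alternating series)."""
    total, term = 0.0, x
    for k in range(n_terms):
        total += term
        term *= -x*x / ((2*k + 2) * (2*k + 3))   # next odd-power term
    return total, abs(term)

approx, bound = taylor_sin(1.0)
print(approx, math.sin(1.0), abs(approx - math.sin(1.0)) <= bound)
```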
Errors also compete: one of my favorite parts of the course is
discovering that the step-size for integration should be made small,
but only so small: if you make it too small, the round-off errors from
all the additions you do start to add up and overwhelm the advantage
achieved by reducing the truncation error.
There's an optimal step size, determined by the form of the truncation
error and the round-off level (machine epsilon) of the machine you're using.
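A sketch of the competition, with the trapezoid rule accumulated in float32 so the round-off side shows up at modest n (the specific numbers vary by machine and are only illustrative):

```python
import math
import numpy as np

def trapezoid32(n):
    """Composite trapezoid rule for sin on [0, pi] (exact answer: 2),
    accumulated naively left-to-right in float32. The endpoint terms
    are zero here. Truncation error falls like 1/n^2, but round-off
    from the long running sum grows as n does."""
    h = math.pi / n
    total = np.float32(0.0)
    for i in range(1, n):
        total += np.float32(math.sin(i * h))
    return float(total) * h

for n in [8, 64, 512, 4096, 32768, 262144]:
    print(n, abs(trapezoid32(n) - 2.0))   # error falls, then rises again
```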
Sometimes errors can even cooperate! This is the lesson of Simpson's
rule: wherever you have two estimates, you have a third; and if you're
clever, you can combine them so that their leading errors cancel: given
two trapezoid estimates T(h) and T(h/2), the combination
S = (4*T(h/2) - T(h))/3 kills the h^2 error term -- and S is exactly
Simpson's rule.
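In code (a sketch; `trapezoid` here is the standard composite rule):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

f, a, b = math.sin, 0.0, math.pi      # exact integral: 2
T1 = trapezoid(f, a, b, 8)
T2 = trapezoid(f, a, b, 16)
S = (4*T2 - T1) / 3                   # errors cancel: this is Simpson's rule
print(abs(T1 - 2), abs(T2 - 2), abs(S - 2))   # S wins by orders of magnitude
```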
- We can use polynomials to capture the behavior of a series of
points (splines), a function (Taylor), or even for design (e.g. Bézier).
- The most important polynomial is the linear. We base a lot of
methods off of the tangent line, for example:
- Newton's method for root-finding
- Euler's method for solving ODEs.
Each of these methods can be improved by using higher-order
Taylor approximations: e.g. rather than a tangent line, use a
"tangent quadratic", which gets both the slope and the curvature
right at the point. (A sketch of both tangent-line methods follows.)
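A minimal sketch of both tangent-line methods (names and signatures are mine):

```python
def newton(f, fprime, x, steps=6):
    """Slide down the tangent line to its root, repeatedly:
    x <- x - f(x)/f'(x)."""
    for _ in range(steps):
        x -= f(x) / fprime(x)
    return x

def euler(f, t0, y0, t_end, n):
    """Solve y' = f(t, y), y(t0) = y0 by walking along the tangent
    line in steps of size h: y <- y + h*f(t, y)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))   # ~ sqrt(2)
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100))       # ~ e = 2.718...
```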
- Methods involving the computation of derivatives may be improved
(or faked) by replacing analytic derivatives with numerical
versions. We perform a rather precise balancing act, similar to what
happens in Simpson's rule. And so:
- Newton's method is replaced by the secant method (and then Muller's), and
- higher-order Taylor methods for ODEs are replaced by Runge-Kutta (RK) methods.
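For example, the secant step needs only function values, no derivative (a minimal sketch; `secant` and its stopping guard are mine):

```python
def secant(f, x0, x1, steps=8):
    """Newton's method with the analytic derivative replaced by the
    slope of the secant line through the last two iterates."""
    for _ in range(steps):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # flat secant (or fully converged): stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

print(secant(lambda x: x*x - 2, 1.0, 2.0))   # ~ sqrt(2)
```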
- Each of these is an iterative scheme: we compute a new estimate
x_{n+1} from the current x_n and hope that we have improved.
Newton's method is an example of one of these "fixed point"
schemes, x_{n+1} = g(x_n), where you know you're done when x_{n+1}
and x_n (nearly) agree -- that is, when g(x) = x to within tolerance.
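A generic driver (a sketch; the stopping test is the whole point):

```python
def fixed_point(g, x, tol=1e-12, max_iter=100):
    """Iterate x <- g(x) until successive iterates agree to within
    tol: at that point x is (nearly) a fixed point, g(x) = x."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Newton's method for f(x) = x^2 - 2, viewed as a fixed-point scheme
# with g(x) = x - f(x)/f'(x):
print(fixed_point(lambda x: x - (x*x - 2) / (2*x), 1.0))   # ~ sqrt(2)
```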
- Some schemes are recursive, e.g. adaptive quadrature. Our adaptive
quadrature code was also useful for intelligent choice of points for
plotting a function (the points being joined by a spline to make a nice
smooth curve).
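A compact recursive sketch (the standard adaptive-Simpson pattern, not our course code verbatim):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Compare Simpson on [a, b] against Simpson on the two halves;
    accept (with the error-cancelling correction) where they agree,
    recurse where they don't."""
    def simpson(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = f(m)
        return m, fm, (b - a) * (fa + 4.0*fm + fb) / 6.0

    def recurse(a, fa, b, fb, whole, m, fm, tol):
        lm, flm, left = simpson(a, fa, m, fm)
        rm, frm, right = simpson(m, fm, b, fb)
        if abs(left + right - whole) < 15.0 * tol:   # halves agree: done
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, fa, m, fm, left, lm, flm, tol/2) +
                recurse(m, fm, b, fb, right, rm, frm, tol/2))

    fa, fb = f(a), f(b)
    m, fm, whole = simpson(a, fa, b, fb)
    return recurse(a, fa, b, fb, whole, m, fm, tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))   # ~ 2.0
# The recursion concentrates points where f misbehaves -- the same
# point set makes a good sampling for plotting a smooth curve.
```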