Section Summary: 8.7

  1. Definitions

  2. Theorems

    Error Bounds

    Upper bounds might be determined algebraically, estimated graphically, or derived from max/min considerations. Note that K is different in the different formulas (it bounds a different derivative of f; see the bounds below).
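
    In the standard formulation, with K a bound on |f''(x)| on [a,b] for the trapezoidal and midpoint rules, and a bound on |f^{(4)}(x)| on [a,b] for Simpson's rule, the error bounds are

      |E_T| \le \frac{K(b-a)^3}{12n^2}, \qquad
      |E_M| \le \frac{K(b-a)^3}{24n^2}, \qquad
      |E_S| \le \frac{K(b-a)^5}{180n^4}.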

  3. Properties/Tricks/Hints/Etc.

    Recall the definition of the Riemann sum: the sum

      \sum_{i=1}^{n} f(x_i^*)\,\Delta x

    where \Delta x = (b-a)/n and x_i^* lies in the i-th subinterval [x_{i-1}, x_i], named after Bernhard Riemann (1826-1866), a student of Gauss.
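
    As a minimal computational sketch (the helper riemann_sum and the example integrand are illustrative choices, not from the text), such a sum can be evaluated directly for any choice of sample points:

      import math

      def riemann_sum(f, a, b, n, rule="left"):
          # Approximate the integral of f on [a, b] using n subintervals,
          # sampling f at the left endpoint, right endpoint, or midpoint
          # of each subinterval according to `rule`.
          dx = (b - a) / n
          total = 0.0
          for i in range(n):
              x_left = a + i * dx
              if rule == "left":
                  x_star = x_left
              elif rule == "right":
                  x_star = x_left + dx
              else:  # "midpoint"
                  x_star = x_left + dx / 2
              total += f(x_star) * dx
          return total

      # Example: the integral of sin(x) on [0, pi] is exactly 2.
      print(riemann_sum(math.sin, 0.0, math.pi, 100, "midpoint"))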

    In general

    1. These approximation methods get better as the subintervals get smaller (that is, as n gets larger).
    2. The errors of the trapezoidal and midpoint methods are opposite in sign, and differ by roughly a factor of two (midpoint is better, in general).
    3. Simpson's rule requires an even number of subintervals, and can be written as a weighted sum of the trapezoidal and midpoint methods (see the sketch after this list):

      S_{2n} = \frac{1}{3}\,T_n + \frac{2}{3}\,M_n
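
    A small self-contained sketch (the helper names and the test integrand e^x are illustrative choices, not from the text) that checks this weighted-sum identity numerically:

      import math

      def trapezoid(f, a, b, n):
          # Average of the left and right rectangle rules.
          dx = (b - a) / n
          return dx * (f(a) / 2 + sum(f(a + i * dx) for i in range(1, n)) + f(b) / 2)

      def midpoint(f, a, b, n):
          # Rectangle rule sampled at the midpoint of each subinterval.
          dx = (b - a) / n
          return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

      def simpson(f, a, b, n):
          # Composite Simpson's rule; n must be even (weights 1,4,2,4,...,2,4,1).
          dx = (b - a) / n
          xs = [a + i * dx for i in range(n + 1)]
          return dx / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(x)
                              for i, x in enumerate(xs))

      f, a, b, n = math.exp, 0.0, 1.0, 10
      T, M = trapezoid(f, a, b, n), midpoint(f, a, b, n)
      S = simpson(f, a, b, 2 * n)        # Simpson on 2n subintervals
      print(S, T / 3 + 2 * M / 3)        # the two values agree (up to roundoff)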

  4. Summary

    There are two major motivations for approximate integration: the integrand f may have no convenient antiderivative, or f may be known only through data pairs (x_i, y_i) rather than by a formula.

    Strategy (in either case): construct a function g which approximates f, or fits the data pairs (x_i, y_i), and hope that

      \int_a^b f(x)\,dx \approx \int_a^b g(x)\,dx

    The choices for g are usually step functions (Left, Right, and Midpoint Rectangle rules), or continuous but non-differentiable functions (Trapezoidal and Simpson's rules). Other (better!) rules use continuous and smooth functions.

    The trapezoidal rule is simply the average of the left and right rectangle rules, a primitive improvement on them both. It is also equivalent to adding up the areas of the trapezoids created by connecting the points on the curve at the left and right endpoints of each subinterval and dropping vertical sides down to the x-axis (see the formula below).
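
    In symbols, writing L_n and R_n for the left and right rectangle rules, with \Delta x = (b-a)/n and x_i = a + i\,\Delta x,

      T_n = \frac{L_n + R_n}{2}
          = \Delta x \left[ \tfrac{1}{2} f(x_0) + f(x_1) + \cdots + f(x_{n-1}) + \tfrac{1}{2} f(x_n) \right].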

    The midpoint rule is built on the hope that sampling at the midpoint of each subinterval avoids extremes at the left and right endpoints. It is another simple rectangle rule, but generally better than the trapezoidal rule.
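
    In the same notation, the midpoint rule is

      M_n = \Delta x \left[ f\!\left(\tfrac{x_0 + x_1}{2}\right) + f\!\left(\tfrac{x_1 + x_2}{2}\right) + \cdots + f\!\left(\tfrac{x_{n-1} + x_n}{2}\right) \right].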

    Simpson's rule is derived using "best-fitting" parabolas, rather than straight line segments. It is considered a very good method in general.
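
    With an even number n of subintervals, the standard form of Simpson's rule is

      S_n = \frac{\Delta x}{3} \left[ f(x_0) + 4 f(x_1) + 2 f(x_2) + 4 f(x_3) + \cdots + 2 f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) \right].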

    All the error bounds rely on having a bound on a higher derivative of the function. This may not be easy to obtain.



