Today:
Theorem 3 is just a corollary of Theorem 2, where the integrands are the obvious power functions $f(x) = 1/x^p$:
Let $a_n = f(n)$, where $f$ is a positive, decreasing function. If $\sum a_n$ converges by the integral test, and we define the remainder by $R_n = s - s_n$, then
$$\int_{n+1}^{\infty} f(x)\,dx \;\le\; R_n \;\le\; \int_{n}^{\infty} f(x)\,dx$$
(this gives us a bound on the error we're making in the calculation of a series). This is useful, for example, in the calculation of digits of $\pi$ (now, you might ask "and what's the use of that?!").
We can use this remainder inequality to get a bound on the true value $s$ of the series, just by adding $s_n$ to each part of the inequality:
$$s_n + \int_{n+1}^{\infty} f(x)\,dx \;\le\; s \;\le\; s_n + \int_{n}^{\infty} f(x)\,dx$$
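A quick sketch of this bound in Python (the choice of series here, $\sum 1/k^2$, is my own illustration, not from the notes): for $f(x) = 1/x^2$ the tail integrals work out to $1/(n+1)$ and $1/n$, so the partial sum plus those tails traps the true value $\pi^2/6$.

```python
import math

# Bound the true value of sum(1/k^2) using the integral-test remainder.
# For f(x) = 1/x^2, the tail integrals are 1/(n+1) and 1/n, so
#   s_n + 1/(n+1) <= s <= s_n + 1/n.
n = 100
s_n = sum(1.0 / k**2 for k in range(1, n + 1))
lower = s_n + 1.0 / (n + 1)
upper = s_n + 1.0 / n

true_value = math.pi**2 / 6  # Euler: sum(1/k^2) = pi^2/6
print(f"{lower:.6f} <= {true_value:.6f} <= {upper:.6f}")
```

Notice how tight this is: with only 100 terms the two bounds differ by $1/n - 1/(n+1) = 1/(n(n+1))$, about $10^{-4}$, far better than the raw partial sum alone.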
This theorem (the limit comparison test) says that, in the long run, one series has terms which are essentially a constant times the terms of the other: if $\lim_{n\to\infty} a_n/b_n = c$ with $0 < c < \infty$, then $\sum a_n$ and $\sum b_n$ either both converge or both diverge.
"The long run" means that we only really need to worry about the "tails" of series: we can throw away any finite number of terms when deciding convergence.
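Here is a small numerical sketch of the "constant times" idea (the particular pair of series is my own example, not from the notes): comparing $a_n = 1/(n^2+1)$ with $b_n = 1/n^2$, the ratio $a_n/b_n = n^2/(n^2+1)$ heads to $1$, so the tails behave proportionally.

```python
# Limit comparison in action: a_n = 1/(n^2 + 1) vs. b_n = 1/n^2.
# The ratio a_n / b_n = n^2 / (n^2 + 1) tends to 1, so in the long run
# the two series have essentially proportional terms.
ratios = []
for n in [10, 100, 1000, 10000]:
    a = 1.0 / (n**2 + 1)
    b = 1.0 / n**2
    ratios.append(a / b)
    print(n, a / b)
```

Since $\sum 1/n^2$ converges (p-series with $p = 2$), the limit comparison test tells us $\sum 1/(n^2+1)$ converges too.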
That being said, a theorem below does give us some help in determining the value of (or at least bounds on the value of) the series of a type called an "alternating" series. In particular, we want to look at alternating series like the alternating harmonic series:
$$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$$
So it turns out that if the terms are decreasing in absolute value and converging to 0, then the alternating series converges to a limit.
Now we usually write our alternating series in the following way, illustrated by Leibniz's test: we write the series as $\sum (-1)^{n-1} b_n$, where we assume that the $b_n$ are positive, and the factor $(-1)^{n-1}$ takes care of the "alternation". (Our textbook uses $b_n$ for the positive part of the term.) Leibniz's test then says: if $b_{n+1} \le b_n$ for all $n$ and $\lim_{n\to\infty} b_n = 0$, then $\sum (-1)^{n-1} b_n$ converges. Furthermore, the remainder satisfies $|R_n| \le b_{n+1}$.
Our textbook calls this the "Alternating Series Test" (p. 751). The "Furthermore" part our textbook calls the "Alternating Series Estimation Theorem" (p. 754).
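The estimation theorem is easy to check numerically. A sketch in Python (using the alternating harmonic series, whose sum is $\ln 2$, as my own worked example): the error of the $n$-th partial sum never exceeds the first omitted term $b_{n+1}$.

```python
import math

# Alternating harmonic series: 1 - 1/2 + 1/3 - ... = ln 2.
# The Alternating Series Estimation Theorem bounds the error of the
# n-th partial sum by the first omitted term: |s - s_n| <= b_(n+1).
n = 50
s_n = sum((-1) ** (k - 1) / k for k in range(1, n + 1))
error = abs(math.log(2) - s_n)
bound = 1.0 / (n + 1)  # b_(n+1), the first omitted term
print(f"s_{n} = {s_n:.6f}, error = {error:.2e}, bound = {bound:.2e}")
```

In fact the actual error here is roughly half the bound, which is typical: the partial sums straddle the true value, overshooting and undershooting on alternate steps.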
We need to show that the terms of the series satisfy the Leibniz test:
Identify the terms $b_n$, and decide whether they satisfy the Leibniz test.
Remember that for convergence, the conditions of the test need be true only eventually: convergence is all about the tail, not the head of the sequence.
For this one, we need the "Alternating Series Estimation Theorem" (p. 754): if $s$ is the sum of an alternating series satisfying the Leibniz test, then $|R_n| = |s - s_n| \le b_{n+1}$.
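A typical use of the theorem is to choose $n$ *before* summing: a sketch in Python, with a series of my own choosing, $\sum_{k \ge 0} (-1)^k/k!$, whose sum is $1/e$. We just need the first omitted term $b_{n+1} = 1/(n+1)!$ to fall below the desired tolerance.

```python
import math

# Pick n in advance so the partial sum of sum((-1)^k / k!) = 1/e is
# within tol of the true value: by the Alternating Series Estimation
# Theorem it suffices that b_(n+1) = 1/(n+1)! <= tol.
tol = 1e-6
n = 0
while 1.0 / math.factorial(n + 1) > tol:
    n += 1
s_n = sum((-1) ** k / math.factorial(k) for k in range(n + 1))
print(f"n = {n}, s_n = {s_n:.8f}, 1/e = {1 / math.e:.8f}")
```

Because factorials grow so fast, only ten terms ($n = 9$) already guarantee six-decimal accuracy; contrast this with the alternating harmonic series, where $b_{n+1} = 1/(n+1)$ would demand a million terms for the same tolerance.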