Today:
When you randomly choose problems, this occasionally occurs: all the problems turn out to be "the same", which doesn't seem right. On the other hand, it allows me to highlight a really important issue: in this case, your apparent unwillingness to use theorems and properties of functions.
The property you need to use is this one:
The theorem you need is this one:
But if you look at it from another perspective, this sequence is monotonically decreasing and bounded below -- hence it must be convergent.
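The fact presumably being invoked here is the Monotone Convergence Theorem, which in this setting reads:

```latex
\textbf{Theorem (Monotone Convergence).} If the sequence $(a_n)$ is
monotonically decreasing and bounded below, then $\lim_{n\to\infty} a_n$
exists (and equals $\inf_n a_n$); similarly, a monotonically increasing
sequence that is bounded above converges to its supremum.
```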
Say this:
An infinite number of mathematicians walk into a bar. The first one orders a beer. The second orders half a beer. The third orders a quarter of a beer. The fourth orders an eighth, and so on.
The bartender says "You're all idiots", and pours two beers.
We should get
\[
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2.
\]
Such a sum is called an infinite series.
we define a new sequence:
\[
s_n = \sum_{k=1}^{n} a_k.
\]
These are called partial sums. Notice that there are two indices running around, and that the limit (as $n \to \infty$) refers to the summation's upper limit $n$, not to the index $k$ of the sequence.
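To make the two indices concrete, here's a minimal numerical sketch in Python (the helper name `partial_sums` is my own, not anything from the lecture):

```python
def partial_sums(terms):
    """Yield the running partial sums s_1, s_2, ... of a list of terms."""
    total = 0.0
    for a in terms:
        total += a
        yield total

# The "bartender" series: a_k = 1/2**(k-1), i.e. 1, 1/2, 1/4, ...
beers = [1 / 2 ** (k - 1) for k in range(1, 21)]
sums = list(partial_sums(beers))
print(sums[:4])   # [1.0, 1.5, 1.75, 1.875]
print(sums[-1])   # already very close to 2
```

The index $k$ runs over the terms inside each sum, while $n$ (the position in `sums`) is the summation's upper limit.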
If
\[
\lim_{n\to\infty} s_n = s,
\]
then we say that
\[
\sum_{k=1}^{\infty} a_k
\]
exists, and is equal to $s$. This justifies our probability distributions of the form
\[
p(k) = e^{-\lambda}\,\frac{\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \ldots
\]
Because, in general,
\[
e^{\lambda} = \sum_{k=0}^{\infty} \frac{\lambda^{k}}{k!}.
\]
This is an astonishing fact: we can think of an exponential function as a polynomial with an infinite number of terms. In fact, we can think of lots of functions this way (e.g. sines and cosines!). This is where we're headed in this class....
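Here's a quick numerical check of that claim, truncating the exponential's "infinite polynomial" after a few terms (a sketch; `exp_taylor` is a made-up name):

```python
import math

def exp_taylor(x, n_terms):
    """Approximate e**x by the partial sum of x**n / n! for n < n_terms."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

for n in (2, 5, 10, 15):
    print(n, exp_taylor(1.0, n))
print("exact:", math.exp(1.0))  # the truncations converge to this
```

With only 15 terms the truncated polynomial already agrees with `math.exp(1.0)` to about ten decimal places.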
More formally,
Now geometric series are sufficiently important that it's useful to include that special case:
\[
\sum_{n=0}^{\infty} a r^{n} = \frac{a}{1-r} \quad \text{if } |r| < 1,
\]
while the series diverges if $|r| \ge 1$.
There's a really fun proof of the first part above, based on a trick. Let's have a look at it, when $|r| < 1$:
Multiply both sides by r, subtract, and solve for S.
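Written out (a sketch of the standard argument, with first term $a$ and ratio $r$):

```latex
\begin{align*}
S      &= a + ar + ar^2 + ar^3 + \cdots \\
rS     &= \phantom{a + {}} ar + ar^2 + ar^3 + \cdots \\
S - rS &= a
\quad\Longrightarrow\quad
S = \frac{a}{1-r}.
\end{align*}
```

(The rigorous version applies the same cancellation to the partial sum $S_n$, giving $S_n = a(1-r^n)/(1-r)$, and then lets $n \to \infty$; this is where the hypothesis $|r| < 1$ is actually used.)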
This theorem (the $n$th-term test for divergence) tells us that if a sequence isn't asymptotic to the $x$-axis -- that is, if the terms don't tend to 0 -- then we can forget about its sequence of partial sums converging.
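Beware that the test only runs one way: terms tending to 0 does not guarantee convergence. The classic counterexample is the harmonic series, sketched numerically here:

```python
import math

def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The terms 1/k tend to 0, yet the partial sums keep growing
# (roughly like ln(n)), so the series diverges.
for n in (10, 1000, 100_000):
    print(n, harmonic(n), math.log(n))
```

The partial sums track $\ln(n)$ (plus a constant), so they exceed any bound eventually, just very slowly.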
Finally, series behave the way we'd hope:
\[
\sum (a_n + b_n) = \sum a_n + \sum b_n, \qquad \sum c\,a_n = c \sum a_n,
\]
provided the series on the right-hand sides converge.
Theorem 3 (the $p$-series test: $\sum 1/n^{p}$ converges if and only if $p > 1$) is just a corollary of Theorem 2 (the integral test), where the integrals are those of the obvious power functions:
\[
\int_{1}^{\infty} \frac{dx}{x^{p}}.
\]
Let $a_n = f(n)$, where $f$ is a positive, decreasing function. If $\sum a_n$ converges to $s$ by the integral test, and we define the remainder by $R_n = s - s_n$, then
\[
\int_{n+1}^{\infty} f(x)\,dx \;\le\; R_n \;\le\; \int_{n}^{\infty} f(x)\,dx
\]
(this gives us a bound on the error we're making in the calculation of a series). This is useful, for example, in the calculation of digits of $\pi$ (now, you might ask "and what's the use of that?!").
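For instance, with $f(x) = 1/x^2$ the bounds become $1/(n+1) \le R_n \le 1/n$. Here's a quick check in Python, using the known value $\sum_{k \ge 1} 1/k^2 = \pi^2/6$ (the variable names are my own):

```python
import math

n = 1000
s_n = sum(1.0 / k ** 2 for k in range(1, n + 1))  # partial sum of 1/k^2
exact = math.pi ** 2 / 6                          # true value of the series
R_n = exact - s_n                                 # actual remainder

# Integral bounds: int_{n+1}^inf dx/x^2 = 1/(n+1),  int_n^inf dx/x^2 = 1/n
print(1 / (n + 1) <= R_n <= 1 / n)                # True
```

So after only 1000 terms we know the error is trapped between $1/1001$ and $1/1000$.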
This theorem (the limit comparison test) says that, in the long run, one series has terms which are simply a constant times the terms of the other series.
"In the long run" means that we only really need to worry about the "tails" of series: we can throw away any finite number of terms when deciding issues of convergence.
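As a hypothetical illustration (my own example, not one from the lecture): comparing $a_n = (3n^2+2)/(n^4+n)$ with $b_n = 1/n^2$, the ratio $a_n/b_n$ tends to the constant 3, so both series converge or diverge together -- and since $\sum 1/n^2$ converges, so does $\sum a_n$.

```python
def a(n):
    return (3 * n ** 2 + 2) / (n ** 4 + n)

def b(n):
    return 1.0 / n ** 2

# The ratio a(n)/b(n) settles down to the constant 3.
for n in (10, 100, 10_000):
    print(n, a(n) / b(n))
```

This is exactly the "long run" idea: for large $n$, the terms $a_n$ behave like $3/n^2$, and the early terms don't matter.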