Today:
First of all, show your work. Also leave space between problems. Some of your solutions are all crammed together, which makes it more difficult for the grader to follow. And never forget the first rule of homework: make it easy on the grader. You want your grader in a good mood!
A serious attempt is worth most of the points. I hate to see "failure to start".
A few of you gave answers that failed because....
Victoria presented her solution in a nice way, as did Ari.
Vicheth provided a family of answers that I want to look at....
(a) "Prove" is a high bar. I should read your work and be convinced that this implementation will return for all real numbers a and b.
Vicheth has a nice proof, which I hope he will show.
(b) "motivate geometrically" is a lower bar: but hopefully by examining your drawing I will see why it must work.
Question: How would you say this implementation in words? "The max of two real numbers is....". Ben took a stab at it. What do you have, Ben?
Question: Having obtained the max in words, how do we describe the min?
Question: Can we say "Without Loss of Generality" (WLOG), and assume that a ≥ b?
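My notes don't record the implementation itself, so here is a minimal sketch assuming it is the classic absolute-value formula max(a, b) = (a + b + |a - b|)/2 (that formula is my assumption, not a quote from the assignment):

```python
# Assuming the implementation under discussion is the classic
# absolute-value formula (my guess, not a quote from the assignment):
def my_max(a, b):
    return (a + b + abs(a - b)) / 2   # the average, plus half the distance apart

def my_min(a, b):
    return (a + b - abs(a - b)) / 2   # the average, minus half the distance apart

assert my_max(3.0, -7.0) == 3.0
assert my_min(3.0, -7.0) == -7.0
```

In words: the max of two reals is their average plus half their distance apart, and the min is the average minus half their distance apart.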
It appears that we have a triangle inequality deficit! It's usually visualized in terms of vectors, e.g.

$$\|\vec{u} + \vec{v}\| \le \|\vec{u}\| + \|\vec{v}\|$$

but it also works for real numbers:

$$|a + b| \le |a| + |b|$$

and there's a "reverse" triangle inequality:

$$\big|\,|a| - |b|\,\big| \le |a - b|$$

Put them together and you get this important result:

$$\big|\,|a| - |b|\,\big| \le |a \pm b| \le |a| + |b|$$
So how can the triangle inequality be used to demonstrate the stated result? (Since that was the hint, it seems a good place to start!)
Andres used both inequalities successfully.
One can also use the single triangle inequality and one of my favorite tricks in the book: adding the appropriate form of zero, and Eden can illustrate that.
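Here is one plausible rendering of that adding-zero trick (my reconstruction, so Eden's version may differ). To bound $|a| - |b|$, add the appropriate form of zero, $-b + b$, inside the absolute value:

$$|a| = |(a - b) + b| \le |a - b| + |b| \quad\Longrightarrow\quad |a| - |b| \le |a - b|,$$

and swapping the roles of $a$ and $b$ gives $|b| - |a| \le |a - b|$; together these yield the reverse triangle inequality.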
As an alternative, you could do case analysis (e.g. a ≥ b versus a < b). Drew did a nice job along these lines. But the cases require assumptions about the signs of a and b as well, and it's a little tedious. (I'm sure that Drew would agree!)
There were some illegal moves. For example, some folks just dropped absolute values, or made up false properties (e.g. "the absolute value of a sum is the sum of the absolute values" -- nope!).
Guesswork and intuition are very, very important; but ultimately, we need to justify assertions (e.g. that the function will achieve its maximum size on an endpoint).
Let's see how to do a little analysis using that triangle inequality on this problem....
I will make you do a little more work for that, however. See the assignment page.
"The main idea is that the computer works with a finite subset of the reals known as machine numbers." (p. 33)
I might have started this chapter with Figure 2.3, on page 41:
It gives a picture of machine numbers on a "toy binary computer". Section 2.1 makes a point about the need to consider bases other than 10; among these, base 2 is probably the most important.
Question: why does base 2 figure so prominently in computer science?
There's a beautiful example here of how base 3 is used for tagging hogs:
Questions:
I've asked you to write a base converter for homework. Do you know how to convert from one base to another?
Let's try a few. What's ...?
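For what it's worth, here's a minimal sketch of the integer-part conversion (one possible approach, not the assigned solution): repeatedly divide by the target base and collect the remainders.

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, base):
    """Convert a nonnegative integer n to a digit string in the given
    base (2 through 16) by repeated division, collecting remainders."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(13, 2))   # '1101'
print(int("1101", 2))   # 13 -- Python's int() converts back the other way
```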
In this section the authors describe the manner in which numbers are stored in the computer. They focus on "floating-point numbers", which are represented by three parts: a sign, a mantissa (the string of significant digits), and an exponent.
Definition 2.1: A real number is said to be an n-digit number if it can be expressed as $\pm\, d_1.d_2 d_3 \cdots d_n \times 10^e$, where the $d_i$ are digits.
Question: They then ask "What's an n-bit number?" (p. 39) What do you tell them?
Let's imagine that our machine has base-10 architecture, with n = 4 and e ∈ [-9, 9]. Then we know exactly which numbers may be represented:

| | Negative | Positive |
| --- | --- | --- |
| Largest magnitude numbers | -9.999×10^9 | +9.999×10^9 |
| Smallest magnitude numbers | -1.000×10^-9 | +1.000×10^-9 |
Failure to include the denormalized numbers (that don't have a leading 1) leads to a gap around zero in this figure:
On the downside, if we allowed leading digits of zero, then there would be redundant representations for many numbers (e.g. +1.000×10^-9 = +0.100×10^-8).
Question: what do all the machine numbers look like if we restrict a machine to an even smaller architecture (some small choice of base, digit count, and exponent range)?
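To make that concrete, here's a little enumeration sketch; the parameters (base 10, n = 2, e ∈ [-1, 1]) are my own toy choices, not from the text:

```python
from fractions import Fraction

def toy_machine_numbers(beta=10, n=2, emin=-1, emax=1):
    """All normalized numbers +/- d1.d2...dn x beta^e (with d1 != 0), plus zero."""
    nums = {Fraction(0)}
    for e in range(emin, emax + 1):
        for m in range(beta ** (n - 1), beta ** n):   # digit strings with d1 != 0
            x = Fraction(m, beta ** (n - 1)) * Fraction(beta) ** e
            nums.update({x, -x})
    return sorted(nums)

ms = toy_machine_numbers()
pos = [x for x in ms if x > 0]
print("smallest positive:", float(pos[0]))                # 0.1  -- the gap around zero
print("spacing just above it:", float(pos[1] - pos[0]))   # 0.01 -- ten times finer
```

Note the punch line: the hole between 0 and the smallest normalized number is ten times wider than the spacing of its neighbors, which is exactly the gap that the denormalized numbers fill in.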
Now, in reality, computations are usually done in base 2, and the IEEE standards for single and double precision are:

| | Single | Double |
| --- | --- | --- |
| Base | 2 | 2 |
| n | 24 | 53 |
| e | [-126, 127] | [-1022, 1023] |
Question: in each case, how many exponents are there in the exponent range? Of what significance is that number?
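A quick sanity check on that count (the "two reserved patterns" here are the standard IEEE 754 story: an all-zeros exponent field for zero and denormals, all-ones for infinities and NaNs):

```python
# Number of exponents in each range, compared to the bit budget:
single = 127 - (-126) + 1      # 254  = 2**8  - 2 (8 exponent bits, 2 patterns reserved)
double = 1023 - (-1022) + 1    # 2046 = 2**11 - 2 (11 exponent bits, 2 patterns reserved)
print(single, 2**8 - 2)        # 254 254
print(double, 2**11 - 2)       # 2046 2046
```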
Question: let's see if we can make sense of this system with this particular example:
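The particular example didn't survive in my notes; as a stand-in, here is one way to pull a double apart into its sign, exponent, and mantissa fields in Python (the choice of 0.1 is mine):

```python
import struct

x = 0.1  # any double will do
bits = struct.unpack(">Q", struct.pack(">d", x))[0]   # the raw 64 bits
sign = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023              # subtract the bias
mantissa = bits & ((1 << 52) - 1)                     # 52 stored bits (plus a hidden leading 1)
print(sign, exponent, hex(mantissa))                  # 0 -4 0x999999999999a
```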
Our authors describe the difference between precision and accuracy at this point; I think that it's best done graphically:
"The purpose of rounding in computation is to turn any real number into a machine number, preferably the nearest one." (p. 43)
But there are different ways to do it. You're no doubt familiar with rounding (but how do you handle ties -- that is, how do we round 19.5 to an integer?). The authors suggest several strategies (p. 43):
"Round-to-even" because if nth digit is even, do nothing; add 1 if odd, making it even. All nth digits become even.
Same as Rule 1, except when exactly equal to 500000....
then round UP (away from zero).
Inferior to Rule 1, as ever-so-slightly biased away from zero.
Whatever comes after nth, just drop it.
Inferior to Rule 1, as slightly biased toward zero.
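Python's decimal module implements these strategies directly (round-to-even, a.k.a. "banker's rounding", is its default); a quick demonstration on the ties:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN

one = Decimal("1")
print(Decimal("19.5").quantize(one, rounding=ROUND_HALF_EVEN))  # 20 (tie -> even)
print(Decimal("18.5").quantize(one, rounding=ROUND_HALF_EVEN))  # 18 (tie -> even)
print(Decimal("18.5").quantize(one, rounding=ROUND_HALF_UP))    # 19 (tie -> away from zero)
print(Decimal("18.5").quantize(one, rounding=ROUND_DOWN))       # 18 (chopping)
```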
The biases are illustrated nicely in Figure 2.5, p. 45:
The chopping approximations are all under-estimates (for positive numbers); the rounding methods give a more balanced mix of overs and unders; but rounding half away from zero retains a slight bias toward overestimates, which rounding-to-even balances out.
There is some vocabulary here with which we should be familiar: sometimes rounding results in a number too large in magnitude for the machine to represent (overflow), or too small (underflow).
Definitions:
The authors make the case that

$$\mathrm{fl}(x) = x(1 + \epsilon), \qquad |\epsilon| \le \epsilon_{\text{mach}},$$

where $\epsilon_{\text{mach}}$ is the unit roundoff (for rounding to nearest, $\frac{1}{2}\beta^{1-n}$).
By "basic operations" the authors mean using standard arithmetic operations on machine numbers to produce machine numbers. There will be errors.
Let a and b be machine numbers, and let $\odot$ represent any of the standard arithmetic operations ($+$, $-$, $\times$, $\div$). Then

$$\mathrm{fl}(a \odot b) = (a \odot b)(1 + \epsilon), \qquad |\epsilon| \le \epsilon_{\text{mach}}.$$
I.e., to compute one of these binary operations with machine numbers, you do the operation exactly, and then convert the result to a machine number with float (fl). We already know what this will cost: a relative roundoff error of at most $\epsilon_{\text{mach}}$.
As we compute more complicated functions, however, with one unary or binary operation after another, the errors continue to accumulate (as seen, for example, in section 2.4.5, p. 54).
Examples:

(a) 0.6668 + 0.3334
(b) 1000. - 0.05001
(c) 2.000 × 0.6667
(d) 25.00 / 16.00
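These look like four-significant-digit exercises (my inference from the operands). Here's a decimal-module sketch that does each operation exactly and then rounds the result back to four digits, which is precisely the fl(a ⊙ b) model above:

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

ctx = Context(prec=4, rounding=ROUND_HALF_EVEN)   # a 4-digit decimal "machine"

def fl(x):
    return ctx.plus(x)   # unary plus re-rounds x to the context's 4 digits

print(fl(Decimal("0.6668") + Decimal("0.3334")))  # (a) 1.000  (exact: 1.0002)
print(fl(Decimal("1000") - Decimal("0.05001")))   # (b) 999.9  (exact: 999.94999)
print(fl(Decimal("2.000") * Decimal("0.6667")))   # (c) 1.333  (exact: 1.3334)
print(fl(Decimal("25.00") / Decimal("16.00")))    # (d) 1.562  (exact: 1.5625; tie -> even)
```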
This section features several interesting examples of functions, some of them tremendously important, which are also extraordinarily sensitive to errors.
One of the main points of the section is that a "solution" to a problem may be technically correct (analytically exact), and yet poorly designed to produce good results in general.
An excellent example is the quadratic formula. Many of you have memorized it as

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
We can imagine situations, however, for which this calculation may be dangerous. What do you notice?
Question: an old trick from your past may be used to improve things: can you think of how to change this formula to make it less sensitive?
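Presumably the old trick is rationalizing the numerator (multiplying by the conjugate), which sidesteps subtracting nearly equal numbers. A sketch, with a test case of my own choosing where $b^2 \gg 4ac$:

```python
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    # Compute the large-magnitude root first (no cancellation there),
    # then recover the other from the product of the roots: x1 * x2 = c/a.
    q = -(b + math.copysign(d, b)) / 2
    return q / a, c / q

a, b, c = 1.0, 1e8, 1.0         # true roots: about -1e8 and -1e-8
print(roots_naive(a, b, c))     # the small root suffers catastrophic cancellation
print(roots_stable(a, b, c))    # the small root comes out ~ -1e-8, as it should
```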
If not, and we must use 1.4, then another user would assume that we only know the 4 in the tenths place to within five units, and you can see what happens to the actual uncertainty -- it expands grossly, to the interval (0.9,1.9).
The speed of light in a vacuum.
When is addition dangerous? When the exact sum is much smaller than the operands, $|a + b| \ll |a| + |b|$, and, in particular, when $a \approx -b$, i.e. when the two addends nearly cancel.
Symmetrically, subtraction will suffer the problem when a and b are approximately equal.
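A quick numerical illustration of that cancellation (the values are mine, chosen for effect):

```python
# Subtracting nearly equal numbers: the leading digits cancel, and the
# surviving digits carry all of the accumulated roundoff.
x = 1e-15
computed = (1.0 + x) - 1.0     # should be exactly x
print(computed)                # 1.1102230246251565e-15
print(abs(computed - x) / x)   # ~0.11: an 11% relative error from a single operation
```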