Overview of Chapters 1 and 2

Abstract:

Your test will resemble the problems from your homework assignments. You will probably have about seven equally weighted questions (one every ten minutes!), one of which will consist of several true/false questions (like the self-tests at the end of each chapter; answers are at the end of the book).

Section 1.1

We are introduced to statements, logical connectives, and wffs.

An implication is an argument, or theorem, which we may seek to prove. It is false if and only if the hypothesis (antecedent) is true while the conclusion (consequent) is false. The truth table for this logical connective is very important for understanding much of what follows!

Truth tables can prove tautologies (statements which are always true).
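As a quick illustration (in Python, not from the text), we can check a wff against every row of its truth table; the modus ponens wff (P ∧ (P → Q)) → Q comes out true in every row, so it is a tautology:

```python
from itertools import product

def implies(p, q):
    # p -> q is false only when p is true and q is false
    return (not p) or q

def is_tautology(wff, letters=2):
    # Try every assignment of truth values to the statement letters:
    # a tautology is true in every row of its truth table.
    return all(wff(*values) for values in product([True, False], repeat=letters))

# Modus ponens, (P and (P -> Q)) -> Q, is a tautology...
print(is_tautology(lambda p, q: implies(p and implies(p, q), q)))  # True
# ...while a bare implication P -> Q is not.
print(is_tautology(lambda p, q: implies(p, q)))  # False
```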

TautologyTest can prove tautologies of the form P → Q, which it does by contradiction: assume both P and Q′, and then break down each until all statement letters have truth values. If a statement letter comes out both true and false (a contradiction), then P ∧ Q′ is false, and the implication is true: a tautology.

Section 1.2

Propositional logic allows us to test arguments of the form

P1 ∧ P2 ∧ … ∧ Pn → Q

to see if they're valid (tautologies).

Create a proof sequence using hypotheses and derivation rules (e.g. modus ponens). There are equivalence rules (such as De Morgan's laws), which work in both directions, and inference rules (e.g. modus tollens), which operate in only one direction.

The deduction rule helps us prove implications: the antecedent joins the list of hypotheses, and we simply prove the consequent of the implication.

One seemingly difficult task is converting English arguments into wffs.

Section 1.3

We add a variable to statements to create predicate wffs. We then consider statements like "for all integers..." or "there is an integer such that...": that is, we quantify the predicate, using the universal quantifier ∀ and the existential quantifier ∃.

By introducing a variable we require a (non-empty) domain, called the domain of interpretation.
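A small Python sketch (not from the text) of quantifying a predicate over a domain of interpretation; the predicate and domain here are made up for illustration:

```python
# A predicate wff P(x) becomes a function of x; quantifying it means
# checking it over a chosen (non-empty) domain of interpretation.
def P(x):
    return x * x >= 0

domain = range(-10, 11)                  # a small, finite domain of integers

forall_P = all(P(x) for x in domain)     # "for all x, P(x)"
exists_neg = any(x < 0 for x in domain)  # "there exists an x with x < 0"

print(forall_P, exists_neg)  # True True
```

Note that the same predicate can change truth value under a different domain of interpretation, which is why the domain must always be specified.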

Quantifiers have a scope, which indicates the part of a wff to which the quantifier applies.

Once again, translating English arguments into wffs is one of the tough challenges.

Section 1.4

We use predicate logic to prove predicate wffs, including new rules such as instantiation and generalization (as well as all the old familiar propositional logic rules). Table 1.17 outlines limitations on use.

Big Idea: strip off the quantifiers, use derivation rules on the wffs, and put quantifiers back on as necessary.

A few rules of thumb:

Section 1.5

Prolog, a declarative rather than procedural language, is introduced.

Prolog facts and "rules" (both of which are wffs) are read into a database of information. Then we can begin to prove (or disprove) theorems (arguments) of the form

P1 ∧ P2 ∧ … ∧ Pn → Q

by turning such arguments into Horn clauses:

P1 ∧ P2 ∧ … ∧ Pn → Q

or

¬P1 ∨ ¬P2 ∨ … ∨ ¬Pn ∨ Q

Then proofs proceed by disjunctive syllogism: from P and P′ ∨ Q, conclude Q. Prolog sifts through its database in the order in which it was entered, testing all cases.
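A toy Python sketch of this process (the facts and rules here are invented for illustration, in the spirit of Gersting's rat/rodent examples): known facts are repeatedly combined with Horn-clause rules until nothing new can be concluded.

```python
# Facts and Horn-clause rules, with new facts derived in the spirit of
# "from P and P' ∨ Q, conclude Q".  All atoms below are hypothetical.
facts = {"rat(tom)", "mouse(jerry)"}
rules = [
    # (body, head): if every atom in body is a known fact, conclude head
    ({"rat(tom)"}, "rodent(tom)"),
    ({"mouse(jerry)"}, "rodent(jerry)"),
]

changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print("rodent(tom)" in facts)  # True
```

This forward-chaining loop differs from Prolog's own backward, depth-first search, but it derives the same consequences from the same Horn clauses.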

Recursive definitions pop up, represented by two rules: a base case and an inductive case. Prolog can easily fall into infinite loops because of its "depth-first" strategy and its lack of a "memory".

Section 1.6

Program verification occurs by testing, and by proof of correctness. We considered two kinds of code: assignments, and conditional statements. In both cases, a pre-condition and post-condition surround the code in what's known as a Hoare triple:

{Q} s {R}

(think assertions in C++ for Q and R).

Proving the code correct means showing that

(Q is true before s executes) → (R is true after s terminates)

(that this implication is true).

In each case, we receive rules for these statement types which guarantee correctness. For example, the assignment rule: if the code is the assignment x = e, and the precondition Q is the postcondition R with e substituted for x (Q = R(x ← e)),

then the Hoare triple is valid.
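A minimal Python sketch of the assignment rule (the specific assignment and postcondition here are made-up examples): for x = x + 1 with postcondition R: x > 0, substituting x + 1 for x in R gives the precondition Q: x + 1 > 0, and we can spot-check the resulting triple over a range of starting values.

```python
# Spot-check the Hoare triple {x + 1 > 0} x = x + 1 {x > 0},
# whose precondition comes from the assignment rule Q = R(x <- x + 1).
def triple_holds(x):
    if not (x + 1 > 0):   # precondition Q fails: the triple is vacuously satisfied
        return True
    x = x + 1             # the code s
    return x > 0          # postcondition R

print(all(triple_holds(x) for x in range(-100, 100)))  # True
```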

Section 2.1

We look at a variety of proof techniques, including exhaustion, by contradiction, by contraposition, direct; and one ``disproof'' technique: counterexample.

   Technique        Approach
   Direct proof     assume the hypothesis, deduce the conclusion
   Contraposition   assume the negation of the conclusion, deduce the negation of the hypothesis
   Contradiction    assume the hypothesis and the negation of the conclusion, derive a contradiction
   Exhaustion       verify every case in a finite domain
   Counterexample   disprove by exhibiting one case where the claim fails

Table: Summary of useful proof techniques, from Gersting, p. 91.

Section 2.2

Induction is a proof technique which is useful for demonstrating a property for an infinite ladder of propositions (think of our property as being indexed by n, as in P(n)). Induction begins with a base case (or anchor) and then proceeds via an inductive case (often P(k) → P(k+1)).
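As a numerical sanity check (in Python, not a substitute for the proof), take the classic P(n): 1 + 2 + … + n = n(n+1)/2 and spot-check the base case and the inductive step P(k) → P(k+1):

```python
# P(n): 1 + 2 + ... + n = n(n + 1)/2
def lhs(n):
    return sum(range(1, n + 1))

def rhs(n):
    return n * (n + 1) // 2

assert lhs(1) == rhs(1)            # base case P(1)
for k in range(1, 200):            # inductive step: P(k) gives P(k+1),
    # since lhs(k + 1) = lhs(k) + (k + 1) and the formula keeps pace
    assert lhs(k + 1) == lhs(k) + (k + 1) == rhs(k + 1)
print("ok")
```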

There are two different (but equivalent) principles of induction, the first and second. The second appears to assume more than the first: the inductive hypothesis in the second principle is that the property is true for all cases up to and including the k-th case.

Both principles of induction are equivalent to the principle of well-ordering, which asserts that every non-empty set of positive integers has a smallest element.

Section 2.3

In section 2.3 we studied the proof of correctness rule for a loop, which requires an inductive proof: the idea is that to establish a Hoare triple of the form {Q} s {Q ∧ B′}, where s is a loop of the form

while condition B do
    P
end while

it is enough to prove {Q ∧ B} P {Q}: that is, that the loop body P preserves the invariant Q.

The trick is often finding Q, the loop invariant. It should be true before, during, and after execution of the loop. We need to ask ourselves: what essential property holds at all three stages of the loop?
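A small Python sketch (the particular loop and invariant are illustrative, in the spirit of the multiplication-by-repeated-addition example): computing x * y by repeated addition, with the invariant Q: total == x * i asserted before the loop, after each pass, and at exit, where it combines with the exit condition to give the postcondition.

```python
# Invariant Q: total == x * i holds before the loop, after each pass,
# and at exit (where i == y), yielding the postcondition total == x * y.
x, y = 7, 5
total, i = 0, 0
assert total == x * i             # Q true before the loop
while i < y:
    total += x
    i += 1
    assert total == x * i         # Q preserved by each pass
assert i == y and total == x * y  # Q plus the exit condition give the goal
print(total)  # 35
```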

Section 2.4

Recursion in section 2.4 looks very much like induction: the idea is that we have a base case (or cases), and from there we generate additional cases. Unlike induction, the set of things we generate may not be easily indexed by the integers.

In this section we see how to solve one particular recurrence relation: the linear, first-order, constant-coefficient recurrence relation. (This was the big, ugly proof from last time, which Alan grew tremendously impatient with! ;)

Once we have this formula, we needn't ever solve another linear, first-order, constant-coefficient recurrence relation from scratch: we can just invoke the formula. This is our quest, the holy grail!
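A Python spot-check of that formula (the constants and g here are sample choices, not from the text): for S(n) = c·S(n−1) + g(n) with S(1) given, the closed form is S(n) = c^(n−1)·S(1) + Σ_{i=2..n} c^(n−i)·g(i), and it should agree with direct iteration.

```python
# Compare direct iteration of S(n) = c*S(n-1) + g(n) against the
# closed-form solution S(n) = c**(n-1)*S(1) + sum of c**(n-i)*g(i), i = 2..n.
c, S1 = 3, 2                    # sample constants (assumptions)
g = lambda n: 2 * n + 1         # a sample g(n) (assumption)

def iterate(n):
    s = S1
    for k in range(2, n + 1):
        s = c * s + g(k)
    return s

def closed_form(n):
    return c ** (n - 1) * S1 + sum(c ** (n - i) * g(i) for i in range(2, n + 1))

assert all(iterate(n) == closed_form(n) for n in range(1, 15))
print("ok")
```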

Section 2.5

In the analysis of algorithms we are interested in efficiency, and will count operations in order to compare competing algorithms. We can sometimes count operations directly, but may resort to recurrence relations to do the counting.

A different variety of recurrence relation occurs in the analysis of algorithms, when we consider ``divide and conquer'' algorithms (such as BinarySearch).

By changing variables, we can get a closed form solution for the number of operations for these ``divide and conquer'' algorithms.
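A Python spot-check of this idea for binary search (the recurrence shown is the standard comparison count; treat the exact form as an assumption if your text's version differs): C(1) = 1 and C(n) = C(n/2) + 1, and substituting n = 2^m turns this into a first-order recurrence in m, giving the closed form C(n) = log2(n) + 1 for n a power of 2.

```python
from math import log2

# Comparison count for BinarySearch on n items (n a power of 2):
# C(1) = 1, C(n) = C(n // 2) + 1, with closed form C(n) = log2(n) + 1.
def C(n):
    return 1 if n == 1 else C(n // 2) + 1

for m in range(0, 15):
    n = 2 ** m
    assert C(n) == int(log2(n)) + 1
print("ok")
```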



LONG ANDREW E
Thu Feb 14 21:05:44 EST 2002