Overview of Chapters 1 and 2

Abstract:

Your test will resemble the problems from your homework assignments. You will probably have about 8 short, equally weighted questions (one every six minutes!), one of which will involve several true/false questions (like the self-tests at the end of each chapter - answers are at the end of the book).

Section 1.1

We are introduced to statements, logical connectives, and wffs.

An implication is an argument, or theorem, which we may seek to prove. It is false if and only if the hypothesis (antecedent) is true while the conclusion (consequent) is false. The truth table for this logical connective is very important for understanding much of what follows!
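For reference, the truth table for the conditional P → Q (the only false line is the one with a true antecedent and a false consequent):

  P   Q   P → Q
  T   T     T
  T   F     F
  F   T     T
  F   F     T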

Truth tables can be used to show that a wff is a tautology (a statement which is always true).

TautologyTest can prove tautologies of the form P → Q, which it does by contradiction: assume both P and Q', and then break down each until all statement letters have truth values. If some statement letter must be both true and false (a contradiction), then P ∧ Q' can never be true, and the implication is true in every case - a tautology.
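As a quick way to check small examples (a brute-force truth table in Python, not the book's TautologyTest procedure; the function names are my own):

from itertools import product

def is_tautology(wff, letters):
    """Brute force: evaluate the wff under every assignment of truth values."""
    for values in product([True, False], repeat=len(letters)):
        env = dict(zip(letters, values))
        if not wff(env):
            return False   # a falsifying assignment: not a tautology
    return True            # true on every line of the truth table

# Example: (A ∧ (A → B)) → B, the tautology behind modus ponens.
implies = lambda p, q: (not p) or q
wff = lambda e: implies(e["A"] and implies(e["A"], e["B"]), e["B"])
print(is_tautology(wff, ["A", "B"]))   # True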

Section 1.2

Propositional logic allows us to test arguments

P1 ∧ P2 ∧ ... ∧ Pn → Q

to see if they're valid (tautologies).

We create a proof sequence in which each step is a hypothesis or follows from earlier steps by a derivation rule (e.g. modus ponens). There are equivalence rules (such as De Morgan's laws), which may be used in either direction, and there are inference rules (e.g. modus tollens), which only operate in one direction.

The deduction rule helps us prove implications: the antecedent joins the list of hypotheses, and we simply prove the consequent of the implication.
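For instance (an illustrative proof, using Gersting-style rule abbreviations), to prove (A → B) ∧ (B → C) → (A → C), the deduction method lets us take A as an extra hypothesis and derive C:

  1. A → B   hyp
  2. B → C   hyp
  3. A       hyp (deduction method)
  4. B       1, 3, mp
  5. C       2, 4, mp

Since C follows, so does A → C, and the original implication is proved.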

One seemingly difficult task is converting English arguments into wffs.

Section 1.3

We add a variable to statements to create predicate wffs. We then consider statements like ``for all integers...'' or ``there is an integer such that...'': that is, we quantify the predicate, using the universal quantifier ∀ and the existential quantifier ∃.

By introducing a variable we require a (non-empty) domain over which it ranges, called the domain of interpretation.

Quantifiers have a scope, which indicates the part of a wff to which the quantifier applies.

Once again, translating English arguments into wffs is one of the tough challenges.
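One illustrative translation (the predicate symbols are invented for the example): with D(x) meaning ``x is a dog,'' Y(y) meaning ``y is a day,'' and H(x, y) meaning ``x has y,'' the sentence ``every dog has his day'' becomes

(∀x)(D(x) → (∃y)(Y(y) ∧ H(x, y)))

Notice the usual pattern: ∀ tends to pair with →, while ∃ tends to pair with ∧.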

Section 1.4

We use predicate logic to prove predicate wffs, using new rules such as universal and existential instantiation and generalization (as well as all the old familiar propositional logic rules). Table 1.17 outlines the restrictions on their use.

Big Idea: strip off the quantifiers, use derivation rules on the wffs, and put quantifiers back on as necessary.
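A small example of that strategy (illustrative; ui = universal instantiation, ug = universal generalization): to prove (∀x)P(x) ∧ (∀x)Q(x) → (∀x)(P(x) ∧ Q(x)),

  1. (∀x)P(x)            hyp
  2. (∀x)Q(x)            hyp
  3. P(x)                1, ui
  4. Q(x)                2, ui
  5. P(x) ∧ Q(x)         3, 4, con
  6. (∀x)(P(x) ∧ Q(x))   5, ug

The quantifiers come off in steps 3 and 4, ordinary propositional reasoning happens in step 5, and the quantifier goes back on in step 6 (legal because x is not free in any hypothesis and was not introduced by ei).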

A few rules of thumb:

Section 1.5

Prolog, a declarative (rather than procedural) language, is introduced.

Prolog facts and ``rules'' - both of which are wffs - are read into a database of information. Then we can begin to prove (or disprove) theorems - arguments - of the form

P1 ∧ P2 ∧ ... ∧ Pn → Q

by turning such arguments into Horn clauses:

(P1 ∧ P2 ∧ ... ∧ Pn)' ∨ Q

or

P1' ∨ P2' ∨ ... ∨ Pn' ∨ Q

Then proofs proceed by disjunctive syllogism (and exhaustion): from P and P' ∨ Q we may conclude Q. Prolog sifts through its database in the order in which it was entered, testing all cases (fortunately there are only a finite number of them!).

Recursive definitions pop up, represented by two rules: a base case and an inductive case. Prolog can easily fall into infinite loops because of its ``depth-first'' strategy and its lack of ``memory''.
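Here is a minimal sketch of the flavor of this process in Python (a toy, not real Prolog: everything is propositional, with no variables or unification, and the facts, rule names, and depth cutoff are my own invention):

# Facts (atoms taken as true) and rules (body -> head), in database order.
facts = {"parent_ann_bob", "parent_bob_cal"}
rules = [
    (["parent_ann_bob"], "ancestor_ann_bob"),                      # base-case rule
    (["parent_bob_cal"], "ancestor_bob_cal"),                      # base-case rule
    (["parent_ann_bob", "ancestor_bob_cal"], "ancestor_ann_cal"),  # recursive-style rule
]

def prove(goal, depth=25):
    """Depth-first backward chaining, trying facts first and then rules in
    the order they were entered.  The depth cutoff is a crude guard against
    the infinite descent a real Prolog risks when a recursive rule is
    consulted before its base case."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    return any(head == goal and all(prove(sub, depth - 1) for sub in body)
               for body, head in rules)

print(prove("ancestor_ann_cal"))   # True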

Section 2.1

We look at a variety of proof techniques (proof by exhaustion, direct proof, proof by contraposition, and proof by contradiction), and one ``disproof'' technique: counterexample.

Table: Summary of useful proof techniques, from Gersting, p. 91.

Section 2.2

Induction is a proof technique which is useful for demonstrating a property for an infinite ladder of propositions (think of our property as being indexed by n, as in P(n)). Induction begins with a base case (or an anchor) and then proceeds via an inductive case (often established by proving P(k) → P(k+1)).

There are two different (but equivalent) principles of induction, the first and second. The second appears to assume more than the first: the inductive hypothesis in the second principle is that the property is true for all cases up to and including the kth case.
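A standard worked example of the first principle: to show that 1 + 2 + ... + n = n(n+1)/2 for all n ≥ 1,

Base case: for n = 1 the left side is 1 and the right side is 1(2)/2 = 1.

Inductive step: assume 1 + 2 + ... + k = k(k+1)/2. Then

1 + 2 + ... + k + (k+1) = k(k+1)/2 + (k+1) = (k+1)(k+2)/2,

which is the claimed formula for n = k+1.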

Section 2.4

Recursion in section 2.4 looks very much like induction: the idea is that we have a base case (or cases), and from there we generate additional cases. Unlike induction, the set of things we generate may not be easily indexed by the integers; for example, the set of all wffs is generated recursively from statement letters by applying connectives, rather than being laid out as a sequence indexed by n.

In this section we see how to solve one particular type of recurrence relation: the linear, first-order, constant-coefficient recurrence relation, of the form S(n) = cS(n-1) + g(n).

Once we have the formula for the closed-form solution, we needn't ever solve another linear, first-order, constant-coefficient recurrence relation from scratch: we can just invoke the formula. But that means you'll have to memorize it. Alternatively, you can use the ``expand, guess, and check'' method that our author proposes.
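In my notation (which may not match the book's exactly): if S(1) is given and S(n) = cS(n-1) + g(n) for n ≥ 2, the usual closed form is S(n) = c^(n-1) S(1) + Σ_{i=2..n} c^(n-i) g(i). A few lines of Python comparing the formula against direct computation:

def recurrence(n, c, g, s1):
    """Compute S(n) directly from S(1) and S(n) = c*S(n-1) + g(n)."""
    s = s1
    for k in range(2, n + 1):
        s = c * s + g(k)
    return s

def closed_form(n, c, g, s1):
    """The closed form: S(n) = c**(n-1)*S(1) + sum of c**(n-i)*g(i), i = 2..n."""
    return c ** (n - 1) * s1 + sum(c ** (n - i) * g(i) for i in range(2, n + 1))

# Example: S(1) = 5 and S(n) = 2*S(n-1) + 3, so c = 2 and g is constantly 3.
g = lambda i: 3
for n in range(1, 9):
    assert recurrence(n, 2, g, 5) == closed_form(n, 2, g, 5)
print("direct computation and the closed form agree")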

Section 2.5

In the analysis of algorithms we are interested in efficiency, and we count operations in order to compare competing algorithms. We can sometimes count operations directly, but we may have to set up a recurrence relation to do the counting.

A different variety of recurrence relation occurs in the analysis of algorithms, when we consider ``divide and conquer'' algorithms (such as BinarySearch).

By changing variables, we can get a closed form solution for the number of operations for these ``divide and conquer'' algorithms.
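An illustrative instance (the comparison count for BinarySearch on n items, with n a power of 2):

C(1) = 1,   C(n) = C(n/2) + 1.

Substituting n = 2^m and writing T(m) = C(2^m) turns this into a first-order relation,

T(0) = 1,   T(m) = T(m-1) + 1,

whose solution is T(m) = m + 1; changing variables back gives C(n) = log2(n) + 1.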



LONG ANDREW E
Wed Oct 2 12:56:44 EDT 2002