Linearly independent sets; bases
Summary
We're accustomed to writing vectors in terms of a set of fixed vectors: for example, in two-space we write every vector in terms of the vectors $\mathbf{e}_1 = (1,0)$ and $\mathbf{e}_2 = (0,1)$. Each vector has a unique representation in terms of these two vectors, which is important. This set of two vectors is called a basis of two-space: it contains enough vectors to write every vector of the space in terms of it, but not so many vectors that some vector has multiple representations. These are the two important properties: spanning the space, and avoiding any redundancy. That is, a basis is the smallest possible spanning set. It is also the largest possible set of linearly independent vectors: add any more, and you'd have dependence.
Linear independence: An indexed set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_p\}$ in $V$ is said to be linearly independent if the vector equation
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p = \mathbf{0}$$
has only the trivial solution ($c_1 = c_2 = \cdots = c_p = 0$). The set is linearly dependent if the equation has a nontrivial solution.
Example: #4, p. 243 (independence)
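As a quick generic illustration of the definition (not the textbook exercise), take $\mathbf{v}_1 = (1,0,1)$, $\mathbf{v}_2 = (0,1,1)$, and $\mathbf{v}_3 = (1,1,2)$ in $\mathbb{R}^3$. The vector equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$ amounts to the system
$$c_1 + c_3 = 0, \qquad c_2 + c_3 = 0, \qquad c_1 + c_2 + 2c_3 = 0.$$
The third equation is the sum of the first two, so $c_3$ is free; taking $c_3 = 1$ gives the nontrivial solution $c_1 = c_2 = -1$, $c_3 = 1$, and the set is linearly dependent. By contrast, $\{\mathbf{v}_1, \mathbf{v}_2\}$ by itself is linearly independent, since $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = (c_1, c_2, c_1 + c_2) = \mathbf{0}$ forces $c_1 = c_2 = 0$.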
Theorem 4: An indexed set of two or more vectors, $S = \{\mathbf{v}_1, \dots, \mathbf{v}_p\}$ with $\mathbf{v}_1 \neq \mathbf{0}$, is linearly dependent if and only if some $\mathbf{v}_j$ (with $j > 1$) is a linear combination of the preceding vectors $\mathbf{v}_1, \dots, \mathbf{v}_{j-1}$.
Example: #33, p. 245
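The dependent set from the illustration above also shows what Theorem 4 is saying: there, $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$, so some vector (namely $\mathbf{v}_3$, with $j = 3 > 1$) is a linear combination of the preceding vectors. Note that the theorem does not claim every vector is a combination of the ones before it: $\mathbf{v}_2$ is not a multiple of $\mathbf{v}_1$, and that's fine; dependence only requires that some $\mathbf{v}_j$ be such a combination.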
Basis: Let $H$ be a subspace of a vector space $V$. An indexed set of vectors $\mathcal{B} = \{\mathbf{b}_1, \dots, \mathbf{b}_p\}$ in $V$ is a basis for $H$ if
(i) $\mathcal{B}$ is a linearly independent set, and
(ii) the subspace spanned by $\mathcal{B}$ equals $H$; that is, $H = \operatorname{Span}\{\mathbf{b}_1, \dots, \mathbf{b}_p\}$.
Example: #4, p. 243 (basis)
Example: #34, p. 245
Example: The columns of the $n \times n$ identity matrix $I_n$ form a basis, called the standard basis for $\mathbb{R}^n$: $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$. In three-space these are simply the vectors $\mathbf{e}_1 = (1,0,0)$, $\mathbf{e}_2 = (0,1,0)$, and $\mathbf{e}_3 = (0,0,1)$.
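The standard basis is not the only basis, of course. As a small illustration (not from the text), the set $\{(1,1), (1,-1)\}$ is also a basis for $\mathbb{R}^2$: it is linearly independent, since $c_1(1,1) + c_2(1,-1) = (c_1 + c_2, c_1 - c_2) = (0,0)$ forces $c_1 = c_2 = 0$, and it spans $\mathbb{R}^2$, since any $(a,b)$ can be written as $\tfrac{a+b}{2}(1,1) + \tfrac{a-b}{2}(1,-1)$. Both conditions in the definition of a basis are satisfied.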
Theorem 5 (the spanning set theorem): Let $S = \{\mathbf{v}_1, \dots, \mathbf{v}_p\}$ be a set in $V$, and let $H = \operatorname{Span}\{\mathbf{v}_1, \dots, \mathbf{v}_p\}$.
a. If one of the vectors in $S$, say $\mathbf{v}_k$, is a linear combination of the remaining vectors in $S$, then the set formed from $S$ by removing $\mathbf{v}_k$ still spans $H$.
b. If $H \neq \{\mathbf{0}\}$, some subset of $S$ is a basis for $H$.
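To see the theorem at work (a generic sketch, not the textbook example), take the dependent set $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ from before, where $\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2$, and let $H = \operatorname{Span}\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. By part (a) we may throw away $\mathbf{v}_3$ without shrinking the span, so $H = \operatorname{Span}\{\mathbf{v}_1, \mathbf{v}_2\}$; and since $\{\mathbf{v}_1, \mathbf{v}_2\}$ is linearly independent, it is a basis for $H$, as part (b) promises. In general, one keeps discarding vectors that are combinations of the others until what remains is independent.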
Theorem 6: the pivot columns of a matrix A form a basis for Col A.
It turns out that elementary row operations on a matrix do not affect the linear dependence relations among its columns: the columns of the echelon form satisfy exactly the same dependence relations as the corresponding columns of the original matrix. Hence the pivot columns of $A$ are linearly independent, and every non-pivot column of $A$ is a linear combination of them. Be sure to take the pivot columns from the matrix $A$ itself, however, rather than from the reduced matrix: row operations generally change the column space, so the columns of the echelon form usually do not even lie in $\operatorname{Col} A$.
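Here is a small worked instance of Theorem 6 (again not the textbook's example), using the same three vectors as columns:
$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
The pivots fall in columns 1 and 2, so the first two columns of $A$ itself, $(1,0,1)$ and $(0,1,1)$, form a basis for $\operatorname{Col} A$. The corresponding columns of the echelon form do not: every column of the echelon form has third entry $0$, so those columns span a different plane than $\operatorname{Col} A$, which contains $(1,0,1)$.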
Example: #36, p. 245