"check-up" example: finding an eigenvector, in the 3x3 case (check
results of p. 114)
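As a quick numerical check on such a computation, here is a minimal sketch using numpy; the 3x3 matrix below is just an illustrative stand-in, not the example from p. 114.

```python
import numpy as np

# Hypothetical symmetric 3x3 matrix (a stand-in, not the matrix from p. 114).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Column i of V is an eigenvector belonging to eigenvalue w[i].
w, V = np.linalg.eig(A)

# Check the defining property A v = lambda v for the first eigenpair.
v = V[:, 0]
print(w[0])
print(np.allclose(A @ v, w[0] * v))   # True
```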
Markov chains: we use them to model a system that moves between states
according to transition probabilities and, at any time, is in exactly
one of the states.
Three properties (p. 122):
  1. the system is always in exactly one of the states;
  2. the probability of moving from state i to state j depends only on
     the current state i, not on how the system got there;
  3. conservation: the probabilities out of each state sum to 1.
X(t) is the probability distribution vector, which describes the
probability that the system is in each state at time t; it evolves as
X(t+1) = T X(t). The transition matrix T is a matrix of probabilities
whose column sums are 1: column j holds the probabilities of moving
out of state j.
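A minimal sketch of this iteration, using a made-up 3-state transition matrix (the states and probabilities are assumptions for illustration only):

```python
import numpy as np

# Hypothetical transition matrix: column j holds the probabilities of
# moving out of state j, so every column sums to 1.
T = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.6]])

x = np.array([1.0, 0.0, 0.0])    # X(0): the system starts in state 1

for t in range(20):
    x = T @ x                    # X(t+1) = T X(t)

print(x)          # distribution after 20 steps
print(x.sum())    # still 1 (up to roundoff), as the proof below shows
```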
Proof that, given an initial vector X(0) of probabilities
summing to 1, X(t) will always sum to 1.
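The key step of that argument, in the notation above: since the columns of T sum to 1,

```latex
\sum_i X_i(t+1) = \sum_i \sum_j T_{ij}\, X_j(t)
                = \sum_j X_j(t) \sum_i T_{ij}
                = \sum_j X_j(t) \cdot 1 = 1,
```

and an induction on t finishes the proof.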
Regular matrices and the search for steady-state solutions (p. 124).
Example illustrating that X(t) need not settle down to a steady state:
with the matrix below (which is not regular), X(t) simply oscillates
between X(0) and A X(0).

    A = [ 0  1 ]
        [ 1  0 ]
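A short numerical illustration of that behaviour (the starting vector is arbitrary):

```python
import numpy as np

# The swap matrix from the example above. It is not regular: its powers
# alternate between A and the identity, so no power has all entries positive.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

x = np.array([0.8, 0.2])      # an arbitrary X(0)
for t in range(6):
    print(t, x)               # X(t) oscillates between (0.8, 0.2) and (0.2, 0.8)
    x = A @ x

# The vector (0.5, 0.5) does satisfy A x = x, but X(t) started anywhere
# else never approaches it.
```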
Example: Markovian squirrels in Scotland (Project 3.4, p. 141).
Modelling step: computing the transition probabilities (see the sketch
below for one generic way to estimate them from observed counts).
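One generic way to carry out that modelling step is to count observed moves between states and normalize each column; a sketch with made-up counts (not the squirrel data of Project 3.4):

```python
import numpy as np

# Hypothetical counts: counts[i, j] = number of observed moves from
# state j to state i (invented numbers, not the data of Project 3.4).
counts = np.array([[30.0,  5.0,  2.0],
                   [ 8.0, 40.0,  6.0],
                   [ 2.0,  5.0, 22.0]])

# Dividing each column by its total turns raw counts into estimated
# transition probabilities with column sums equal to 1.
T = counts / counts.sum(axis=0)

print(T)
print(T.sum(axis=0))   # [1. 1. 1.]
```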
Problems of "inappropriate rounding" (p. 128)
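One way rounding can cause trouble (sketched under the assumption that the rounding in question is applied to the transition probabilities): rounded columns may no longer sum to exactly 1, so the total probability drifts as the chain is iterated.

```python
import numpy as np

# Exact column: three entries of 1/3, summing to 1.
col = np.array([1/3, 1/3, 1/3])

# Round each entry to two decimals; the column sum drops to 0.99.
col_rounded = np.round(col, 2)
print(col_rounded, col_rounded.sum())    # [0.33 0.33 0.33] 0.99

# Iterating with a matrix built from such columns no longer conserves
# total probability.
T = np.column_stack([col_rounded, col_rounded, col_rounded])
x = np.array([1.0, 0.0, 0.0])
for t in range(10):
    x = T @ x
print(x.sum())                           # about 0.99**10, roughly 0.90
```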
Website maintained by Andy Long.
Comments appreciated.