- I'm not going to spend much time on the analytic stuff. You've all
had a course in DEs, so hopefully that part makes sense to you. But you
may be a bit rusty. In that case, you might check out Andy's
"DEs in a Day" page.
Our authors start (p. 316) with
\[
Y'(t)=f(t,Y(t))
\]
with initial condition
\[
Y(a)=\eta
\]
I may be making a few references to that page as we go along. Let's start with this
one (about what we're actually solving, or attempting to solve).
The upshot is expressed well by the authors: the numerical
solution values do not lie on the true solution. In fact, they
do not even lie on the same solution.
- Let's pick up then on p. 321. Euler's method is the simplest, sanest
starting point, and it can be derived in two different ways:
- By integrating (p. 321), or
- As a tangent line approximation (Figure 8.10,
p. 322) -- which, by the way, is the same as using a
Taylor series.
- The method as a recursive algorithm is highlighted on
p. 323 (box equation). Note that the step-size is also a
function of the step index and can vary. So you
might start thinking about adaptive step-size control right away....
- Let's have a look at the code on p. 323.
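The textbook's code on p. 323 isn't reproduced here, but a minimal sketch of the recursion in Python looks like this (my own function names, not the book's):

```python
def euler(f, a, b, eta, n):
    """Fixed-step Euler on [a, b] with n steps and Y(a) = eta.
    Returns the lists of t-values and approximate Y-values."""
    h = (b - a) / n
    ts, ys = [a], [eta]
    for k in range(n):
        # Advance y_{k+1} = y_k + h * f(t_k, y_k).
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(a + (k + 1) * h)
    return ts, ys
```

For a variable step-size, replace the fixed `h` with a per-step `h_k`; the recursion itself doesn't change.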
- Let's do a few steps of Euler by hand, with the
differential equation $y'(x)=y(x)$, with initial
condition $y(0)=1$. Use five steps, on the interval
$[0,1]$.
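Those five steps, sketched in Python (I take $y(0)=1$; starting from $y(0)=0$ every Euler step would stay at zero and there would be nothing to see):

```python
# Five Euler steps for y' = y on [0, 1], so h = 0.2.
h, t, y = 0.2, 0.0, 1.0
for k in range(5):
    y = y + h * y          # y_{k+1} = y_k + h * f(t_k, y_k), with f(t, y) = y
    t = t + h
    print(f"t = {t:.1f}  y = {y:.5f}")
# After five steps y = 1.2**5 = 2.48832, versus the exact e^1 = 2.71828...
```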
- Then we'll flip over to the code and see how it
looks... You'll have to adapt the first example, which
is described on p. 323. Let's have a look at that one
first.
- An example for the non-autonomous equation
- We can solve this one analytically by separation of variables.
- Let's compare some numerical solutions.
- How should you feel about your numerical solution,
if you didn't have the exact solution to compare them
to?
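The book's non-autonomous example isn't reproduced above, so as a stand-in (my choice, not the book's equation) take $y'=-2ty$, $y(0)=1$, which separation of variables solves as $y=e^{-t^2}$. Comparing Euler against the exact value at $t=1$:

```python
import math

def euler(f, a, b, eta, n):
    # Fixed-step Euler: y_{k+1} = y_k + h * f(t_k, y_k).
    h = (b - a) / n
    t, y = a, eta
    for k in range(n):
        y = y + h * f(t, y)
        t = a + (k + 1) * h
    return y

# Stand-in non-autonomous right-hand side (my choice, not the book's):
f = lambda t, y: -2.0 * t * y
exact = math.exp(-1.0)          # exact solution e^{-t^2} evaluated at t = 1
for n in (10, 100, 1000):
    approx = euler(f, 0.0, 1.0, 1.0, n)
    print(f"n = {n:5d}  Euler = {approx:.6f}  error = {abs(approx - exact):.2e}")
```

Without the exact solution in hand, the usual sanity check is exactly this kind of refinement: keep shrinking $h$ and watch whether the answers settle down.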
- The obvious question is this: "How should we choose the step
size(s)?"
- Now, as usual, we want to talk about errors (a function of
step-size). Let's face it: we're clearly making errors.
Start by taking a peek at Figure 8.13, p. 325. It illustrates
everything that we want to consider.
Our authors distinguish three types of error; this figure shows
two of them:
- local discretization error, and
- global discretization error.
In addition we need to be concerned about rounding error.
I want to prove a result which gives a bound on the error we're
making (ignoring rounding error).
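The bound I have in mind is the standard one. Assuming $f$ is Lipschitz in $Y$ with constant $L$ and $|Y''(t)|\le M$ on $[a,b]$, fixed-step Euler satisfies
\[
|Y(t_k)-y_k| \le \frac{Mh}{2L}\left(e^{L(t_k-a)}-1\right).
\]
(Here $y_k$ is the Euler approximation at $t_k$; the notation is mine, not necessarily the book's.)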
Compare this to the result at the bottom of p. 326. They claim
that the global discretization error is proportional to $h$.
(We're in the ballpark!)
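A quick numerical check of that proportionality, on the test problem $y'=y$, $y(0)=1$ (my choice of example): halving $h$ should roughly halve the error at $t=1$.

```python
import math

def euler_final(n):
    # n Euler steps for y' = y, y(0) = 1 on [0, 1]; returns the value at t = 1.
    h, y = 1.0 / n, 1.0
    for _ in range(n):
        y = y + h * y
    return y

prev = None
for n in (10, 20, 40, 80, 160):
    err = abs(euler_final(n) - math.e)
    ratio = "" if prev is None else f"  ratio = {prev / err:.2f}"
    print(f"n = {n:4d}  error = {err:.2e}{ratio}")
    prev = err
# The ratios hover near 2: halving h roughly halves the error, i.e. error ~ C*h.
```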
- However, making $h$ small may not be a good idea once we add in
the rounding error.
The neat thing is that we can arrive at an optimal h, provided
we can bound the second derivative and the rounding errors.
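A sketch of that trade-off (a heuristic, not the book's exact analysis): if one step incurs truncation error at most $Mh^2/2$ and rounding error at most $\delta$, then over the $(b-a)/h$ steps the accumulated error behaves roughly like
\[
E(h) \approx (b-a)\left(\frac{Mh}{2}+\frac{\delta}{h}\right),
\]
which is minimized where $E'(h)=0$, i.e. at
\[
h^{*}=\sqrt{\frac{2\delta}{M}}.
\]
Below $h^{*}$, rounding error dominates and the total error grows again.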