I may be making a few references to that page as we go along. Let's start with this one (about what we're actually solving, or attempting to solve).
The upshot is expressed well in the authors' quote that "...the numerical solution values $y_{n-1}$ and $y_n$ do not lie on the true solution. In fact, they do not even lie on the same solution."
Start by taking a peek at Figure 8.13, p. 325. It illustrates everything that we want to consider.
Our author distinguishes three types of error; in this figure there are two:
I want to prove a result which gives a bound on the error we're making (ignoring rounding error).
Compare this to the result at the bottom of p. 326. They claim that the global discretization error is proportional to $(b-a)h$. (We're in the ballpark!)
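This linear-in-$h$ behavior is easy to check empirically. Here is a quick sketch (the test problem $y'=y$, $y(0)=1$ on $[0,1]$ is my choice, not the text's): halving $h$ roughly halves the error at the endpoint, so the ratio error/$h$ stays roughly constant.

```python
import math

def euler_error(h):
    # Euler's method for y' = y, y(0) = 1 on [0, 1]; the exact answer is e
    n = round(1.0 / h)
    u = 1.0
    for _ in range(n):
        u += h * u
    return abs(u - math.e)

for h in (0.1, 0.05, 0.025):
    print(h, euler_error(h), euler_error(h) / h)  # last column is roughly constant
```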
\[ |y(t_i)-u_i| \le \frac{1}{L}\left(\frac{hM}{2}+\frac{\delta}{h}\right) \left[e^{L(ih)}-1\right] + |\delta_0|e^{L(ih)} \]
The neat thing is that we can arrive at an optimal $h$ to reduce the error at the end of the interval, when $i=N$, to below some given small error target $\epsilon$:
\[ |y(b)-u_N| \le \frac{1}{L}\left(\frac{hM}{2}+\frac{\delta}{h}\right) \left[e^{L(b-a)}-1\right] + |\delta_0|e^{L(b-a)} < \epsilon \]
provided we can bound the second derivative and the rounding errors:
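Since only the factor $\frac{hM}{2}+\frac{\delta}{h}$ depends on $h$, minimizing it over $h$ gives the optimal step size $h^*=\sqrt{2\delta/M}$. A minimal sketch (the numerical values for $M$, $\delta$, $\delta_0$, $L$, and $b-a$ below are illustrative, not from the text):

```python
import math

def error_bound(h, M, delta, delta0, L, T):
    # The bound on |y(b) - u_N| from the theorem, with T = b - a
    return (1.0 / L) * (h * M / 2 + delta / h) * (math.exp(L * T) - 1) \
        + abs(delta0) * math.exp(L * T)

# illustrative values (assumptions, not taken from the text)
M, delta, delta0, L, T = 10.0, 1e-12, 1e-12, 1.0, 1.0

h_opt = math.sqrt(2 * delta / M)  # minimizes h*M/2 + delta/h
print(h_opt, error_bound(h_opt, M, delta, delta0, L, T))

# the bound worsens for h either much larger or much smaller than h_opt:
print(error_bound(100 * h_opt, M, delta, delta0, L, T))
print(error_bound(h_opt / 100, M, delta, delta0, L, T))
```

Note that the bound blows up as $h\to 0$: taking smaller steps eventually lets accumulated rounding error dominate.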
Questions:
In particular, we generalize the formula on p. 323,
\[ u_{n+1} = u_n + h\,f(t_n,u_n), \]
to the more general formula
\[ u_{n+1} = u_n + h\,\phi(t_n,u_n;h). \]
Note the parameter $h$ in the $\phi$ function: $\phi$ will generally be a function of $h$. We think of it as an improved derivative calculation -- but it's still based only on information at the $n^{th}$ time step. So these are "one-step" methods -- they take into account only the preceding time step.
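A minimal sketch of this one-step template, with $\phi$ passed in as a function of $(t,u,h)$ (the names here are mine, not the text's); taking $\phi(t,u;h)=f(t,u)$ recovers Euler's method:

```python
def one_step_solve(phi, a, b, y0, N):
    """Apply u_{n+1} = u_n + h * phi(t_n, u_n, h) on [a, b] with N steps."""
    h = (b - a) / N
    t, u = a, y0
    us = [u]
    for _ in range(N):
        u = u + h * phi(t, u, h)
        t = t + h
        us.append(u)
    return us

# Euler's method is the special case phi(t, u, h) = f(t, u):
f = lambda t, u: u   # test problem y' = y, y(0) = 1; exact solution e^t
approx = one_step_solve(lambda t, u, h: f(t, u), 0.0, 1.0, 1.0, 100)
print(approx[-1])    # approaches e = 2.71828... as N grows
```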
And so on! That's the idea of the higher-order Taylor methods.
The name "Taylor-2" suggests that we can simply derive these methods from Taylor's theorem, and that's what I'd like to show now.
Let's have a look at a derivation with a slightly different emphasis from that in our text (our authors want to avoid partial derivatives, but I don't!:).
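Using partial derivatives, Taylor-2 takes $\phi(t,u;h)=f+\frac{h}{2}\left(\frac{\partial f}{\partial t}+\frac{\partial f}{\partial u}f\right)$, which comes from differentiating $y'=f(t,y)$ once via the chain rule. A sketch with the partials supplied by hand (the function names are mine):

```python
import math

def taylor2(f, ft, fu, a, b, y0, N):
    """Taylor-2: u_{n+1} = u_n + h*[f + (h/2)*(f_t + f_u * f)] at (t_n, u_n)."""
    h = (b - a) / N
    t, u = a, y0
    for _ in range(N):
        fv = f(t, u)
        u = u + h * (fv + (h / 2) * (ft(t, u) + fu(t, u) * fv))
        t = t + h
    return u

# Test problem y' = y: f = u, f_t = 0, f_u = 1; exact solution e^t
u_end = taylor2(lambda t, u: u, lambda t, u: 0.0, lambda t, u: 1.0,
                0.0, 1.0, 1.0, 100)
print(abs(u_end - math.e))  # second order: far smaller than Euler's error at the same h
```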
Examples:
This code is not optimized for approximation, but shows the dependence of each successive step on $h$ (values are defined recursively).
If you increase $n$ you will eventually exceed the recursion limit of Mathematica.
A small change in how the $w$ values are computed makes it efficient for calculation.
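The same pitfall and fix can be illustrated outside Mathematica (a Python sketch, with my own names; the text's code is Mathematica): a literal transcription of the recursive definition blows the interpreter's recursion limit for large $n$, while one small change -- accumulating the values in a loop -- computes the same numbers with no recursion at all.

```python
h = 0.01
f = lambda t, w: w   # test problem y' = y (my choice, for illustration)

def w_rec(n):
    # direct transcription of the recursive definition;
    # w_rec(10000) would exceed Python's default recursion limit
    if n == 0:
        return 1.0
    prev = w_rec(n - 1)
    return prev + h * f((n - 1) * h, prev)

def w_iter(n):
    # the "small change": accumulate in a loop instead of recursing
    w = 1.0
    for k in range(n):
        w = w + h * f(k * h, w)
    return w

print(w_rec(100), w_iter(100))  # same value either way
```

(In Mathematica the idiomatic fix is slightly different -- memoizing with `w[n_] := w[n] = ...` -- but the point is the same: don't recompute or re-descend through all earlier steps.)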