Today:
We consider an initial-value problem for which we want a solution on $[0,1]$.
Before we do that, however, a brief reminder that Euler's method can be used very generally: for systems of ODEs, and for higher-order ODEs (by first rewriting them as first-order systems):
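As a minimal sketch of the systems case (the test problem $y''=-y$ and the names here are illustrative assumptions, not from the notes): rewrite the higher-order equation as a first-order system, then apply the usual Euler update to the vector of unknowns.

```
(* Euler's method on a system: y'' == -y becomes {y1' == y2, y2' == -y1}. *)
f[t_, {y1_, y2_}] := {y2, -y1};

eulerSystem[f_, t0_, y0_, h_, n_] := Module[{t = t0, y = y0},
  Do[y = y + h f[t, y]; t = t + h, {n}];  (* the same update, now on vectors *)
  y]

eulerSystem[f, 0., {1., 0.}, 0.001, 1000]  (* approximates {Cos[1], -Sin[1]} *)
```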
\[ |y(t_i)-u_i| \le \frac{1}{L}\left(\frac{hM}{2}+\frac{\delta}{h}\right) \left[e^{L(ih)}-1\right] + |\delta_0|e^{L(ih)} \]
The neat thing is that we can arrive at an optimal $h$ to reduce the error at the end of the interval, when $i=N$, to below some given small error target $\epsilon$:
\[ |y(b)-u_N| \le \frac{1}{L}\left(\frac{hM}{2}+\frac{\delta}{h}\right) \left[e^{L(b-a)}-1\right] + |\delta_0|e^{L(b-a)} < \epsilon \]
provided we can bound the second derivative and the rounding errors: $|y''(t)| \le M$ on the interval, and $|\delta_i| \le \delta$ for each per-step rounding error (with $\delta_0$ the rounding error in the initial value).
For our machine, we have perhaps $\delta=10^{-17}$, and for this problem we have $M=e$. So our expectation is that $h=\sqrt{\frac{2 \times 10^{-17}}{e}}\approx 2.712\times10^{-9}$.
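Where does that $h$ come from? We minimize the bracketed coefficient over $h$:
\[ E(h) = \frac{hM}{2}+\frac{\delta}{h}, \qquad E'(h) = \frac{M}{2}-\frac{\delta}{h^2} = 0 \quad\Longrightarrow\quad h = \sqrt{\frac{2\delta}{M}}, \]
which with $\delta=10^{-17}$ and $M=e$ gives the value above.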
In particular, we generalize the formula on p. 323,
\[ u_{n+1} = u_{n} + h\,f(t_{n},u_{n}), \]
to the more general one-step formula
\[ u_{n+1} = u_{n} + h\,\phi(t_{n},u_{n};h). \]
Note the parameter $h$ in the $\phi$ function: $\phi$ will generally be a function of $h$. We think of it as an improved derivative calculation -- but it's based only on information at the $n^{th}$ time step. So these are "one-step" methods -- they take into account only the preceding time step.
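To make the role of $\phi$ concrete, here is a minimal Mathematica sketch of a generic one-step driver (the names are illustrative, not from the course notebook); taking $\phi = f$ recovers Euler's method.

```
(* A generic one-step method: phi[t, y, h] is the increment function. *)
oneStep[phi_, {a_, b_}, y0_, n_] := Module[{h = (b - a)/n, t = a, y = y0},
  Do[y = y + h phi[t, y, h]; t = t + h, {n}];
  y]

(* Euler's method as the special case phi = f, here with f[t, y] = y: *)
oneStep[Function[{t, y, h}, y], {0., 1.}, 1., 100]  (* (1 + 1/100)^100, near E *)
```

Returning to the derivation: expanding the true solution with Taylor's theorem,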
\[ U(t_{n+1}) = U(t_{n})+ h U'(t_{n}) + \frac{h^2}{2}U''(t_{n})+ \frac{h^3}{3!}U'''(\xi_{n}) \]
and we throw away the $O(h^3)$ stuff (that's the local truncation error). Then we figure out a way to write that second derivative....
And so on! That's the idea of the higher-order Taylor methods.
The name "Taylor-2" suggests that we can derive these methods directly from Taylor's theorem, and that's what we saw last time. We simply include the second-derivative info and obtain
\[ U''(t_n) = \frac{\partial f}{\partial t}\bigg\rvert_{t_n,y_n}+ \frac{\partial f}{\partial y}\bigg\rvert_{t_n,y_n}f(t_n,y_n) \]
So, in this case, \[ \phi(t_n,y_n;h) = f(t_n,y_n) + \frac{h}{2}\left(\frac{\partial f}{\partial t}\bigg\rvert_{t_n,y_n}+ \frac{\partial f}{\partial y}\bigg\rvert_{t_n,y_n}f(t_n,y_n)\right) \]
That is, $\phi$ is Euler's step (Taylor-1), with an adjustment for the concavity of the solution (represented by its second derivative).
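As a quick illustration, take the assumed test equation $y'=y$ (an illustration, not tied to any particular example from class): $f(t,y)=y$ gives $\frac{\partial f}{\partial t}=0$ and $\frac{\partial f}{\partial y}=1$, so
\[ \phi(t_n,y_n;h) = y_n + \frac{h}{2}\,y_n, \qquad u_{n+1} = u_n + h\,\phi(t_n,u_n;h) = \left(1+h+\frac{h^2}{2}\right)u_n. \]
Each step multiplies by the degree-2 Taylor polynomial of $e^{h}$, just as we'd hope.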
Examples:
The code sketched below is not optimized for numerical computation, but it shows the dependence of each successive step on $h$ (the values are defined recursively).
If you increase $n$ you will eventually exceed the recursion limit of Mathematica.
A small change in how the $w$ values are computed makes it efficient for calculation.
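The notebook itself isn't reproduced here; the following is a sketch in its spirit, assuming the test problem $y'=y$, $y(0)=1$ (an assumption, not necessarily the in-class example).

```
(* Taylor-2 steps defined recursively, with h left symbolic so that each
   w[n] is visibly a polynomial in h.  Assumed problem: y' = y, y(0) = 1. *)
f[t_, y_] := y;
phi[t_, y_, h_] := Module[{s, u},
  f[t, y] + (h/2) ((D[f[s, u], s] /. {s -> t, u -> y}) +
                   (D[f[s, u], u] /. {s -> t, u -> y}) f[t, y])];

w[0] = 1;
w[n_] := w[n - 1] + h phi[(n - 1) h, w[n - 1], h]  (* recursive: revisits every earlier step *)

Expand[w[2]]  (* 1 + 2 h + 2 h^2 + h^3 + h^4/4 *)

(* Memoizing -- w[n_] := w[n] = ... -- caches each value so it is computed
   only once: the small change that makes the calculation efficient. *)
```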
Now we go about eliminating the derivatives, creating Runge-Kutta Methods. They are based on a clever observation about the multivariate Taylor series expansion.
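In brief, the observation is this: expanding $f$ in both arguments,
\[ f(t+\alpha,\,y+\beta) = f(t,y) + \alpha\,\frac{\partial f}{\partial t} + \beta\,\frac{\partial f}{\partial y} + O(\alpha^2+|\alpha\beta|+\beta^2), \]
so choosing $\alpha=\frac{h}{2}$ and $\beta=\frac{h}{2}f(t,y)$ gives
\[ f\!\left(t+\frac{h}{2},\,y+\frac{h}{2}f(t,y)\right) = f(t,y) + \frac{h}{2}\left(\frac{\partial f}{\partial t}+\frac{\partial f}{\partial y}f\right) + O(h^2), \]
which matches the Taylor-2 $\phi$ up to $O(h^2)$ -- using only evaluations of $f$, with no partial derivatives. Taking $\phi(t_n,u_n;h)=f\big(t_n+\frac{h}{2},\,u_n+\frac{h}{2}f(t_n,u_n)\big)$ is exactly the Runge midpoint method.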
Let's compare other estimates on Example 8.11:
Method | Estimate |
Euler's | |
Taylor-2 | |
Runge Midpoint | |
Runge Trapezoidal | 1.583 |
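For reference, a Mathematica sketch of one step of each Runge method above (these are the standard formulas; the right-hand side `f` and the data for Example 8.11 aren't reproduced here, so no table values are recomputed):

```
(* One step of each method, for any right-hand side f[t, y]. *)
midpointStep[f_, t_, y_, h_] := y + h f[t + h/2, y + (h/2) f[t, y]]

trapezoidStep[f_, t_, y_, h_] := Module[{k1 = f[t, y]},
  y + (h/2) (k1 + f[t + h, y + h k1])]
```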