Euler's Method
This first method for solving a well-posed initial value problem (IVP) is as simple as can be: we approximate a derivative with a finite difference. That's it!
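Spelled out, approximating the derivative in y' = f(t, y) by a forward difference with a small step h gives

$$ y'(t) \approx \frac{y(t+h) - y(t)}{h}, \qquad\text{so that}\qquad y(t+h) \approx y(t) + h\,f\bigl(t, y(t)\bigr). $$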
Alternatively, Euler's method can be derived from the Taylor series expansion, and that is perhaps the better approach, since it can be generalized and since it lets us analyze the magnitude of the error we are going to make.
We seek approximations to the solution y(t) of the IVP

$$ y' = f(t, y), \qquad a \le t \le b, \qquad y(a) = \alpha, $$

at equally spaced mesh points t_i = a + ih, i = 0, 1, ..., N, in which case h = (b - a)/N.
So, using Taylor, we have that

$$ y(t_{i+1}) = y(t_i) + h\,y'(t_i) + \frac{h^2}{2}\,y''(\xi_i), \qquad \xi_i \in (t_i, t_{i+1}), $$

and, since y satisfies the differential equation,

$$ y(t_{i+1}) = y(t_i) + h\,f\bigl(t_i, y(t_i)\bigr) + \frac{h^2}{2}\,y''(\xi_i). $$

We simply drop the error term, and hope that we don't make too bad a mistake, to generate the succession of iterates

$$ w_0 = \alpha, \qquad w_{i+1} = w_i + h\,f(t_i, w_i), \qquad i = 0, 1, \ldots, N-1. $$
This is a difference equation associated with the given differential equation. Its solution, we hope, will be relatively close to the solution of the IVP. Hope aside, how bad can things get? What's the worst that can happen? The answer is in the following theorem:
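As an illustration, here is a minimal sketch of this iteration in Python. The test problem y' = y - t^2 + 1, y(0) = 0.5 (with exact solution (t+1)^2 - e^t/2) is our own choice for demonstration, not one fixed by the text.

import math

def euler(f, a, b, alpha, N):
    # Euler's method: t_i = a + i*h, w_0 = alpha, w_{i+1} = w_i + h*f(t_i, w_i).
    h = (b - a) / N
    t = [a + i * h for i in range(N + 1)]
    w = [alpha]
    for i in range(N):
        w.append(w[i] + h * f(t[i], w[i]))
    return t, w

# Illustrative problem (our choice): y' = y - t^2 + 1, y(0) = 0.5 on [0, 2].
f = lambda t, y: y - t ** 2 + 1
exact = lambda t: (t + 1) ** 2 - 0.5 * math.exp(t)

t, w = euler(f, 0.0, 2.0, 0.5, 10)
for ti, wi in zip(t, w):
    print(f"t = {ti:3.1f}   w = {wi:9.6f}   y = {exact(ti):9.6f}")

With N = 10 (so h = 0.2) the iterates track the exact solution, with the error growing toward the right endpoint.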
Theorem 5.9 (error bound): Suppose f is continuous and satisfies a Lipschitz condition with constant L on

$$ D = \{(t, y) : a \le t \le b,\ -\infty < y < \infty\}, $$

and that a constant M exists with

$$ |y''(t)| \le M \qquad \text{for all } t \in [a, b]. $$

Let y(t) denote the unique solution to the IVP

$$ y' = f(t, y), \qquad a \le t \le b, \qquad y(a) = \alpha, $$

and let w_0, w_1, ..., w_N be the Euler approximations. Then, for each i = 0, 1, ..., N,

$$ |y(t_i) - w_i| \le \frac{hM}{2L}\Bigl[e^{L(t_i - a)} - 1\Bigr]. $$
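To see how conservative this bound is in practice, one can compare it with the actual error on a concrete problem. The sketch below reuses the illustrative IVP from the previous sketch, for which ∂f/∂y = 1 gives L = 1, and |y''(t)| = |2 - e^t/2| is bounded on [0, 2] by M = e^2/2 - 2.

import math

def euler(f, a, b, alpha, N):
    # Same Euler iteration as in the sketch above.
    h = (b - a) / N
    t = [a + i * h for i in range(N + 1)]
    w = [alpha]
    for i in range(N):
        w.append(w[i] + h * f(t[i], w[i]))
    return h, t, w

# Illustrative problem (our choice): y' = y - t^2 + 1, y(0) = 0.5 on [0, 2].
f = lambda t, y: y - t ** 2 + 1
exact = lambda t: (t + 1) ** 2 - 0.5 * math.exp(t)
a, L, M = 0.0, 1.0, 0.5 * math.e ** 2 - 2   # Lipschitz constant and bound on |y''|

h, t, w = euler(f, a, 2.0, 0.5, 10)
for ti, wi in zip(t, w):
    actual = abs(exact(ti) - wi)
    bound = h * M / (2 * L) * (math.exp(L * (ti - a)) - 1)
    print(f"t = {ti:3.1f}   error = {actual:.5f}   bound = {bound:.5f}")

On this problem the bound stays above the observed error at every mesh point, though by a comfortable margin: it guards against the worst behavior allowed by L and M.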
One obvious strategy for improving our approximations is to make h tremendously small. This may backfire, however, due to round-off error: the number of steps grows like 1/h, and every step contributes its own floating-point error.
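A rough way to see the trade-off (a back-of-the-envelope sketch, with δ denoting a per-step round-off error that the discussion above does not quantify): the global truncation error is proportional to hM/2, while the accumulated round-off behaves like δ/h, so the total error at a fixed point looks roughly like

$$ E(h) \approx \frac{hM}{2} + \frac{\delta}{h}, $$

which is minimized not at h = 0 but near h = \sqrt{2\delta/M}. Pushing h below that only lets the δ/h term take over.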