Divided Differences
We start with the interpolating polynomial of degree $n$ given in Newton form:
$$P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0)(x - x_1)\cdots(x - x_{n-1})$$
where the coefficients $a_i$ are to be determined. If all the $x_i = 0$, then we have the Maclaurin expansion of the polynomial; if all $x_i = c$ for some constant $c$, then we have the Taylor series expansion of $P_n$ about $c$. In either event, the coefficients will be scaled derivatives at a single point.
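For instance (a quick check, to make the Taylor case concrete): setting every $x_i = c$ in the Newton form collapses each product into a power of $(x - c)$,
$$P_n(x) = a_0 + a_1(x - c) + a_2(x - c)^2 + \cdots + a_n(x - c)^n,$$
and differentiating $k$ times and evaluating at $x = c$ gives $a_k = P_n^{(k)}(c)/k!$: scaled derivatives at the single point $c$.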
We're now interested in the case where the $x_i$ are different (possibly simply equally spaced points); the coefficients will still be related to derivative information, but that information will be distributed across the $x_i$, rather than focused on a single point. The key to computing them will be divided differences.
Divided differences are basically approximations to derivatives, as one can see from Theorem 3.6.
This is a somewhat ``classical'' subject (old-fashioned?): one used to consult tables to evaluate functions, whereas today we've got computers which have been programmed to do the job for us. Textbooks used to include lots of tables (e.g. of trigonometric functions), and if you wanted $\sin(x)$ for an $x$ falling between tabulated values, you'd look in the table, find the values of $\sin$ at the nearest tabulated arguments, and interpolate! Nowadays, this is rarer, but you may have still had to do such things in a statistics class, for example, where tables of normal probabilities or t-distribution values are still used....
In what follows, $P_n(x)$ denotes the $n^{th}$-degree interpolating polynomial.
Theorem 3.6: Suppose that $f \in C^n[a,b]$ and $x_0, x_1, \ldots, x_n$ are distinct numbers in $[a,b]$. Then a number $\xi \in (a,b)$ exists with
$$f[x_0, x_1, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!}.$$
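The $n = 1$ case is worth a sanity check: it is just the Mean Value Theorem in disguise, since the first divided difference is a secant slope:
$$f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = f'(\xi) \qquad \text{for some } \xi \in (x_0, x_1).$$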
Derivation of the coefficients of the Newton polynomial: Rather than define the divided differences as above, we could generate a recursive definition for them.
Define the term $f[x_0, x_1, \ldots, x_k]$ as the leading coefficient of $P_k(x)$:
$$P_k(x) = f[x_0, x_1, \ldots, x_k]\,x^k + \text{(lower-order terms)}.$$
Since
$$P_n(x) = \frac{(x - x_0)\,Q(x) - (x - x_n)\,P_{n-1}(x)}{x_n - x_0},$$
where $P_{n-1}(x)$ is the interpolating polynomial to $x_0, \ldots, x_{n-1}$, and $Q(x)$ is the interpolating polynomial to $x_1, \ldots, x_n$ (one checks that the right-hand side has degree $n$ and agrees with $f$ at every $x_i$), we can compute the leading coefficient of $P_n(x)$ as a function of the leading coefficients of $P_{n-1}(x)$ and $Q(x)$. Hence,
$$f[x_0, x_1, \ldots, x_n] = \frac{f[x_1, \ldots, x_n] - f[x_0, \ldots, x_{n-1}]}{x_n - x_0}.$$
When you have a recursive definition, you need a basement: here it is the leading coefficient of the constant interpolating polynomial,
$$f[x_i] = f(x_i).$$
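To make the recursion concrete, here is a minimal Python sketch (the notes themselves contain no code; the function $\sin$ and the nodes $0.4, \ldots, 0.7$ are arbitrary choices of mine). The basement fills the first column, and each higher column applies the recursion to the one below it:

```python
import math

def dd_table(xs, ys):
    """Divided-difference table: table[k][i] holds f[x_i, ..., x_{i+k}].

    table[0] is the basement f[x_i] = f(x_i); column k applies the
    recursion once to column k-1.
    """
    table = [list(ys)]
    for k in range(1, len(xs)):
        prev = table[k - 1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

# Quick check of Theorem 3.6 with f(x) = sin(x): the third divided
# difference should equal f'''(xi)/3! = -cos(xi)/6 for some xi in (0.4, 0.7).
xs = [0.4, 0.5, 0.6, 0.7]
table = dd_table(xs, [math.sin(x) for x in xs])
print(table[3][0])           # about -0.1413
print(-math.cos(0.55) / 6)   # about -0.1421
```

The Newton coefficients $f[x_0], f[x_0, x_1], \ldots, f[x_0, \ldots, x_n]$ are the column heads table[0][0], table[1][0], ..., table[n][0] (the diagonal of the usual triangular layout).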
Conclusion: the Newton interpolating polynomial is given by
$$P_n(x) = f[x_0] + \sum_{k=1}^{n} f[x_0, x_1, \ldots, x_k]\,(x - x_0)(x - x_1)\cdots(x - x_{k-1}).$$
To get the polynomial's coefficients, you simply look along the diagonal of the divided difference table. Computation of the coefficients costs
$$\frac{3n(n+1)}{2}$$
operations (the table has $n(n+1)/2$ entries, each requiring two subtractions and a division), whereas evaluation involves $4n-1$ operations. While this seems expensive compared to Horner's method, we generate a succession of estimates along the way (using the $0^{th}$ through $n^{th}$ degree polynomials). So there is bang for the buck....
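Here is a minimal sketch of that evaluation (Python again; the function name is my own). Each pass through the loop costs about four operations and emits the next estimate, so the succession of estimates comes essentially for free:

```python
def newton_estimates(xs, coeffs, x):
    """Evaluate the Newton form at x, returning [P_0(x), P_1(x), ..., P_n(x)].

    coeffs[k] is the divided difference f[x_0, ..., x_k] (the diagonal
    of the table); each estimate costs ~4 operations beyond the last.
    """
    estimates = []
    p, basis = 0.0, 1.0
    for xk, ck in zip(xs, coeffs):
        p += ck * basis       # P_k(x) = P_{k-1}(x) + f[x_0,...,x_k] * basis
        estimates.append(p)
        basis *= (x - xk)     # extend the product (x - x_0)...(x - x_k)
    return estimates
```

If the last few estimates agree to the accuracy you need, you can stop there.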
This form of the interpolating polynomial is nice because we can easily increase the degree by adding an additional knot, without much additional work: we reuse the interpolating polynomial of the previous degree. Furthermore, the knots don't have to be added at the end or beginning: this process is independent of the order of the $x_i$.
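To see how cheap the extension is, here is a hedged sketch building on dd_table above: appending a knot adds one new entry per column plus one brand-new column ($\mathcal{O}(n)$ work), and every previously computed coefficient table[k][0] survives untouched:

```python
def add_knot(xs, table, x_new, y_new):
    """Extend an existing divided-difference table by one knot, in place."""
    xs.append(x_new)
    table[0].append(y_new)
    for k in range(1, len(xs)):
        if k == len(table):
            table.append([])      # a new highest-order column appears
        i = len(table[k])         # index of the single new entry in column k
        table[k].append((table[k - 1][i + 1] - table[k - 1][i])
                        / (xs[i + k] - xs[i]))
    # the one new Newton coefficient is f[x_0, ..., x_new] = table[-1][0]
```

Because divided differences are symmetric in their arguments, ``appending'' here really does accommodate any new knot, regardless of where it sits relative to the others.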
By following the table of divided differences up from the function values, we get a look at the estimated value of the function using polynomials of higher and higher degree. We hope that if the values are settling down, we're doing a pretty good job of approximating the function (and that we needn't proceed to even higher degree). If the values are not settling down, however, we know that with the Newton formulation we can cheaply push on to higher degree and do better.
If you're at the ``left-hand side'' (near $x_0$) of the table, then it makes sense to make use of the forward differences; if you're at the right of the table, then use the backward differences. The only advantage of using one side versus the other is in watching the value of $P(x)$ stabilize as polynomials of successively higher degree are used; either form gives the same value for the highest-degree approximation. So the authors' injunction that ``The Newton formulas are not appropriate for approximating $f(x)$ when $x$ lies near the center of the table....'' is a little draconian.... Use them!
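In the column layout used in the sketches above, the forward coefficients are the tops of the columns (table[k][0]) and the backward coefficients are the bottoms (table[k][-1]); since divided differences are symmetric in their arguments, the same evaluation loop serves both. A hedged sketch, reusing the earlier functions:

```python
def backward_estimates(xs, table, x):
    """Newton backward form: the low-degree estimates now track the
    right-hand end of the table, but the top-degree value is identical."""
    coeffs = [col[-1] for col in table]   # f[x_n], f[x_{n-1}, x_n], ...
    return newton_estimates(xs[::-1], coeffs, x)
```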