To determine which method is best in a given case, we use a leave-one-out procedure called "cross-validation": a true value is removed from the data set, an estimate is then made at that site using the given method, and the estimate is compared with the true value to obtain measures of "goodness-of-fit".
Ideally we would get a perfect fit (i.e., if the actual value is 5, say, then the method would estimate 5), but we never do in practice. (A perfect fit would result in all the data falling on the straight line.)
We then compare the results of the various methods and select the one that gives the best results. We would like our cross-validation results to satisfy the following criteria (a short sketch of how they can be checked follows the list):
- that the estimates have statistics (e.g. mean, variance) similar to those of the true values;
- that the differences between the true and the estimated values have mean zero (unbiasedness); and
- that the absolute values of the differences between the true and estimated values are small.
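To make the procedure concrete, here is a minimal sketch of leave-one-out cross-validation in Python. The inverse-distance-weighting interpolator and the synthetic data are illustrative assumptions only, standing in for whatever estimation method is being assessed; the diagnostics printed at the end correspond to the three criteria above.

import numpy as np

def idw_estimate(x_known, y_known, z_known, x0, y0, power=2.0):
    """Estimate the value at (x0, y0) by inverse-distance weighting (a stand-in method)."""
    d = np.hypot(x_known - x0, y_known - y0)
    w = 1.0 / d**power
    return np.sum(w * z_known) / np.sum(w)

def leave_one_out(x, y, z, estimator=idw_estimate):
    """Remove each point in turn, re-estimate it from the others, and return the estimates."""
    n = len(z)
    z_hat = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i          # all sites except the one held out
        z_hat[i] = estimator(x[keep], y[keep], z[keep], x[i], y[i])
    return z_hat

# Synthetic example data (assumed for illustration)
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
z = np.sin(x) + 0.5 * y + rng.normal(0, 0.1, 30)   # "true" values

z_hat = leave_one_out(x, y, z)
errors = z - z_hat

# Cross-validation diagnostics matching the three criteria above
print("mean/variance of true values:      ", z.mean(), z.var())
print("mean/variance of estimates:        ", z_hat.mean(), z_hat.var())
print("mean error (should be near zero):  ", errors.mean())
print("mean absolute error (small = good):", np.abs(errors).mean())

Running the sketch for each candidate method and comparing these diagnostics is one way to carry out the selection described above: the method whose estimates best reproduce the statistics of the true values, with near-zero mean error and the smallest absolute errors, would be preferred.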