Fitting Data (Curve Fitting Toolbox)

Evaluating the Goodness of Fit

After fitting data with one or more models, you should evaluate the goodness of fit. A visual examination of the fitted curve displayed in the Curve Fitting Tool should be your first step. Beyond that, the toolbox provides these goodness of fit measures for both linear and nonlinear parametric fits:

  • Residuals
  • Goodness of fit statistics
  • Confidence and prediction bounds

You can group these measures into two types: graphical and numerical. The residuals and prediction bounds are graphical measures, while the goodness of fit statistics and confidence bounds are numerical measures.

Generally speaking, graphical measures are more beneficial than numerical measures because they allow you to view the entire data set at once, and they can easily display a wide range of relationships between the model and the data. The numerical measures are more narrowly focused on a particular aspect of the data and often try to compress that information into a single number. In practice, depending on your data and analysis requirements, you might need to use both types to determine the best fit.

Note that it is possible that none of your fits can be considered the best one. In this case, it might be that you need to select a different model. Conversely, it is also possible that all the goodness of fit measures indicate that a particular fit is the best one. However, if your goal is to extract fitted coefficients that have physical meaning, but your model does not reflect the physics of the data, the resulting coefficients are useless. In this case, understanding what your data represents and how it was measured is just as important as evaluating the goodness of fit.

Residuals

The residuals from a fitted model are defined as the differences between the response data and the fit to the response data at each predictor value.

  • residual = data - fit

You display the residuals in the Curve Fitting Tool by selecting the menu item View->Residuals.

Mathematically, the residual for a specific predictor value is the difference between the response value $y$ and the predicted response value $\hat{y}$.

  • $r = y - \hat{y}$

Assuming the model you fit to the data is correct, the residuals approximate the random errors. Therefore, if the residuals appear to behave randomly, it suggests that the model fits the data well. However, if the residuals display a systematic pattern, it is a clear sign that the model fits the data poorly.
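
For example, you can compute and plot residuals at the command line. The following is a minimal sketch; it assumes column vectors x and y are already in the workspace and uses the toolbox fit function:

  % Fit a first-degree polynomial and compute its residuals.
  cfun = fit(x, y, 'poly1');   % linear parametric fit
  resid = y - cfun(x);         % residual = data - fit
  plot(x, resid, '.')          % residuals relative to the zero line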

A graphical display of the residuals for a first-degree polynomial fit is shown below. The top plot shows that the residuals are calculated as the vertical distance from the data point to the fitted curve. The bottom plot shows the residuals displayed relative to the fit, which is the zero line.

[Figure: a first-degree polynomial fit to the data (top) and the corresponding residuals about the zero line (bottom)]

The residuals appear randomly scattered around zero, indicating that the model describes the data well.

A graphical display of the residuals for a second-degree polynomial fit is shown below. The model includes only the quadratic term, and does not include a linear or constant term.

[Figure: a second-degree polynomial fit containing only the quadratic term (top) and its residuals (bottom)]

The residuals are systematically positive for much of the data range, indicating that this model is a poor fit for the data.

Goodness of Fit Statistics

After using graphical methods to evaluate the goodness of fit, you should examine the goodness of fit statistics. The Curve Fitting Toolbox supports these goodness of fit statistics for parametric models:

  • The sum of squares due to error (SSE)
  • R-square
  • Adjusted R-square
  • Root mean squared error (RMSE)

For the current fit, these statistics are displayed in the Results list box in the Fit Editor. For all fits in the current curve-fitting session, you can compare the goodness of fit statistics in the Table of fits.
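
At the command line, you can request these statistics directly from the fit function by asking for a second output argument. A minimal sketch, assuming data vectors x and y in the workspace:

  % The optional second output of fit is a goodness-of-fit structure
  % with fields sse, rsquare, dfe, adjrsquare, and rmse.
  [cfun, gof] = fit(x, y, 'poly1');
  gof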

Sum of Squares Due to Error. This statistic measures the total deviation of the response values from the fit to the response values. It is also called the summed square of residuals and is usually labeled as SSE.

  • $SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

A value closer to 0 indicates a better fit. Note that the SSE was previously defined in The Least Squares Fitting Method.
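
As a check, you can reproduce the reported SSE directly from the residuals; a sketch assuming the fit cfun from the earlier sketches:

  % SSE is the summed square of the residuals.
  sse = sum((y - cfun(x)).^2);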

R-Square. This statistic measures how successful the fit is in explaining the variation of the data. Put another way, R-square is the square of the correlation between the response values and the predicted response values. It is also called the square of the multiple correlation coefficient and the coefficient of multiple determination.

R-square is defined as the ratio of the sum of squares of the regression (SSR) to the total sum of squares (SST). SSR is defined as

  • $SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$

SST is also called the sum of squares about the mean, and is defined as

  • $SST = \sum_{i=1}^{n} (y_i - \bar{y})^2$

where SST = SSR + SSE. Given these definitions, R-square is expressed as

  • $R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$

R-square can take on any value between 0 and 1, with a value closer to 1 indicating a better fit. For example, an R-square value of 0.8234 means that the fit explains 82.34% of the total variation in the data about the average.

If you increase the number of fitted coefficients in your model, R-square might increase although the fit may not improve. To avoid this situation, you should use the degrees of freedom adjusted R-square statistic described below.

Note that it is possible to get a negative R-square for equations that do not contain a constant term. If R-square is defined as the proportion of variance explained by the fit, and if the fit is actually worse than just fitting a horizontal line, then R-square is negative. In this case, R-square cannot be interpreted as the square of a correlation.
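
The following sketch computes R-square from these definitions, assuming cfun, x, and y from the earlier sketches:

  % R-square = SSR/SST = 1 - SSE/SST
  yhat = cfun(x);
  sse  = sum((y - yhat).^2);     % sum of squares due to error
  sst  = sum((y - mean(y)).^2);  % total sum of squares about the mean
  rsq  = 1 - sse/sst;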

Degrees of Freedom Adjusted R-Square. This statistic uses the R-square statistic defined above, and adjusts it based on the residual degrees of freedom. The residual degrees of freedom is defined as the number of response values n minus the number of fitted coefficients m estimated from the response values.

  • $v = n - m$

v indicates the number of independent pieces of information involving the n data points that are required to calculate the sum of squares. Note that if parameters are bounded and one or more of the estimates are at their bounds, then those estimates are regarded as fixed. The degrees of freedom is increased by the number of such parameters.

The adjusted R-square statistic is generally the best indicator of the fit quality when you add additional coefficients to your model.

  • $\text{adjusted } R^2 = 1 - \frac{SSE\,(n-1)}{SST \cdot v}$

The adjusted R-square statistic can take on any value less than or equal to 1, with a value closer to 1 indicating a better fit.
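
A sketch of the computation, using sse and sst from the previous sketch, with v = n - m for the two-coefficient 'poly1' fit:

  % Adjusted R-square for a first-degree polynomial (m = 2 coefficients).
  n = numel(y);
  v = n - 2;                       % residual degrees of freedom
  adjrsq = 1 - sse*(n-1)/(sst*v);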

Root Mean Squared Error. This statistic is also known as the fit standard error and the standard error of the regression. It is defined as

  • $RMSE = s = \sqrt{MSE}$

where MSE is the mean square error or the residual mean square,

  • $MSE = \frac{SSE}{v}$

An RMSE value closer to 0 indicates a better fit.
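
A sketch of the computation, continuing from the quantities above:

  % RMSE is the square root of the residual mean square, MSE = SSE/v.
  mse  = sse/v;
  rmse = sqrt(mse);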

Confidence and Prediction Bounds

With the Curve Fitting Toolbox, you can calculate confidence bounds for the fitted coefficients, and prediction bounds for new observations or for the fitted function. Additionally, for prediction bounds, you can calculate simultaneous bounds, which take into account all predictor values, or you can calculate nonsimultaneous bounds, which take into account only individual predictor values. The confidence bounds are numerical, while the prediction bounds are displayed graphically.

The available confidence and prediction bounds are summarized below.

Table 3-2: Types of Confidence and Prediction Bounds

  Interval Type          Description
  Fitted coefficients    Confidence bounds for the fitted coefficients
  New observation        Prediction bounds for a new observation (response value)
  New function           Prediction bounds for a new function value

    Note Prediction bounds are often described as confidence bounds because you are calculating a confidence interval for a predicted response.

Confidence and prediction bounds define the lower and upper values of the associated interval, and therefore its width. The width of the interval indicates how uncertain you are about the fitted coefficients, the predicted observation, or the predicted fit. For example, a very wide interval for the fitted coefficients can indicate that you should use more data when fitting before you can say anything very definite about the coefficients.

The bounds are defined with a level of certainty that you specify. The level of certainty is often 95%, but it can be any value such as 90%, 99%, 99.9%, and so on. For example, you might want to take a 5% chance of being incorrect about predicting a new observation. Therefore, you would calculate a 95% prediction interval. This interval indicates that you have a 95% chance that the new observation is actually contained within the lower and upper prediction bounds.

Calculating and Displaying Confidence Bounds. The confidence bounds for fitted coefficients are given by

  • $C = b \pm t\sqrt{S}$

where b are the coefficients produced by the fit, t is the inverse of Student's t cumulative distribution function, and S is a vector of the diagonal elements from the covariance matrix of the coefficient estimates, $(X^TX)^{-1}s^2$. X is the design matrix, $X^T$ is the transpose of X, and $s^2$ is the mean squared error.

Refer to the tinv function, included with the Statistics Toolbox, for a description of t. Refer to Linear Least Squares for more information about X and $X^T$.

The confidence bounds are displayed in the Results list box in the Fit Editor using the following format.

  • p1 = 1.275 (1.113, 1.437)

The fitted value for the coefficient p1 is 1.275, the lower bound is 1.113, the upper bound is 1.437, and the interval width is 0.324. By default, the confidence level for the bounds is 95%. You can change this level to any value with the View->Confidence Level menu item in the Curve Fitting Tool.

You can calculate confidence intervals at the command line with the confint function.
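
For example, a sketch assuming the fit cfun from the earlier sketches:

  % Confidence bounds for the fitted coefficients; the default level is 95%.
  ci = confint(cfun);          % 2-by-m matrix: lower bounds in row 1, upper in row 2
  ci99 = confint(cfun, 0.99);  % the same bounds at a 99% confidence level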

Calculating and Displaying Prediction Bounds. As mentioned previously, you can calculate prediction bounds for a new observation or for the fitted curve. In both cases, the prediction is based on an existing fit to the data. Additionally, the bounds can be simultaneous and measure the confidence for all predictor values, or they can be nonsimultaneous and measure the confidence only for a single predetermined predictor value. If you are predicting a new observation, nonsimultaneous bounds measure the confidence that the new observation lies within the interval given a single predictor value. Simultaneous bounds measure the confidence that a new observation lies within the interval regardless of the predictor value.

The nonsimultaneous prediction bounds for a new observation at the predictor value x are given by

  • $P_{n,o} = \hat{y} \pm t\sqrt{s^2 + xSx^T}$

where $s^2$ is the mean squared error, t is the inverse of Student's t cumulative distribution function, and S is the covariance matrix of the coefficient estimates, $(X^TX)^{-1}s^2$. Note that x is defined as a row vector of the Jacobian evaluated at a specified predictor value.

The simultaneous prediction bounds for a new observation and for all predictor values are given by

  • $P_{s,o} = \hat{y} \pm f\sqrt{s^2 + xSx^T}$

where f is the inverse of the F cumulative distribution function. Refer to the finv function, included with the Statistics Toolbox, for a description of f.

The nonsimultaneous prediction bounds for the function at a single predictor value x are given by

  • $P_{n,f} = \hat{y} \pm t\sqrt{xSx^T}$

The simultaneous prediction bounds for the function and for all predictor values are given by

  • $P_{s,f} = \hat{y} \pm f\sqrt{xSx^T}$

You can graphically display prediction bounds two ways: using the Curve Fitting Tool or using the Analysis GUI. With the Curve Fitting Tool, you can display nonsimultaneous prediction bounds for new observations with View->Prediction Bounds. By default, the confidence level for the bounds is 95%. You can change this level to any value with View->Confidence Level. With the Analysis GUI, you can display nonsimultaneous prediction bounds for the function or for new observations.

You can display numerical prediction bounds of any type at the command line with the predint function.
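
For example, a sketch assuming cfun and x from the earlier sketches; the option strings follow the predint documentation:

  % Nonsimultaneous 95% prediction bounds for new observations (the default).
  obs = predint(cfun, x);
  % Simultaneous bounds for the fitted function rather than a new observation.
  fun = predint(cfun, x, 0.95, 'functional', 'on');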

To understand the quantities associated with each type of prediction interval, recall that the data, fit, and residuals (random errors) are related through the formula

  • data = fit + residuals

Suppose you plan to take a new observation at the predictor value $x_{n+1}$. Call the new observation $y_{n+1}(x_{n+1})$ and the associated error $e_{n+1}$. Then $y_{n+1}(x_{n+1})$ satisfies the equation

  • $y_{n+1}(x_{n+1}) = f(x_{n+1}) + e_{n+1}$

where $f(x_{n+1})$ is the true but unknown function you want to estimate at $x_{n+1}$. The likely values for the new observation or for the estimated function are provided by the nonsimultaneous prediction bounds.

If instead you want the likely value of the new observation to be associated with any predictor value, the previous equation becomes

  • $y_{n+1}(x) = f(x) + e_{n+1}$

The likely values for this new observation or for the estimated function are provided by the simultaneous prediction bounds.

The types of prediction bounds are summarized below.

Table 3-3: Types of Prediction Bounds

  Type of Bound        Associated Equation
  Observation
    Nonsimultaneous    $y_{n+1}(x_{n+1})$
    Simultaneous       $y_{n+1}(x)$, globally for any $x$
  Function
    Nonsimultaneous    $f(x_{n+1})$
    Simultaneous       $f(x)$, simultaneously for all $x$

The nonsimultaneous and simultaneous prediction bounds for a new observation and the fitted function are shown below. Each graph contains three curves: the fit, the lower confidence bounds, and the upper confidence bounds. The fit is a single-term exponential to generated data and the bounds reflect a 95% confidence level. Note that the intervals associated with a new observation are wider than the fitted function intervals because of the additional uncertainty in predicting a new response value (the fit plus random errors).

[Figure: nonsimultaneous and simultaneous 95% prediction bounds for a new observation and for the fitted function, shown for a single-term exponential fit]

Example: Evaluating the Goodness of Fit

This example fits several polynomial models to generated data and evaluates the goodness of fit. The data is cubic and includes a range of missing values.

  rand('state',0)
  x = [1:0.1:3 9:0.1:10]';
  c = [2.5 -0.5 1.3 -0.1];
  y = c(1) + c(2)*x + c(3)*x.^2 + c(4)*x.^3 + (rand(size(x))-0.5);

After you import the data, fit it using a cubic polynomial and a fifth degree polynomial. The data, fits, and residuals are shown below. You display the residuals in the Curve Fitting Tool with the View->Residuals menu item.
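
You can produce the same two fits at the command line; a sketch using the fit function on the generated data:

  % Fit the data with cubic and fifth-degree polynomials,
  % keeping the goodness-of-fit statistics for comparison.
  [f3, gof3] = fit(x, y, 'poly3');
  [f5, gof5] = fit(x, y, 'poly5');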

[Figure: the generated data with the cubic (poly3) and fifth-degree (poly5) polynomial fits, and the residuals for both fits]

Both models appear to fit the data well, and the residuals appear to be randomly distributed around zero. Therefore, a graphical evaluation of the fits does not reveal any obvious differences between the two equations.

The numerical fit results are shown below.

[Screenshot: numerical fit results for poly3 and poly5, showing the fitted coefficients with their 95% confidence bounds]

As expected, the fit results for poly3 are reasonable because the generated data is cubic. The 95% confidence bounds on the fitted coefficients indicate that they are acceptably accurate. However, the 95% confidence bounds for poly5 indicate that the fitted coefficients are not known accurately.

The goodness of fit statistics are shown below. By default, the adjusted R-square and RMSE statistics are not displayed in the Table of Fits. To display these statistics, open the Table Options GUI by clicking the Table options button. The statistics do not reveal a substantial difference between the two equations.

[Screenshot: Table of Fits comparing SSE, R-square, adjusted R-square, and RMSE for poly3 and poly5]

The 95% nonsimultaneous prediction bounds for new observations are shown below. To display prediction bounds in the Curve Fitting Tool, select the View->Prediction Bounds menu item. Alternatively, you can view prediction bounds for the function or for new observations using the Analysis GUI.

[Figure: 95% nonsimultaneous prediction bounds for new observations for poly3 and poly5]

The prediction bounds for poly3 indicate that new observations can be predicted accurately throughout the entire data range. This is not the case for poly5. It has wider prediction bounds in the area of the missing data, apparently because the data does not contain enough information to estimate the higher degree polynomial terms accurately. In other words, a fifth-degree polynomial overfits the data. You can confirm this by using the Analysis GUI to compute bounds for the functions themselves.
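
At the command line, a sketch of the equivalent check with predint, evaluated on a grid spanning the gap in the data:

  % Nonsimultaneous 95% prediction bounds for the poly5 function itself.
  xx = linspace(1, 10, 200)';
  bounds5 = predint(f5, xx, 0.95, 'functional', 'off');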

The 95% prediction bounds for poly5 are shown below. As you can see, the uncertainty in estimating the function is large in the area of the missing data. Therefore, you would conclude that more data must be collected before you can make accurate predictions using a fifth-degree polynomial.

[Figure: 95% prediction bounds for the poly5 function, showing large uncertainty over the range of missing data]

In conclusion, you should examine all available goodness of fit measures before deciding on the best fit. A graphical examination of the fit and residuals should always be your initial approach. However, some fit characteristics are revealed only through numerical fit results, statistics, and prediction bounds.
