===Linear Regression (Least Squares)===
The method of least squares requires that a straight line be fitted to a set of data points. If the regression is on <math>Y</math>, the line is fitted so that the sum of the squares of the vertical deviations from the points to the line is minimized. If the regression is on <math>X</math>, the line is fitted so that the sum of the squares of the horizontal deviations from the points to the line is minimized. To illustrate the method, this section presents a regression on <math>Y</math>. Consider the linear model [2]:

::<math>{{Y}_{i}}={{\beta }_{0}}+{{\beta }_{1}}{{X}_{i1}}+{{\beta }_{2}}{{X}_{i2}}+...+{{\beta }_{p}}{{X}_{ip}}</math>

<br>

or in matrix form where bold letters indicate matrices:

<br>

::<math>Y=X\beta </math>

<br>

:where:

<br>

::<math>Y=\left[ \begin{matrix}
   {{Y}_{1}}  \\
   {{Y}_{2}}  \\
   \vdots  \\
   {{Y}_{N}}  \\
\end{matrix} \right]</math>

::<math>X=\left[ \begin{matrix}
   1 & {{X}_{1,1}} & \cdots  & {{X}_{1,p}}  \\
   1 & {{X}_{2,1}} & \cdots  & {{X}_{2,p}}  \\
   \vdots  & \vdots  & \ddots  & \vdots  \\
   1 & {{X}_{N,1}} & \cdots  & {{X}_{N,p}}  \\
\end{matrix} \right]</math>

<br>

:and:

<br>

::<math>\beta =\left[ \begin{matrix}
   {{\beta }_{0}}  \\
   {{\beta }_{1}}  \\
   \vdots  \\
   {{\beta }_{p}}  \\
\end{matrix} \right]</math>

The vector <math>\beta </math> holds the values of the parameters. Now let <math>\widehat{\beta }</math> denote the estimates of these parameters, that is, the regression coefficients. The vector of estimated regression coefficients is:

::<math>\widehat{\beta }=\left[ \begin{matrix}
   {{\widehat{\beta }}_{0}}  \\
   {{\widehat{\beta }}_{1}}  \\
   \vdots  \\
   {{\widehat{\beta }}_{p}}  \\
\end{matrix} \right]</math>
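
As a numerical illustration, the following sketch (Python with NumPy; the data values and variable names are assumptions made only for this example, not part of the model above) assembles the response vector <math>Y</math> and the design matrix <math>X</math>, including the leading column of ones that corresponds to the intercept <math>{{\beta }_{0}}</math>:

<pre>
import numpy as np

# Hypothetical observed data: N = 5 observations of p = 2 predictor variables.
X_raw = np.array([[1.0, 2.0],
                  [2.0, 1.0],
                  [3.0, 4.0],
                  [4.0, 3.0],
                  [5.0, 5.0]])
Y = np.array([3.1, 3.9, 7.2, 7.8, 10.1])

# Design matrix: prepend a column of ones so that beta_0 acts as the intercept.
X = np.column_stack([np.ones(X_raw.shape[0]), X_raw])
print(X)
</pre>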

Solving for <math>\widehat{\beta }</math> requires left multiplying both sides of the matrix form of the linear model by the transpose of <math>X</math>, <math>{{X}^{T}}</math>, which gives the normal equations:

::<math>({{X}^{T}}X)\widehat{\beta }={{X}^{T}}Y</math>

The term <math>({{X}^{T}}X)</math> is a square matrix, and it is invertible provided the columns of <math>X</math> are linearly independent. Left multiplying both sides by <math>{{({{X}^{T}}X)}^{-1}}</math> gives:

::<math>\widehat{\beta }={{({{X}^{T}}X)}^{-1}}{{X}^{T}}Y</math>
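
This closed-form estimator can be checked numerically. The sketch below (Python with NumPy, reusing the same assumed data as the earlier example) solves the normal equations for <math>\widehat{\beta }</math> and compares the result with NumPy's built-in least-squares routine; solving the linear system is generally preferred to forming the explicit inverse of <math>{{X}^{T}}X</math>:

<pre>
import numpy as np

# Assumed illustrative data: N = 5 observations of p = 2 predictor variables.
X_raw = np.array([[1.0, 2.0],
                  [2.0, 1.0],
                  [3.0, 4.0],
                  [4.0, 3.0],
                  [5.0, 5.0]])
Y = np.array([3.1, 3.9, 7.2, 7.8, 10.1])

# Design matrix with a leading column of ones for the intercept term.
X = np.column_stack([np.ones(X_raw.shape[0]), X_raw])

# Normal equations: (X^T X) beta_hat = X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Cross-check against NumPy's least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(beta_hat)
print(beta_lstsq)
</pre>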