Normal Equation for optimization
Like gradient descent, the "Normal Equation" method is another way of minimizing $J(\theta)$. In the "Normal Equation" method, we minimize $J$ by explicitly taking its derivatives with respect to the $\theta_j$'s and setting them to zero. This allows us to find the optimal $\theta$ without iteration.
The cost function is:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

Find $\theta \in \mathbb{R}^{n+1}$ which minimizes the cost function $J(\theta)$:

$$\frac{\partial}{\partial \theta_j} J(\theta) = 0 \quad \text{(for every } j\text{)}$$

Here $X$ is the $m \times (n+1)$ design matrix, with one training example per row (and $x_0 = 1$), and $y$ is the $m$-dimensional vector of targets.
The normal equation formula is given below:

$$\theta = \left( X^T X \right)^{-1} X^T y$$

In Octave/MATLAB:

```
theta = pinv(X' * X) * X' * y;
```
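For reference, here is a short sketch of where this formula comes from, writing $J$ in matrix form and setting its gradient to zero as described above:

$$J(\theta) = \frac{1}{2m} (X\theta - y)^T (X\theta - y), \qquad \nabla_\theta J(\theta) = \frac{1}{m} X^T (X\theta - y) = 0 \;\Rightarrow\; X^T X \theta = X^T y.$$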
There is no need to do feature scaling with the normal equation.
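As a minimal sketch (the data and variable names are made up for illustration), fitting a line through three points with the normal equation:

```
% Three training examples, one feature; y is exactly 1 + 2*x
x = [1; 2; 3];
y = [3; 5; 7];

m = length(y);
X = [ones(m, 1), x];              % m x (n+1) design matrix with intercept

theta = pinv(X' * X) * X' * y;    % normal equation; returns theta = [1; 2]
```

Note that the raw, unscaled `x` is used directly; with gradient descent we would normally scale the features first.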
The following is a comparison of gradient descent and the normal equation:
| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose learning rate $\alpha$ | No need to choose $\alpha$ |
| Needs many iterations | No need to iterate |
| $O(kn^2)$ | $O(n^3)$, need to calculate inverse of $X^T X$ |
| Works well even if $n$ is large | Slow if $n$ is very large |
In practice, when $n$ exceeds 10,000 it might be a good time to go from a normal equation solution to an iterative process.
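For contrast, a minimal batch gradient descent loop for the same problem (reusing `X`, `y`, and `m` from the sketch above; the learning rate and iteration count are hand-picked for this illustration, which is exactly the tuning the normal equation avoids):

```
alpha = 0.1;                      % learning rate, chosen by hand
num_iters = 1500;                 % iteration count, chosen by hand
theta = zeros(size(X, 2), 1);

for iter = 1:num_iters
  % simultaneous update of every theta_j using the gradient of J
  theta = theta - (alpha / m) * X' * (X * theta - y);
end                               % theta approaches [1; 2]
```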
When implementing the normal equation in Octave/MATLAB, we want to use the `pinv` function rather than `inv`. The `pinv` function will give you a value of $\theta$ even if $X^T X$ is not invertible.
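A toy sketch of the difference, with a deliberately redundant third column so that $X^T X$ is singular:

```
% The third column is 2x the second, so X_bad' * X_bad is singular.
X_bad = [ones(3, 1), [1; 2; 3], [2; 4; 6]];
y = [3; 5; 7];

theta_pinv = pinv(X_bad' * X_bad) * X_bad' * y;  % pinv still returns a theta
% inv(X_bad' * X_bad) * X_bad' * y               % inv warns: matrix singular
```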
If $X^T X$ is non-invertible, the common causes might be having (a quick check is sketched after this list):
- Redundant features, where two features are very closely related (i.e. they are linearly dependent). In this case, delete a feature that is linearly dependent with another.
- Too many features (e.g. $m \le n$, i.e. at least as many features as training examples). In this case, delete one or more features or use "regularization" (to be explained in a later lesson).
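One hypothetical sanity check for either cause (a sketch; this rank test is an assumption, not part of the original notes): if the rank of $X$ is smaller than its number of columns, then $X^T X$ is singular.

```
% rank(X) < n+1 whenever columns are linearly dependent or m < n+1;
% in either case X' * X is non-invertible.
if rank(X_bad) < size(X_bad, 2)
  disp('X has linearly dependent columns, so X''*X is singular');
end
```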