Newcastle University School of Chemical Engineering and Advanced Materials
PRINCIPLES OF LINEAR LEAST SQUARES
by M. T. Tham
CONTENTS
Introduction
Problem Statement
Least-squares Solution
INTRODUCTION
Least-squares is one of the most commonly used methods in numerical computation. Essentially, it is a technique for solving a set of equations in which there are more equations than unknowns, i.e. an overdetermined set of equations. This set of notes shows the origins of a particular form of the algorithm: batch linear least-squares.
PROBLEM STATEMENT

Consider a general relationship between a dependent variable and n independent variables that is linear-in-the-parameters:

y_i = a_1 x_{i1} + a_2 x_{i2} + \dots + a_n x_{in}

where

y_i = the i'th observation of the dependent variable

x_{ij} = the i'th observation of the j'th independent variable

a_j = the coefficient associated with the j'th independent variable

Say m sets of observations (measurements) of the dependent and independent variables have been made, i.e.

y_1 = a_1 x_{11} + a_2 x_{12} + \dots + a_n x_{1n}
y_2 = a_1 x_{21} + a_2 x_{22} + \dots + a_n x_{2n}
\vdots
y_m = a_1 x_{m1} + a_2 x_{m2} + \dots + a_n x_{mn}
This set of expressions can be rewritten in more compact form using matrix-vector notation as:

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}
=
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots &        & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}

Letting

\mathbf{y} = [y_1, y_2, \dots, y_m]^T, \quad
X = \{x_{ij}\} \ (\text{an } m \times n \text{ matrix}), \quad
\mathbf{a} = [a_1, a_2, \dots, a_n]^T

then

\mathbf{y} = X \mathbf{a}
If the number of observations is equal to the number of unknown parameters (m = n), then X is a square matrix. If its inverse exists, the unknown parameters can be estimated directly according to:

\mathbf{a} = X^{-1} \mathbf{y}
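As a quick illustration of this square case, the following is a minimal sketch assuming NumPy is available; the observation values and coefficients are invented purely for illustration.

import numpy as np

# Three observations (m = 3) of three independent variables (n = 3);
# the numbers are hypothetical, chosen only to show the shapes involved.
X = np.array([[1.0, 2.0, 0.5],
              [0.3, 1.5, 2.2],
              [2.1, 0.7, 1.1]])

a_true = np.array([1.0, -2.0, 0.5])   # coefficients used to generate y
y = X @ a_true                        # dependent variable, y = X a

# Since m = n and X is invertible, a = X^{-1} y.
# np.linalg.solve applies this relationship without forming the inverse explicitly.
a_est = np.linalg.solve(X, y)
print(a_est)                          # recovers [ 1.  -2.   0.5]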

LEAST-SQUARES SOLUTION

However, it is usually the case that there are more observations than unknown parameters. In this case X is no longer a square matrix and X^{-1} therefore does not exist. Moreover, with more equations than unknowns, there is in general no \mathbf{a} that satisfies every equation exactly. Thus, we have to determine the 'best' estimate, and one way is to find the \mathbf{a} such that the sum of the squared differences between the observed dependent variable and its estimates is a minimum, namely,

\min_{\mathbf{a}} J, \qquad J = \sum_{i=1}^{m} (y_i - \hat{y}_i)^2
which reads: find a set of the unknowns, \mathbf{a}, such that the sum of squared differences between the estimates obtained using \mathbf{a}, that is,

\hat{y}_i = a_1 x_{i1} + a_2 x_{i2} + \dots + a_n x_{in}, \quad \text{i.e.} \quad \hat{\mathbf{y}} = X \mathbf{a}

and the corresponding observed values, y_i, is a minimum. This is therefore an optimisation problem where the objective is to find the \mathbf{a} that makes the sum of squared errors between observed and estimated values a minimum. Solution of this problem yields the least-squares estimates of \mathbf{a} as:

\hat{\mathbf{a}} = (X^T X)^{-1} X^T \mathbf{y}
This can be verified easily as the following will show. Premultiplying both sides of \mathbf{y} = X \mathbf{a} by X^T gives

X^T \mathbf{y} = X^T X \mathbf{a}

X^T X is a square (n \times n) matrix and, provided its inverse exists, premultiplying both sides by (X^T X)^{-1} yields

\hat{\mathbf{a}} = (X^T X)^{-1} X^T \mathbf{y}
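The following is a minimal numerical sketch of the overdetermined case, again assuming NumPy and using invented data. It forms the estimate from the expression above and checks it against NumPy's general-purpose least-squares routine.

import numpy as np

rng = np.random.default_rng(0)

# m = 20 observations of n = 3 independent variables (hypothetical data).
m, n = 20, 3
X = rng.normal(size=(m, n))
a_true = np.array([1.0, -2.0, 0.5])
y = X @ a_true + 0.05 * rng.normal(size=m)    # add a little measurement noise

# Batch least-squares estimate: a_hat = (X^T X)^{-1} X^T y.
a_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# The same estimate via NumPy's dedicated solver (numerically preferable
# to forming the inverse explicitly).
a_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(a_hat)      # close to [ 1.  -2.   0.5]
print(a_lstsq)    # agrees with a_hat to within rounding error

The explicit normal-equations form is shown here because it matches the expression derived above; in practice, solvers based on QR or singular value decomposition are better conditioned numerically.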


Copyright M.T. Tham (1996-1999)
Please email errors, comments or suggestions to ming.tham@ncl.ac.uk.