In this chapter, we consider the problem of solving several simultaneous equations in several variables. Most of our attention will be paid to the case where the number of equations and the number of unknown variables are the same.

Gaussian Elimination

We begin by describing the simplest form of Gaussian elimination. In fact, it is so simple that it is not guaranteed to proceed to completion, let alone find an accurate solution. The modifications that will be needed to improve the "naive" method will be introduced beginning in the next section.

Three useful operations can be applied to a linear system of equations that yield an equivalent system, meaning one that has the same solutions. These operations are as follows:

  1. Swap one equation for another.
  2. Add or subtract a multiple of one equation from another.
  3. Multiply an equation by a nonzero constant.
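As a sketch of why these operations are safe, one can verify numerically that applying them to a system leaves its solution unchanged. The matrix values below are illustrative assumptions, not from the text:

```python
import numpy as np

# A hypothetical 3x3 system; the coefficient values are illustrative only.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [1.0, -1.0, 2.0]])
b = np.array([3.0, 12.0, 5.0])
x = np.linalg.solve(A, b)

A2, b2 = A.copy(), b.copy()

# Operation 2: subtract a multiple of equation 1 from equation 2.
m = A2[1, 0] / A2[0, 0]
A2[1] -= m * A2[0]
b2[1] -= m * b2[0]

# Operation 3: multiply equation 3 by a nonzero constant.
A2[2] *= 5.0
b2[2] *= 5.0

# Operation 1: swap equations 1 and 2.
A2[[0, 1]] = A2[[1, 0]]
b2[[0, 1]] = b2[[1, 0]]

# The transformed system has the same solution.
assert np.allclose(np.linalg.solve(A2, b2), x)
```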

Example of GE

Apply Gaussian elimination in tableau form for the system of three equations in three unknowns:

This is written in tableau form as

Two steps are needed to eliminate column 1:

and one more step to eliminate column 2:

Returning to the equations

we can solve for the variables

and solve for the variables in that order. This latter part is called back substitution, or back solving, because after elimination the equations are readily solved from the bottom up. The solution is .

General Form

For a system of equations $Ax = b$, we use the following update formulas on the tableau: for each column $j = 1, \dots, n-1$ and each row $i = j+1, \dots, n$ below the pivot,

$$m_{ij} = \frac{a_{ij}}{a_{jj}}, \qquad a_{ik} \leftarrow a_{ik} - m_{ij}\,a_{jk} \;\; (k = j, \dots, n), \qquad b_i \leftarrow b_i - m_{ij}\,b_j.$$

After the elimination is completed, the tableau is upper triangular:

After elimination, the equations are readily solved from the bottom up using back solving.
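The elimination and back-solving steps above can be sketched in code; the example system is an assumption for illustration:

```python
import numpy as np

def naive_gauss(A, b):
    """Naive Gaussian elimination with back substitution (no pivoting).

    Assumes every pivot encountered is nonzero."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Elimination: zero out the entries below each pivot.
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]        # multiplier
            A[i, j:] -= m * A[j, j:]
            b[i] -= m * b[j]
    # Back substitution: solve from the bottom up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Illustrative system (values are an assumption, not the text's example).
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 1.0, -2.0],
              [-3.0, 1.0, 1.0]])
b = np.array([3.0, 3.0, -6.0])
x = naive_gauss(A, b)
assert np.allclose(A @ x, b)
```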

LU Factorization

Carrying the idea of tableau form one step farther brings us to the matrix form of a system of equations. Matrix form will save time in the long run by simplifying the algorithms and their analysis.

The LU factorization is a matrix representation of Gaussian elimination. It consists of writing the coefficient matrix $A$ as a product of a lower triangular matrix $L$ and an upper triangular matrix $U$. The factorization is the Gaussian elimination version of a long tradition in science and engineering—breaking down a complicated object into simpler parts.

  • LU matrix An $n \times n$ matrix $L$ is lower triangular if its entries satisfy $l_{ij} = 0$ for $i < j$. An $n \times n$ matrix $U$ is upper triangular if its entries satisfy $u_{ij} = 0$ for $i > j$.

Example of LU Factorization

Find the LU factorization of

This matrix is the matrix of coefficients of system (2.4). The elimination steps proceed as before:

The lower triangular matrix $L$ is formed, as in the previous example, by putting 1's on the main diagonal and the multipliers in the lower triangle, in the specific places they were used for elimination. That is,

Two Forms of LU Factorization

Doolittle Factorization

Crout Factorization

The fact that not all matrices have an LU factorization means that more work is required before we can declare the LU factorization a general Gaussian elimination algorithm. The related problem of swamping is described in the next section. Then the PA = LU factorization is introduced, which will overcome both problems.
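A minimal sketch of the Doolittle factorization, storing the multipliers in $L$ as described above (the test matrix is an illustrative assumption):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU factorization A = L U (unit diagonal on L, no pivoting)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]   # multiplier, stored in L where it was used
            L[i, j] = m
            A[i, j:] -= m * A[j, j:]
    return L, np.triu(A)

# Illustrative matrix (an assumption, not the text's system (2.4)).
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 1.0, -2.0],
              [-3.0, 1.0, 1.0]])
L, U = lu_doolittle(A)
assert np.allclose(L @ U, A)
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
```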

Sources of Error

Let $x_a$ be an approximate solution of the linear system $Ax = b$. The residual is the vector $r = b - Ax_a$. The backward error is the norm of the residual, $\|b - Ax_a\|$, and the forward error is $\|x - x_a\|$.

Denote the residual by $r = b - Ax_a$. The relative backward error of the system $Ax = b$ is defined to be
$$\frac{\|r\|}{\|b\|},$$

and the relative forward error is
$$\frac{\|x - x_a\|}{\|x\|}.$$

The error magnification factor for $Ax = b$ is the ratio of the two:
$$\text{error magnification factor} = \frac{\|x - x_a\| \,/\, \|x\|}{\|r\| \,/\, \|b\|}.$$

The condition number of a square matrix $A$, $\operatorname{cond}(A)$, is the maximum possible error magnification factor for solving $Ax = b$, over all right-hand sides $b$.

For an invertible matrix $A$, its condition number is
$$\operatorname{cond}(A) = \|A\| \cdot \|A^{-1}\|.$$
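A numerical sketch of these definitions, using an ill-conditioned $2 \times 2$ matrix chosen purely for illustration:

```python
import numpy as np

# Illustrative ill-conditioned matrix (values are assumptions, not from the text).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x_exact = np.array([1.0, 1.0])          # A @ x_exact == b by construction
x_approx = np.linalg.solve(A, b)

r = b - A @ x_approx                    # residual
rel_backward = np.linalg.norm(r) / np.linalg.norm(b)
rel_forward = np.linalg.norm(x_exact - x_approx) / np.linalg.norm(x_exact)
cond = np.linalg.cond(A)                # ||A|| * ||A^-1|| in the 2-norm

# The magnification factor never exceeds the condition number
# (guard against division by zero when the residual is exactly zero).
if rel_backward > 0:
    assert rel_forward / rel_backward <= cond * (1 + 1e-8)
assert cond > 1e3                       # this matrix is ill-conditioned
```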

Swamping

A second significant source of error in classical Gaussian elimination is much easier to fix. We demonstrate swamping with the next example.

The multipliers used in Gaussian elimination should be kept as small as possible to avoid swamping. Fortunately, there is a simple modification to naive Gaussian elimination that forces the absolute value of the multipliers to be no larger than 1. This new protocol, which involves judicious row exchanges in the tableau, is called partial pivoting.


Partial Pivoting Gaussian Elimination

Example of PPGE

Apply Gaussian elimination with partial pivoting to solve the system

This example is written in tableau form as

Under partial pivoting, we compare the candidate pivots in column 1 and choose the one of largest absolute value as the new pivot. This is achieved through an exchange of rows 1 and 3:

Before eliminating column 2, we must compare the two candidate pivots in that column. Because the lower entry is larger in absolute value, we again switch rows:

Note that all three multipliers are less than 1 in absolute value. The equations are now simple to solve. From

we find that .
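The partial pivoting protocol can be sketched as follows (the example system is an assumption, not the one in the text):

```python
import numpy as np

def gauss_pp(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for j in range(n - 1):
        # Pivot: bring the largest-magnitude entry of column j to row j.
        p = j + np.argmax(np.abs(A[j:, j]))
        if p != j:
            A[[j, p]] = A[[p, j]]
            b[[j, p]] = b[[p, j]]
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]   # |m| <= 1 by the pivoting choice
            A[i, j:] -= m * A[j, j:]
            b[i] -= m * b[j]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Illustrative system (values are assumptions).
A = np.array([[1.0, -1.0, 3.0],
              [-1.0, 0.0, -2.0],
              [2.0, 2.0, 4.0]])
b = np.array([-3.0, 1.0, 0.0])
x = gauss_pp(A, b)
assert np.allclose(A @ x, b)
```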

PA = LU Factorization

We put together everything we know about Gaussian elimination into the PA = LU factorization. This is the matrix formulation of elimination with partial pivoting. The factorization is the established workhorse for solving systems of linear equations.


Example of Finding the PA = LU Factorization of A

Example of Using PA = LU to Solve a System of Equations

Use the PA = LU factorization to solve the system $Ax = b$, where

It remains to complete the two back substitutions.

  1. Solve $Lc = Pb$ for $c$. Starting at the top, we have

  2. Solve $Ux = c$ for $x$. Starting at the bottom,

Therefore, the solution is .
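As a sketch, SciPy's `lu_factor`/`lu_solve` perform exactly these two steps: factor $PA = LU$, then solve $Lc = Pb$ followed by $Ux = c$. The system below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative system (the matrix from the text is not reproduced here).
A = np.array([[2.0, 1.0, 5.0],
              [4.0, 4.0, -4.0],
              [1.0, 3.0, 1.0]])
b = np.array([5.0, 0.0, 6.0])

lu, piv = lu_factor(A)        # PA = LU, with L and U packed into `lu`
x = lu_solve((lu, piv), b)    # solve Lc = Pb, then Ux = c
assert np.allclose(A @ x, b)
```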

Iterative Methods

The general form of an iterative method: choose a splitting $A = M - N$, where $M$ is the splitting matrix. Rewriting $Ax = b$ as a fixed-point formula:
$$Mx^{(k+1)} = Nx^{(k)} + b, \qquad \text{i.e.} \qquad x^{(k+1)} = M^{-1}Nx^{(k)} + M^{-1}b.$$

Richardson

Example of the Richardson Iterative Method

For the given system with coefficient matrix $A$, the Richardson iteration (the splitting $M = I$) is
$$x^{(k+1)} = x^{(k)} + \bigl(b - Ax^{(k)}\bigr).$$
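A minimal sketch of the Richardson iteration; the matrix is an assumption chosen so that the spectral radius of $I - A$ is below 1:

```python
import numpy as np

def richardson(A, b, x0, iters=200):
    """Richardson iteration: x_{k+1} = x_k + (b - A x_k)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = x + (b - A @ x)
    return x

# Converges when the spectral radius of I - A is below 1;
# these values are an illustrative assumption satisfying that.
A = np.array([[1.0, 0.25],
              [0.25, 1.0]])
b = np.array([1.25, 1.25])
x = richardson(A, b, np.zeros(2))
assert np.allclose(x, np.linalg.solve(A, b))
```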


Jacobi

The Jacobi method takes $M = D$, where $D$ is the diagonal matrix formed from the diagonal of $A$:
$$x^{(k+1)} = x^{(k)} + D^{-1}\bigl(b - Ax^{(k)}\bigr).$$

Example of the Jacobi Iterative Method

For the given system with coefficient matrix $A$, the Jacobi iteration is:
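A sketch of the Jacobi iteration on an assumed strictly diagonally dominant matrix:

```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    """Jacobi iteration: x_{k+1} = x_k + D^{-1}(b - A x_k), D = diag(A)."""
    d = np.diag(A)
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = x + (b - A @ x) / d
    return x

# Strictly diagonally dominant example matrix (illustrative values).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])
x = jacobi(A, b, np.zeros(2))
assert np.allclose(x, np.linalg.solve(A, b))
```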


Relation between Jacobi and Richardson method

Gauss-Seidel

Example of the Gauss-Seidel Method

For the given system with coefficient matrix $A$, the Gauss-Seidel iteration (splitting matrix $M = D + L$, the lower triangular part of $A$) is:
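A sketch of the Gauss-Seidel sweep, which uses each updated component immediately (example values are assumptions):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=100):
    """Gauss-Seidel: sweep through the unknowns, using updated values at once."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Strictly diagonally dominant example (illustrative values).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])
x = gauss_seidel(A, b, np.zeros(2))
assert np.allclose(x, np.linalg.solve(A, b))
```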


Connection between Gauss-Seidel and Jacobi method

SOR

For a general linear iterative method
$$x^{(k+1)} = Gx^{(k)} + c,$$

we can get a new iterative method by relaxation: blend the update with the current iterate using a relaxation parameter $\omega$. Namely:
$$x^{(k+1)} = (1 - \omega)\,x^{(k)} + \omega\bigl(Gx^{(k)} + c\bigr).$$

Applying this componentwise to Gauss-Seidel gives the SOR iterative method:
$$(D + \omega L)\,x^{(k+1)} = \bigl((1 - \omega)D - \omega U\bigr)x^{(k)} + \omega b,$$
where $A = L + D + U$ splits $A$ into its strictly lower triangular, diagonal, and strictly upper triangular parts.

Example of the SOR Method

For the given system with coefficient matrix $A$, the Gauss-Seidel iteration is:

SOR iterative method:
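A sketch of SOR as a relaxed Gauss-Seidel sweep (the matrix and the choice $\omega = 1.1$ are illustrative assumptions):

```python
import numpy as np

def sor(A, b, x0, omega=1.1, iters=100):
    """SOR: Gauss-Seidel sweep blended with the old iterate via omega."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]              # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs
    return x

# Symmetric positive-definite example (illustrative); SOR converges for 0 < omega < 2.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b, np.zeros(2), omega=1.1)
assert np.allclose(x, np.linalg.solve(A, b))
```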

Convergence of Iterative Methods

If the iteration matrix $G$ satisfies $\|G\| < 1$ in any induced matrix norm, then the iteration $x^{(k+1)} = Gx^{(k)} + c$ converges to the solution $x$ of $x = Gx + c$ for every initial guess; moreover,
$$\|x^{(k)} - x\| \le \|G\|^{k}\,\|x^{(0)} - x\|.$$

This norm condition is exactly what strict diagonal dominance delivers:

  1. If $\|I - A\| < 1$ in some induced norm, the Richardson iterative method is convergent.
  2. If $A$ is strictly row diagonally dominant, the Jacobi iterative method is convergent.

Convergence of Richardson iterative method.

Convergence of Jacobi iterative method.

Spectral radius method (the spectral radius is $\rho(G) = \max_i |\lambda_i|$, the largest eigenvalue magnitude of $G$)

Convergence judgment using the spectral radius: the iteration $x^{(k+1)} = Gx^{(k)} + c$ converges for every initial guess if and only if $\rho(G) < 1$.

Example of Spectral Radius Method


Here $\rho(G) > 1$, so the iteration diverges.
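A sketch of the spectral-radius check, here applied to the Jacobi iteration matrix $G = I - D^{-1}A$ of an assumed matrix:

```python
import numpy as np

def spectral_radius(G):
    """Largest eigenvalue magnitude of the iteration matrix G."""
    return np.max(np.abs(np.linalg.eigvals(G)))

# Jacobi iteration matrix G = I - D^{-1} A for an illustrative matrix.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
G = np.eye(2) - A / np.diag(A)[:, None]   # divide each row by its diagonal entry
rho = spectral_radius(G)
assert rho < 1   # Jacobi converges for this matrix
```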

Methods for Symmetric Positive-definite Matrices

  • Definition (symmetric positive-definite matrix) The matrix $A$ is symmetric if $A^T = A$; $A$ is positive-definite if $x^TAx > 0$ for all vectors $x \neq 0$.

If the matrix is symmetric, then is positive-definite if and only if all of its eigenvalues are positive.

Property

  • A symmetric positive-definite matrix is nonsingular.
  • If $A$ is an $n \times n$ symmetric positive-definite matrix and $X$ is an $n \times m$ matrix of full rank with $n \ge m$, then $X^TAX$ is also a symmetric positive-definite matrix.

Cholesky Factorization

  • Cholesky Factorization Theorem If $A$ is a symmetric positive-definite matrix, then there exists an upper triangular matrix $R$ such that $A = R^TR$.

Cholesky factorization

import numpy as np

def cholesky(A):
    """Cholesky factorization: returns upper triangular R with A = R^T R."""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    R = np.zeros((n, n))
    for k in range(n):
        if A[k, k] <= 0:
            raise ValueError("matrix is not positive-definite")
        R[k, k] = np.sqrt(A[k, k])
        u = A[k, k+1:] / R[k, k]          # scaled row to the right of the pivot
        R[k, k+1:] = u
        A[k+1:, k+1:] -= np.outer(u, u)   # rank-one update of the trailing block
    return R

Numerical Example
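Since the notes' own numbers are not reproduced here, the following is an illustrative check using NumPy's built-in `np.linalg.cholesky`, which returns the lower triangular factor $L = R^T$:

```python
import numpy as np

# Illustrative symmetric positive-definite matrix (an assumption).
A = np.array([[4.0, -2.0, 2.0],
              [-2.0, 2.0, -4.0],
              [2.0, -4.0, 11.0]])
L = np.linalg.cholesky(A)   # A = L L^T with L lower triangular
R = L.T                     # upper triangular factor, A = R^T R
assert np.allclose(R.T @ R, A)
assert np.allclose(R, np.triu(R))
```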

SOR Iterative Method of Symmetric Positive-definite Matrix

If $A$ is a symmetric positive-definite matrix, then the SOR iterative method with $0 < \omega < 2$ for solving $Ax = b$ is convergent for any initial guess.

Conjugate Gradient Method

  • A-Conjugate Let $A$ be a symmetric positive-definite matrix. For two $n$-vectors $v$ and $w$, define the $A$-inner product $(v, w)_A = v^TAw$. The vectors $v$ and $w$ are A-conjugate if $(v, w)_A = 0$.

Code of the Conjugate Gradient Method (CGM)
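A sketch of the conjugate gradient method, generating A-conjugate search directions (the test system is an illustrative assumption):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Conjugate gradient for SPD A: successive A-conjugate search directions."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                        # residual
    d = r.copy()                         # first search direction
    if max_iter is None:
        max_iter = n                     # exact termination in at most n steps
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (d @ A @ d)    # step length along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d             # next direction, A-conjugate to d
        r = r_new
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```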