Eigenvalues and Singular Values

Guess of Eigenvalues


Gershgorin theorem: every eigenvalue of an $n \times n$ matrix $A$ lies in at least one of the Gershgorin discs $D_i = \{ z \in \mathbb{C} : |z - a_{ii}| \le R_i \}$, where $R_i = \sum_{j \ne i} |a_{ij}|$ is the sum of the magnitudes of the off-diagonal entries of row $i$.

Gershgorin theorem 2: if the union of $k$ of the discs is disjoint from the remaining $n - k$ discs, then exactly $k$ eigenvalues (counted with multiplicity) lie in that union.

Example of guessing eigenvalue locations using the Gershgorin theorem


Example
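As a minimal illustration (the matrix below is an arbitrary choice, not a prescribed example), the discs can be computed directly in NumPy and compared against the true eigenvalues:

```python
import numpy as np

# Arbitrary 3x3 example matrix, chosen for illustration.
A = np.array([[ 8.0, 1.0,  0.0],
              [ 1.0, 4.0,  0.1],
              [ 0.0, 0.1, -1.0]])

# Disc i is centered at a_ii with radius = sum of |a_ij| over j != i.
for i in range(A.shape[0]):
    center = A[i, i]
    radius = np.sum(np.abs(A[i])) - np.abs(A[i, i])
    print(f"disc {i}: center {center:5.1f}, radius {radius:.1f}")

# Every eigenvalue lies in at least one of the discs above.
print("eigenvalues:", np.linalg.eigvals(A))
```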

Power Iteration

The motivation behind Power Iteration is that multiplication by a matrix tends to move vectors toward the dominant eigenvector direction.

Power Iteration is essentially a fixed-point iteration with normalization at each step. Like FPI, it converges linearly, at rate $S = |\lambda_2| / |\lambda_1|$.

Let $A$ be an $m \times m$ matrix with real eigenvalues $\lambda_1, \dots, \lambda_m$ satisfying $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_m|$. Assume that the eigenvectors of $A$ span $\mathbb{R}^m$.

For almost every initial vector, Power Iteration converges linearly to an eigenvector associated to $\lambda_1$, with convergence rate constant $S = |\lambda_2| / |\lambda_1|$.
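To see where this rate constant comes from, expand the initial vector in the eigenbasis (a one-line sketch of the standard argument): writing $x_0 = c_1 v_1 + \cdots + c_m v_m$ with $A v_i = \lambda_i v_i$,

$$A^k x_0 = \lambda_1^k \left( c_1 v_1 + \sum_{i=2}^{m} c_i \left( \frac{\lambda_i}{\lambda_1} \right)^{k} v_i \right),$$

and since $|\lambda_i / \lambda_1| \le S < 1$ for $i \ge 2$, the components away from $v_1$ shrink by roughly a factor of $S$ per step, provided $c_1 \ne 0$ (which holds for almost every $x_0$).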

Example of the Power Iteration method

Suppose we have a matrix $A$ whose maximum-magnitude eigenvalue we want to find.

First, choose an initial guess vector $x_0$, say any nonzero vector.

The Power Iteration method:

  1. Compute $y_j = A x_j$.
  2. Compute the eigenvalue estimate $\lambda_j = x_j^T A x_j$ (the Rayleigh quotient of the unit vector $x_j$).
  3. Normalize: $x_{j+1} = y_j / \lVert y_j \rVert$.

Repeat these steps until some stopping rule is satisfied (for example, $|\lambda_{j+1} - \lambda_j| < \text{tol}$).

Choose an initial guess vector $x_0$, then carry out Step 1 (multiply by $A$), Step 2 (estimate the eigenvalue), and Step 3 (normalization).

Repeat the three steps, each pass producing a refined estimate $\lambda_j$ and a new unit vector $x_{j+1}$.

Repeat the steps above until the stopping rule is met. Finally, the last $\lambda_j$ approximates the maximum-magnitude eigenvalue, and the last $x_{j+1}$ approximates its corresponding eigenvector.
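A minimal NumPy sketch of this loop (the matrix, starting vector, and tolerance are illustrative choices, not part of the original example):

```python
import numpy as np

def power_iteration(A, x0, tol=1e-10, max_iter=1000):
    """Repeat: multiply by A, estimate the eigenvalue, normalize."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                       # step 1: y_j = A x_j
        lam_new = x @ y                 # step 2: lambda_j = x_j^T A x_j
        x = y / np.linalg.norm(y)       # step 3: normalize
        if abs(lam_new - lam) < tol:    # stopping rule
            break
        lam = lam_new
    return lam_new, x

# Matrix and starting vector chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A, np.array([1.0, 0.0]))
print(lam, v)   # dominant eigenvalue (5 + sqrt(5))/2 and its eigenvector
```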

Other Kinds of Power Iteration

Inverse Power Iteration applies Power Iteration to $(A - sI)^{-1}$ for a chosen shift $s$. The eigenvalues of $(A - sI)^{-1}$ are $(\lambda_i - s)^{-1}$, so the dominant one corresponds to the eigenvalue $\lambda_i$ of $A$ closest to $s$, and the iteration converges to that eigenpair. In practice one solves the linear system $(A - sI)\,y_j = x_j$ rather than forming the inverse.

Example of the shifted inverse power method

Suppose we want to find the largest eigenvalue of a matrix $A$ and its corresponding eigenvector.

The concrete steps are as follows:

  1. Initialize a starting vector $x_0$, which can be any nonzero vector.

  2. At each iteration $k$, form the shifted matrix $A - sI$, where the shift $s$ is usually chosen close to the eigenvalue being sought.

  3. Solve the linear system $(A - sI)\,y_k = x_{k-1}$ to obtain the new iterate $y_k$.

  4. Normalize the iterate: $x_k = y_k / \lVert y_k \rVert$.

  5. Check whether the iteration has converged; if a convergence criterion is satisfied (for example, $\lVert x_k - x_{k-1} \rVert < \text{tol}$), stop; otherwise return to step 2.

Below we carry out the iterative computation for a specific shift $s$.

First, choose an initial vector $x_0$ and iterate as above.

Note that the iterate must be normalized during the iteration to keep the computation well scaled.
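A minimal NumPy sketch of these five steps with a fixed shift (the matrix, starting vector, and shift value are illustrative assumptions):

```python
import numpy as np

def shifted_inverse_power(A, x0, s, tol=1e-10, max_iter=1000):
    """Power Iteration applied to (A - sI)^{-1}: converges to the
    eigenpair of A whose eigenvalue is closest to the shift s."""
    n = A.shape[0]
    B = A - s * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    lam = s
    for _ in range(max_iter):
        y = np.linalg.solve(B, x)   # solve (A - sI) y = x; never form the inverse
        x = y / np.linalg.norm(y)   # normalize the iterate at every step
        lam_new = x @ A @ x         # Rayleigh quotient: eigenvalue estimate for A
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

# Illustrative values only: a small symmetric matrix and a shift near 3.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = shifted_inverse_power(A, np.array([1.0, 1.0]), s=3.0)
print(lam, v)   # converges to the eigenvalue of A closest to 3
```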

Rayleigh Power Iteration

The Rayleigh quotient can be used in conjunction with Inverse Power Iteration. We know that Inverse Power Iteration converges to the eigenvector associated to the eigenvalue with the smallest distance to the shift $s$, and that convergence is fast if this distance is small. If at any step along the way an approximate eigenvalue were known, it could be used as the shift $s$, to speed convergence.

Procedure of Rayleigh Power Iteration

Given an initial vector $x_0$, repeat for $j = 0, 1, 2, \dots$: compute the Rayleigh quotient $s_j = \dfrac{x_j^T A x_j}{x_j^T x_j}$, solve $(A - s_j I)\,y_j = x_j$, and normalize $x_{j+1} = y_j / \lVert y_j \rVert$.

If $A$ is a real symmetric matrix, then Rayleigh Power Iteration's convergence is locally cubic: roughly speaking, the number of correct digits triples at each step near the solution.
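A NumPy sketch of this procedure (test matrix and starting vector are illustrative choices):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    """Inverse power iteration whose shift is refreshed at every step
    with the Rayleigh quotient of the current iterate."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    s = x @ A @ x                 # initial shift: Rayleigh quotient of x0
    for _ in range(max_iter):
        # If s landed exactly on an eigenvalue the solve would fail; in
        # floating point this essentially never happens before convergence.
        y = np.linalg.solve(A - s * np.eye(n), x)
        x = y / np.linalg.norm(y)
        s_new = x @ A @ x         # refreshed shift
        if abs(s_new - s) < tol:
            return s_new, x
        s = s_new
    return s, x

# Illustrative example: convergence typically takes only a handful of steps.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(rayleigh_quotient_iteration(A, np.array([1.0, 0.0])))
```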

QR Decomposition

The goal of this section is to develop methods for finding all eigenvalues at once. We begin with a method that works for symmetric matrices, and later supplement it to work in general. Symmetric matrices are easiest to handle because their eigenvalues are real and their unit eigenvectors form an orthonormal basis of $\mathbb{R}^m$. This motivates applying Power Iteration to $m$ vectors in parallel, where we actively work at keeping the vectors orthogonal to one another.

Reduced QR Decomposition

If $A$ is a matrix whose column vectors are linearly independent, then we can apply Gram–Schmidt orthogonalization.

Gram–Schmidt Orthogonalization

Let $a_1, \dots, a_n$ be the columns of $A$. Define

$$y_1 = a_1, \qquad y_j = a_j - \sum_{i=1}^{j-1} q_i \left( q_i^T a_j \right), \qquad q_j = \frac{y_j}{\lVert y_j \rVert_2}.$$

Then rewrite it in matrix form: $A = \hat{Q}\hat{R}$, where $\hat{Q} = [\, q_1 \mid \cdots \mid q_n \,]$ has orthonormal columns and $\hat{R}$ is upper triangular with entries $r_{ij} = q_i^T a_j$ for $i < j$ and $r_{jj} = \lVert y_j \rVert_2$.

Actually, to avoid swamping, as discussed in Chapter 3 (Systems of Equations), one alternative is to normalize the orthogonal vectors. See the example below.

Example of QR factorization
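A sketch of the reduced QR factorization by classical Gram–Schmidt with normalization (the test matrix is an arbitrary full-column-rank example):

```python
import numpy as np

def reduced_qr_cgs(A):
    """Reduced QR by classical Gram-Schmidt with normalization:
    A (m x n, linearly independent columns) = Q (m x n) @ R (n x n)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        y = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # r_ij = q_i^T a_j
            y -= R[i, j] * Q[:, i]        # remove the component along q_i
        R[j, j] = np.linalg.norm(y)       # r_jj = ||y_j||_2
        Q[:, j] = y / R[j, j]             # q_j = y_j / ||y_j||_2
    return Q, R

# Matrix chosen only to illustrate the call.
A = np.array([[1.0, -4.0],
              [2.0,  3.0],
              [2.0,  2.0]])
Q, R = reduced_qr_cgs(A)
print(np.allclose(Q @ R, A))   # True: the factorization reproduces A
```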

Full QR Decomposition

A full QR factorization writes an $m \times n$ matrix $A$ as $A = QR$, where $Q$ is a square $m \times m$ orthogonal matrix and $R$ is an $m \times n$ upper triangular matrix.

Difference between the two QR factorizations

The extra rows and columns can be added by appending new linearly independent unit vectors (made orthonormal to the existing columns) to $\hat{Q}$, and appending rows of zeros (np.zeros) to $\hat{R}$.

Example of Full QR Factorization
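To see the difference concretely, NumPy's np.linalg.qr exposes both factorizations through its mode argument; the matrix below is an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, -4.0],
              [2.0,  3.0],
              [2.0,  2.0]])

Q_hat, R_hat = np.linalg.qr(A, mode='reduced')   # Q_hat: 3x2, R_hat: 2x2
Q, R = np.linalg.qr(A, mode='complete')          # Q: 3x3,  R: 3x2

print(Q_hat.shape, R_hat.shape)   # (3, 2) (2, 2)
print(Q.shape, R.shape)           # (3, 3) (3, 2)
print(np.allclose(R[2, :], 0.0))  # True: the appended row is all zeros
```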

Modified Gram-Schmidt Orthogonalization

The Classical Gram–Schmidt orthogonalization may encounter problems in real-world applications, such as:

  1. It is not suitable for sparse matrices;
  2. It produces large errors on ill-conditioned matrices in finite-precision arithmetic;
  3. The whole matrix must be kept in memory during the loop, which leads to high memory overhead.

See the example below.

Example When Classical Gram–Schmidt Orthogonalization Fails

The case of an ill-conditioned matrix:
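A classic demonstration of point 2 uses a Läuchli-type matrix whose columns are nearly parallel (chosen here as an illustrative stand-in); classical Gram–Schmidt then loses orthogonality almost completely:

```python
import numpy as np

eps = 1e-8   # so small that 1 + eps**2 rounds to 1 in double precision
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

# Classical Gram-Schmidt (projections taken against the ORIGINAL columns).
m, n = A.shape
Q = np.zeros((m, n))
for j in range(n):
    y = A[:, j].copy()
    for i in range(j):
        y -= (Q[:, i] @ A[:, j]) * Q[:, i]
    Q[:, j] = y / np.linalg.norm(y)

# Q^T Q should equal the identity; instead the error is of order 1.
print(np.linalg.norm(Q.T @ Q - np.eye(n)))   # roughly 0.5, not ~1e-16
```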

The main improvement of MGS is to select one vector as the first datum and immediately project all of the remaining vectors into the orthogonal complement of that datum, so that the datum is perpendicular to every remaining vector. After that, we simply repeat the same procedure on the remaining vectors inside that orthogonal complement, until all of the vectors are orthogonal to each other.

MGS Procedure
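A sketch of MGS following this description; the only change from the classical version is that projections are taken against the partially orthogonalized columns:

```python
import numpy as np

def reduced_qr_mgs(A):
    """Modified Gram-Schmidt: once q_j is fixed, its component is removed
    from ALL remaining columns immediately, so later projections act on
    already-orthogonalized vectors rather than on the original columns."""
    V = A.astype(float).copy()
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(V[:, j])
        Q[:, j] = V[:, j] / R[j, j]          # fix the j-th datum
        for k in range(j + 1, n):
            R[j, k] = Q[:, j] @ V[:, k]      # coefficient against the datum
            V[:, k] -= R[j, k] * Q[:, j]     # project into its orthogonal space
    return Q, R

# On the ill-conditioned matrix from the previous example, MGS keeps
# ||Q^T Q - I|| orders of magnitude smaller than classical Gram-Schmidt.
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])
Q, R = reduced_qr_mgs(A)
print(np.linalg.norm(Q.T @ Q - np.eye(3)))   # ~1e-8 rather than ~0.5
```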

Householder Transformation

Since the transformation matrix $H = I - 2\,\dfrac{vv^T}{v^Tv}$ in the Householder transformation is a symmetric orthogonal matrix, we can replace $H^{-1}$ with $H$ itself. See the example below.

Householder transformation in QR factorization

Use the Householder transformation to apply QR factorization.
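A sketch of Householder QR in NumPy (the sign convention and test matrix are illustrative choices):

```python
import numpy as np

def householder_qr(A):
    """QR factorization by Householder reflectors. Each reflector
    H = I - 2 v v^T (with ||v|| = 1) is symmetric and orthogonal, so
    H^{-1} = H; H is chosen to zero one column below the diagonal."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(min(m - 1, n)):
        x = R[j:, j]
        v = x.copy()
        sign = 1.0 if x[0] >= 0 else -1.0
        v[0] += sign * np.linalg.norm(x)   # reflect x onto a multiple of e_1
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:                  # column is already zeroed out
            continue
        v /= norm_v
        R[j:, :] -= 2.0 * np.outer(v, v @ R[j:, :])   # R <- H_j R
        Q[:, j:] -= 2.0 * np.outer(Q[:, j:] @ v, v)   # Q <- Q H_j
    return Q, R

# Illustrative matrix; A = Q R with Q orthogonal and R upper triangular.
A = np.array([[1.0, -4.0],
              [2.0,  3.0],
              [2.0,  2.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))
```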

QR Algorithm

The way the QR algorithm finds the eigenvalues of a matrix $A$ is to locate a similar matrix whose eigenvalues can be read off directly. An example of the latter is real Schur form.

Real Schur form: A matrix has real Schur form if it is upper triangular, except possibly for $2 \times 2$ blocks on the main diagonal (such blocks carry complex conjugate pairs of eigenvalues). For example, a matrix of the form

$$\begin{bmatrix} * & * & * & * \\ * & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \end{bmatrix}$$

has real Schur form.


Theorems about the QR algorithm

If $A$ is a symmetric $m \times m$ matrix whose eigenvalues have distinct absolute values, the unshifted QR algorithm converges linearly to a diagonal matrix containing the eigenvalues of $A$. The key observation is that $A_{k+1} = R_k Q_k = Q_k^T A_k Q_k$, so every iterate is orthogonally similar to $A$ and the eigenvalues are preserved.

Let's consider a matrix $A$.

To find the eigenvalues of $A$ using the QR algorithm, we follow these steps:

  1. Start by initializing $A_0$ as the given matrix $A$.

  2. Compute the QR decomposition of $A_k$. Let $Q_k$ and $R_k$ be the resulting orthogonal and upper triangular matrices, respectively, so that $A_k = Q_k R_k$.

  3. Update $A_{k+1} = R_k Q_k$.

  4. Repeat steps 2 and 3 until convergence or a desired number of iterations. Let $A_k$ be the matrix at the $k$-th iteration.

  5. The eigenvalues of $A$ are approximated by the diagonal elements of the final upper triangular matrix $A_k$.

Let's go through the steps for our example:

Step 1: Set $A_0 = A$.

Step 2: Compute the QR decomposition $A_0 = Q_0 R_0$.

Step 3: Update $A_1 = R_0 Q_0$.

Step 4: Repeat steps 2 and 3 until convergence or a desired number of iterations.

Step 5: After a few iterations, the off-diagonal entries become negligible, and the eigenvalues of $A$ can be read off the diagonal of the final (approximately) upper triangular matrix $A_k$. These are the eigenvalues of the given matrix computed by the QR algorithm.
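A sketch of the unshifted QR algorithm in NumPy; the symmetric test matrix is an arbitrary choice:

```python
import numpy as np

def unshifted_qr_algorithm(A, iters=100):
    """Unshifted QR algorithm: A_{k+1} = R_k Q_k where A_k = Q_k R_k.
    Since A_{k+1} = Q_k^T A_k Q_k, every iterate is similar to A, so the
    eigenvalues are preserved while A_k drifts toward (real Schur)
    triangular form."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)   # step 2: factor A_k = Q_k R_k
        Ak = R @ Q                # step 3: recombine in reverse order
    return Ak

# Symmetric matrix chosen for illustration; its iterates converge to a
# diagonal matrix carrying the eigenvalues.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
Ak = unshifted_qr_algorithm(A)
print(np.round(np.diag(Ak), 6))           # approximate eigenvalues
print(np.round(np.linalg.eigvals(A), 6))  # cross-check with LAPACK
```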