Solving Nonlinear Equations

  • Fixed-point iteration and Newton's method: the relationship between the two, convergence, error, stability
    • Be able to design a fixed-point iteration
    • Generalizing Newton's method: its relationship with the secant method
    • When it converges
  • Newton's method for systems of equations: know the basic idea
  • Bisection method: not ruled out

Fixed-point Iteration

  • fixed point: the real number r is a fixed point of the function g if g(r) = r

def fixed_point_iteration(g, x0, tol=1e-6, maxiter=100):
    """Iterate x_{i+1} = g(x_i) until successive iterates agree within tol."""
    x = x0
    for _ in range(maxiter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # last iterate if maxiter is reached without convergence
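
For example, a minimal usage sketch (my own example, assuming the function above is in scope): the equation x = cos(x) has a unique fixed point r ≈ 0.7391, and |g'(r)| = |sin r| < 1, so FPI converges.

import math

# g(x) = cos(x); FPI converges since |g'(r)| = |sin(r)| < 1
print(fixed_point_iteration(math.cos, x0=1.0))   # ≈ 0.739085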

The FPI function g is not unique.

For a given equation f(x) = 0, there exist different FPI functions g, and different choices lead to different convergence speeds.
  • Linear convergence: Let e_i = |x_i - r| denote the error at step i of an iterative method. If

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i} = S < 1,$$

the method is said to obey linear convergence with rate S.

Assume that g is continuously differentiable, that g(r) = r, and that S = |g'(r)| < 1. Then Fixed-Point Iteration converges linearly with rate S to the fixed point r for initial guesses sufficiently close to r.

Convergence Types

  • Locally convergent: An iterative method is called locally convergent to r if the method converges to r for initial guesses sufficiently close to r.

  • Globally convergent: The method converges to r for any initial guess x_0.

The Bisection Method is guaranteed to converge linearly. Fixed-Point Iteration is only locally convergent, and when it converges it is linearly convergent. Both methods require one function evaluation per step. Bisection cuts the uncertainty by 1/2 at each step, compared with approximately S for FPI. Therefore, Fixed-Point Iteration may be faster or slower than bisection, depending on whether S is smaller or larger than 1/2. Newton's Method is a particularly refined version of FPI, in which the iteration function is designed so that S = |g'(r)| is zero.

Cases when FPI works

  1. Suppose the function g is continuous on [a, b].
    1. If g([a, b]) ⊆ [a, b], then g has a fixed point in [a, b].
    2. Based on 1, if in addition |g'(x)| ≤ L < 1 for all x in [a, b], then
      • there exists exactly one (global) fixed point r in [a, b];
      • the error satisfies |x_k - r| ≤ L^k |x_0 - r| and |x_k - r| ≤ (L^k / (1 - L)) |x_1 - x_0|.

Local Convergence Theorem: Let r be the fixed point. If g' is continuous in a neighborhood of r and |g'(r)| < 1, then the FPI is locally convergent to r.



Newton's Method

$$\begin{aligned} x_0 &= \text{initial guess} \\ x_{i+1} &= x_i - \frac{f(x_i)}{f'(x_i)} \quad \text{for } i = 0, 1, 2, \ldots \end{aligned}$$
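
A minimal Python sketch of this iteration (the helper name newton, the stopping rule, and the test function are my own choices, not from the original notes):

def newton(f, df, x0, tol=1e-10, maxiter=50):
    """Newton's Method: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / df(x)          # assumes f'(x) != 0 near the root
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the root of x**3 + x - 1 is approximately 0.6823278
print(newton(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, x0=1.0))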

If f is a monotonically increasing convex function with a root r, then r is the only root, and for any initial guess x_0, Newton's method converges to r.

Convergence Speed

Based on Rate of convergence - Wikipedia, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence (x_k) that converges to L is said to have order of convergence q ≥ 1 and rate of convergence μ if

$$\lim_{k \to \infty} \frac{|x_{k+1} - L|}{|x_k - L|^q} = \mu.$$

The rate of convergence is also called the asymptotic error constant.

  • q = 1 is called linear convergence if μ ∈ (0, 1), and the sequence is said to converge Q-linearly to L.
  • q = 2 is called quadratic convergence.
  • q = 3 is called cubic convergence.
  • Etc.

In particular, if the FPI function g has fixed point r with g'(r) = g''(r) = ⋯ = g^(p-1)(r) = 0 and g^(p)(r) ≠ 0, then the iteration has order-p convergence and is locally convergent. Applying this to Newton's Method, which is an FPI with g(x) = x - f(x)/f'(x), we have:

Let f be twice continuously differentiable and f(r) = 0. If f'(r) ≠ 0, then Newton's Method is locally and quadratically convergent to r. The error e_i at step i satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i^2} = \frac{|f''(r)|}{2|f'(r)|}.$$

If f is (m + 1)-times continuously differentiable on [a, b], which contains a root r of multiplicity m > 1, then Newton's Method converges locally and linearly to r, and the error satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i} = S = \frac{m - 1}{m}.$$

If f is (m + 1)-times continuously differentiable on [a, b], which contains a root r of multiplicity m > 1, and m is known, then Modified Newton's Method

$$x_{i+1} = x_i - \frac{m f(x_i)}{f'(x_i)}$$

converges locally and quadratically to r.


Examples of error approximation

2

If f is … and satisfies … , … , analyze the convergence and the error of the standard Newton iteration.

Simplifying, we obtain:

That is:


Newton's Method for Systems of Nonlinear Equations
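
The basic idea: at each step, solve the linear system J_F(x) s = -F(x) for the Newton step s, then update x <- x + s. A minimal sketch, assuming numpy; the 2x2 test system is my own illustrative choice:

import numpy as np

def newton_system(F, J, x0, tol=1e-10, maxiter=50):
    """Multivariate Newton: solve J(x) s = -F(x), then update x <- x + s."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        s = np.linalg.solve(J(x), -F(x))   # one linear solve per step
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x

# Illustrative system: u^2 + v^2 = 1, u - v = 0 (root near (0.7071, 0.7071))
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton_system(F, J, [1.0, 0.0]))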

Direct Methods for Linear Systems

  • Gaussian elimination
    • Its connection with LU factorization, and what it is used for
  • LU (triangular) factorization
    • Computing the LU factorization via Gaussian elimination
    • Uses of the LU factorization: determinants
    • The concept of partial pivoting
  • Norms: included in one problem, not a major focus


Transclude of Chapter3--System-of-Equation#^g0ff5n

Transclude of Chapter3--System-of-Equation#pa--lu-factorization
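
A minimal sketch of LU factorization by Gaussian elimination without pivoting (the helper name and the 3x3 test matrix are my own; it assumes no zero pivot is encountered). The determinant then falls out as the product of U's diagonal, since det(L) = 1:

import numpy as np

def lu_no_pivot(A):
    """LU factorization via Gaussian elimination (no pivoting)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]        # multiplier; assumes U[k, k] != 0
            L[i, k] = m
            U[i, k:] -= m * U[k, k:]
    return L, U

A = [[2.0, 1.0, 5.0], [4.0, 4.0, -4.0], [1.0, 3.0, 1.0]]
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))        # True
print(np.prod(np.diag(U)))          # det(A) = 64.0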

Iterative Methods for Linear Systems

  • Analyze properties: whether an iteration converges
  • Design an iteration and prove its convergence
  • Every iterative method is important; one will be picked at random for examination

The general form of these iterative methods comes from a splitting A = M - N of the coefficient matrix:

$$M x^{(k+1)} = N x^{(k)} + b,$$

where M is the splitting matrix. Rewriting it as a fixed-point formula:

$$x^{(k+1)} = M^{-1} N x^{(k)} + M^{-1} b.$$

Writing A = D - L - U (D diagonal, -L strictly lower triangular, -U strictly upper triangular), the standard choices of M are:

  • Richardson: M = I; sufficient condition: A is strictly row/column diagonally dominant with unit diagonal entries
  • Jacobi: M = D; sufficient condition: strict row diagonal dominance
  • Gauss–Seidel: M = D - L; sufficient condition: strict row diagonal dominance
  • SOR: M = (1/ω) D - L, where ω is the relaxation parameter

The conditions above are only sufficient for convergence; the necessary and sufficient condition is that the spectral radius of the iteration matrix M⁻¹N is less than 1.
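
A minimal sketch of the Jacobi iteration together with the spectral-radius check (splitting convention A = D - L - U as above; the strictly diagonally dominant 3x3 test system is my own illustrative choice):

import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    """Jacobi iteration: D x_{k+1} = (L + U) x_k + b."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))
    N = D - A                                # N = L + U in the splitting A = D - L - U
    T = np.linalg.solve(D, N)                # iteration matrix D^{-1}(L + U)
    print("spectral radius:", max(abs(np.linalg.eigvals(T))))   # < 1 iff convergent
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        x_new = np.linalg.solve(D, N @ x + b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[3.0, 1.0, -1.0], [2.0, -5.0, 2.0], [1.0, 6.0, 8.0]]   # strictly row diagonally dominant
b = [4.0, -8.0, 0.0]
print(jacobi(A, b))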

Methods for Symmetric Positive Definite Linear Systems

  • Definition and properties of symmetric positive definite matrices
  • Cholesky factorization
  • SOR iteration for symmetric positive definite matrices

Transclude of Chapter3--System-of-Equation#methods-for-symmetric-positive-definite-matrices


Another form of the Cholesky factorization
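
A minimal sketch of the basic Cholesky factorization A = RᵀR with R upper triangular; the helper name and the 3x3 symmetric positive definite test matrix are my own choices:

import numpy as np

def cholesky(A):
    """Return upper-triangular R with A = R.T @ R (A symmetric positive definite)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    R = np.zeros((n, n))
    for i in range(n):
        R[i, i] = np.sqrt(A[i, i] - R[:i, i] @ R[:i, i])
        for j in range(i + 1, n):
            R[i, j] = (A[i, j] - R[:i, i] @ R[:i, j]) / R[i, i]
    return R

A = [[4.0, -2.0, 2.0], [-2.0, 2.0, -4.0], [2.0, -4.0, 11.0]]
R = cholesky(A)
print(R)
print(np.allclose(R.T @ R, A))    # True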

Numerical Methods for Matrix Eigenvalues

  • Eigenvalue estimation: awareness is enough; the theory is not examined in depth
  • Power method: must be mastered, together with its variants
  • QR algorithm: required

Repeat the steps above until the iteration stops. Finally, the dominant (largest-magnitude) eigenvalue is approximated by the last eigenvalue estimate, and the corresponding eigenvector by the last normalized iterate.
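
A minimal sketch of the power method with normalization and a Rayleigh-quotient eigenvalue estimate (the helper name and the 2x2 test matrix are my own):

import numpy as np

def power_iteration(A, x0, tol=1e-10, maxiter=100):
    """Power method: returns (dominant eigenvalue estimate, unit eigenvector)."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(maxiter):
        y = A @ x
        x = y / np.linalg.norm(y)       # normalize at every step
        lam_new = x @ A @ x             # Rayleigh quotient (x has unit norm)
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x

A = [[4.0, 1.0], [2.0, 3.0]]            # eigenvalues 5 and 2
print(power_iteration(A, [1.0, 0.0]))   # dominant eigenvalue ≈ 5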


Other Kinds of Power Iteration



example of the shifted inverse power method

Suppose we want to find the largest eigenvalue of the matrix and its corresponding eigenvector.

The steps are as follows:

  1. Initialize a starting vector, which can be any nonzero vector.

  2. At each iteration, form the shifted matrix A - sI, where s is a shift, usually chosen close to the desired eigenvalue.

  3. Solve the linear system (A - sI) y = x to obtain the new iterate y.

  4. Normalize the iterate, i.e. x = y / ‖y‖.

  5. Check whether the iteration has converged; if the stopping criterion is satisfied (e.g. the change between successive iterates is below a tolerance), stop, otherwise return to step 2.

Below, taking a particular shift as an example, we carry out the iteration.

First, choose an initial vector.

The iteration proceeds as follows:

Note that the iterate must be normalized at each step to ensure the correctness of the iteration results.
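
A minimal Python sketch of the shifted inverse power method following the steps above (the matrix, the shift s, and the tolerance are illustrative assumptions of mine):

import numpy as np

def shifted_inverse_power(A, s, x0, tol=1e-10, maxiter=100):
    """Finds the eigenvalue of A closest to the shift s and its eigenvector."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(maxiter):
        y = np.linalg.solve(A - s * np.eye(n), x)   # step 3: solve (A - sI) y = x
        x_new = y / np.linalg.norm(y)               # step 4: normalize
        # step 5: stop when the iterate settles (up to a possible sign flip)
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < tol:
            x = x_new
            break
        x = x_new
    lam = x @ A @ x                                  # Rayleigh quotient
    return lam, x

A = [[4.0, 1.0], [2.0, 3.0]]                             # eigenvalues 5 and 2
print(shifted_inverse_power(A, s=4.8, x0=[1.0, 1.0]))    # converges to the eigenvalue 5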


![[Chapter7-Eigenvalues and singular values#|#^j17wmv]]

Example of Full QR Factorization.


Full QR Factorization

Householder Transformation

Since the transformation matrix H in a Householder transformation is a symmetric orthogonal matrix, we can replace H⁻¹ with H itself. See the example below.

Householder transformation in QR factorization.


Use Householder transformation to apply QR factorization.

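A minimal sketch of QR factorization by Householder reflectors, using H = I - 2vvᵀ/(vᵀv); since each H is symmetric and orthogonal, Q is simply the accumulated product of the H's (the 3x2 test matrix is my own example):

import numpy as np

def householder_qr(A):
    """Full QR factorization A = QR using Householder reflectors."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        w = np.zeros_like(x)
        w[0] = -np.copysign(np.linalg.norm(x), x[0])  # reflect x onto +/- ||x|| e1
        v = w - x
        if np.linalg.norm(v) == 0:
            continue                                   # column already in the right form
        H = np.eye(m)
        H[k:, k:] -= 2.0 * np.outer(v, v) / (v @ v)    # H is symmetric orthogonal, H^{-1} = H
        R = H @ R
        Q = Q @ H
    return Q, R

A = [[1.0, -4.0], [2.0, 3.0], [2.0, 2.0]]
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))   # True True
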

QR Algorithm

The way the QR algorithm finds the eigenvalues of a matrix A is to locate a similar matrix whose eigenvalues are obvious. An example of the latter is a matrix in real Schur form.

Real Schur form: A matrix has real Schur form if it is upper triangular, except possibly for 2 × 2 blocks on the main diagonal. For example, a matrix of the form

$$\begin{bmatrix} * & * & * & * \\ * & * & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & * \end{bmatrix}$$

has real Schur form.


Theorems about QR algorithm

Let’s consider the matrix A:

To find the eigenvalues of A using the QR algorithm, we follow these steps:

  1. Start by initializing A_0 as the given matrix A.

  2. Compute the QR decomposition of A_k. Let Q_k and R_k be the resulting orthogonal and upper triangular matrices, respectively.

  3. Update A_{k+1} = R_k Q_k.

  4. Repeat steps 2 and 3 until convergence or a desired number of iterations. Let A_k be the matrix at the k-th iteration.

  5. The eigenvalues of A are the diagonal elements of the final (nearly) upper triangular matrix A_k.

Let’s go through the steps for our example:

Step 1: Set A_0 = A.

Step 2: Compute the QR decomposition of A_0: A_0 = Q_0 R_0.

Step 3: Update A_1 = R_0 Q_0.

Step 4: Repeat steps 2 and 3 until convergence or a desired number of iterations.

After a few iterations, we obtain the following matrices:

Step 5: The eigenvalues of A are the diagonal elements of the final upper triangular matrix A_k. In this case, the eigenvalues are:

These are the eigenvalues of the given matrix A, as computed by the QR algorithm.
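
A minimal sketch of the unshifted QR algorithm described above. It assumes real eigenvalues of distinct magnitude so that A_k tends to upper triangular form; the 2x2 matrix is an illustrative assumption of mine:

import numpy as np

def qr_algorithm(A, iters=50):
    """Unshifted QR algorithm: A_{k+1} = R_k Q_k, which is similar to A_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)   # step 2: QR decomposition of A_k
        Ak = R @ Q                # step 3: update A_{k+1} = R_k Q_k
    return np.diag(Ak)            # step 5: diagonal entries approximate the eigenvalues

A = [[4.0, 1.0], [2.0, 3.0]]      # true eigenvalues: 5 and 2
print(qr_algorithm(A))            # approximately [5. 2.]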


Interpolation

  • Lagrange interpolation, Newton interpolation
  • Using interpolation to approximate other functions
  • Differences and connections between basic polynomial interpolation and Hermite interpolation
  • Piecewise interpolation: natural cubic spline interpolation

example of Newton's Interpolation
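
Since the transcluded example is not shown here, a minimal sketch of Newton's divided differences and nested evaluation of the Newton form (the data points are my own illustrative choice):

import numpy as np

def newton_divided_differences(x, y):
    """Return the coefficients c[k] = f[x_0, ..., x_k] of Newton's form."""
    x = np.asarray(x, dtype=float)
    c = np.array(y, dtype=float)
    n = len(x)
    for k in range(1, n):
        c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:n - k])
    return c

def newton_eval(c, xdata, t):
    """Evaluate P(t) = c0 + c1 (t - x0) + c2 (t - x0)(t - x1) + ... in nested form."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - xdata[k]) + c[k]
    return p

xs, ys = [0.0, 2.0, 3.0], [1.0, 2.0, 4.0]
c = newton_divided_differences(xs, ys)       # [1.0, 0.5, 0.5]
print(newton_eval(c, xs, 1.0))               # P(1) = 1.0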

Transclude of Chapter4--Polynomial-Interpolation#chebyshev-interpolation

example of Hermite interpolation with Newton's differences



Transclude of Chapter4--Polynomial-Interpolation#example

Least Squares


Least squares and QR decomposition
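
A minimal sketch of solving the least squares problem min ‖Ax - b‖₂ via the (reduced) QR decomposition, i.e. back-substituting Rx = Qᵀb; the small line-fitting example is my own:

import numpy as np

def least_squares_qr(A, b):
    """Solve min ||Ax - b||_2 using the reduced QR decomposition of A."""
    Q, R = np.linalg.qr(A)               # A = QR, Q with orthonormal columns
    return np.linalg.solve(R, Q.T @ b)   # solve the triangular system R x = Q^T b

# Fit y = c0 + c1 * t to the points (1, 2), (2, 3), (3, 5)
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([2.0, 3.0, 5.0])
print(least_squares_qr(A, b))            # approximately [0.3333, 1.5]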

Differentiation

  • Extrapolation to higher accuracy: given an approximation, improve its accuracy
  • How to design a numerical differentiation formula that achieves a given accuracy (see the sketch after this list)
  • Analyze the error of a given numerical differentiation formula

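As a concrete sketch of these points (the function and step size are my own choices): the centered difference (f(x + h) - f(x - h)) / (2h) has error O(h²), and one Richardson extrapolation step, (4D(h/2) - D(h)) / 3, raises the accuracy to O(h⁴).

import math

def centered_diff(f, x, h):
    """Two-point centered difference approximation to f'(x); error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson extrapolation step: combine D(h) and D(h/2) into an O(h^4) formula."""
    return (4 * centered_diff(f, x, h / 2) - centered_diff(f, x, h)) / 3

f, x, h = math.exp, 0.0, 0.1               # exact derivative is 1
print(abs(centered_diff(f, x, h) - 1.0))   # ~1.7e-3
print(abs(richardson(f, x, h) - 1.0))      # ~2.1e-7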

Integration

  • Design an extrapolated numerical integration formula
  • Analyze a given numerical integration formula
  • Interval transformation for Gauss quadrature

Trapezoid Rule

Quadrature formula:

$$\int_a^b f(x)\,dx \approx \frac{b - a}{2}\bigl[f(a) + f(b)\bigr]$$

Error:

$$-\frac{(b - a)^3}{12} f''(\xi), \qquad \xi \in (a, b)$$

Degree of precision: 1

Open (two-point) trapezoid rule:

$$\int_a^b f(x)\,dx \approx \frac{3h}{2}\bigl[f(x_1) + f(x_2)\bigr], \qquad h = \frac{b - a}{3}$$

Error:

$$\frac{3h^3}{4} f''(\xi)$$

Composite Trapezoid Rule

Quadrature formula:

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\Bigl[f(a) + f(b) + 2\sum_{i=1}^{m-1} f(x_i)\Bigr], \qquad h = \frac{b - a}{m}$$

Error:

$$-\frac{(b - a)h^2}{12} f''(\xi)$$

Simpson's Rule

Quadrature formula:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\bigl[f(x_0) + 4f(x_1) + f(x_2)\bigr], \qquad h = \frac{b - a}{2}$$

Error:

$$-\frac{h^5}{90} f^{(4)}(\xi)$$

Degree of precision: 3

Three-point open Newton–Cotes rule:

Error:

$$\frac{14h^5}{45} f^{(4)}(\xi), \qquad h = \frac{b - a}{4}$$

Composite Simpson's Rule

Quadrature formula:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Bigl[f(x_0) + f(x_{2m}) + 4\sum_{i=1}^{m} f(x_{2i-1}) + 2\sum_{i=1}^{m-1} f(x_{2i})\Bigr], \qquad h = \frac{b - a}{2m}$$

Error:

$$-\frac{(b - a)h^4}{180} f^{(4)}(\xi)$$
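
A minimal sketch of the composite trapezoid and composite Simpson rules above (the test integral ∫₀^π sin x dx = 2 is my own illustrative choice):

import math

def composite_trapezoid(f, a, b, m):
    """Composite trapezoid rule with m subintervals; error O(h^2)."""
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m))
    return h * s

def composite_simpson(f, a, b, m):
    """Composite Simpson rule with 2m subintervals; error O(h^4)."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))   # odd nodes
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))             # interior even nodes
    return h * s / 3

print(composite_trapezoid(math.sin, 0.0, math.pi, 16))   # ≈ 1.9936
print(composite_simpson(math.sin, 0.0, math.pi, 8))      # ≈ 2.0000 (error ~2e-5)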

Degree of Precision


How to determine the degree of precision

Romberg Integration


The main idea of Romberg integration is to use Richardson extrapolation to increase the accuracy of the integration.



pseudocode of Romberg
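
Since the pseudocode itself is not reproduced here, a minimal Python sketch of Romberg integration, building the tableau R[j][k] from the composite trapezoid rule plus Richardson extrapolation (the test integrand is my own choice):

import math

def romberg(f, a, b, n):
    """Romberg integration: return the n-row triangular tableau R."""
    R = [[0.0] * (j + 1) for j in range(n)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for j in range(1, n):
        h /= 2
        # composite trapezoid on 2**j subintervals, reusing the previous row
        R[j][0] = 0.5 * R[j - 1][0] + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        for k in range(1, j + 1):
            # Richardson extrapolation of the two neighboring entries
            R[j][k] = (4 ** k * R[j][k - 1] - R[j - 1][k - 1]) / (4 ** k - 1)
    return R

R = romberg(math.sin, 0.0, math.pi, 5)
print(R[-1][-1])    # ≈ 2.0, the exact value of the integral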

Method of Undetermined Coefficients


Undetermined Coefficients Method

Gauss Quadrature

Gauss Integration

Are the Newton–Cotes formulas optimal for their degree of precision, or can more powerful formulas be developed? In particular, if the requirement that evaluation points be evenly spaced is relaxed, are there better methods?

We pick out the most famous one to discuss in this section. Gaussian Quadrature has degree of precision 2n - 1 when n points are used, double that of Newton–Cotes.



Integration error:

$$E = \frac{f^{(2n)}(\xi)}{(2n)!} \int_a^b \Bigl[\prod_{i=1}^{n}(x - x_i)\Bigr]^2 dx$$

There are mainly three ways to find the orthogonal polynomial whose roots are the Gauss nodes x_i:

  1. Undetermined coefficients: require the polynomial to be orthogonal to 1, x, …, x^{n-1} on [a, b] and solve for its coefficients
  2. Given the linearly independent functions 1, x, x², … on the interval [a, b], build orthogonal polynomials using Gram–Schmidt
  3. Use the Legendre polynomials

In practice, we usually use the Legendre polynomials to compute the numerical integration.


Gaussian Quadrature coefficients

approximate integrals on a general interval


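A minimal sketch of Gauss–Legendre quadrature on a general interval [a, b], using the change of variables x = ((b - a)t + (a + b)) / 2 to map the standard nodes from [-1, 1]; nodes and weights come from numpy's leggauss, and the test integral is my own choice:

import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b] (degree of precision 2n - 1)."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)       # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

# Example: integral of e^x over [1, 2] is e^2 - e ≈ 4.670774
print(gauss_legendre(np.exp, 1.0, 2.0, n=3))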