Fixed point: the real number $r$ is a fixed point of the function $g$ if $g(r) = r$.
```python
def fixed_point_iteration(f, x0, tol=1e-6, maxiter=100):
    """Iterate x = f(x) from x0 until successive iterates differ by less than tol."""
    x = x0
    for i in range(maxiter):
        x1 = f(x)
        if abs(x1 - x) < tol:   # converged: successive iterates are close
            return x1
        x = x1
    return x                    # maxiter reached without meeting tol
```
The FPI function $g$ is not unique. For the equation $x^3 + x - 1 = 0$, several different FPI functions can be derived, and they lead to different convergence speeds.
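To make this concrete, here is a minimal sketch (reusing `fixed_point_iteration` from above) with three possible rearrangements of $x^3 + x - 1 = 0$ into the form $x = g(x)$; these particular choices of $g$ are illustrative assumptions, not an exhaustive list.

```python
# Three rearrangements of x^3 + x - 1 = 0 into fixed-point form x = g(x).
g1 = lambda x: 1 - x**3                          # |g1'(r)| ≈ 1.40 > 1: fails to converge
g2 = lambda x: (1 - x) ** (1 / 3)                # |g2'(r)| ≈ 0.72 < 1: converges linearly
g3 = lambda x: (1 + 2 * x**3) / (1 + 3 * x**2)   # Newton-style g: g3'(r) = 0, quadratic

for g in (g2, g3):
    print(fixed_point_iteration(g, 0.5))         # both ≈ 0.6823278
```

Iterating g1 bounces between values near 0 and 1 without settling, while g3 reaches the tolerance in only a handful of steps.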
Linear convergence: let $\epsilon_i$ denote the error at step $i$ of an iterative method. If

$$\lim_{i \to \infty} \frac{\epsilon_{i+1}}{\epsilon_i} = S < 1,$$

the method is said to obey linear convergence with rate $S$.
Assume that $g$ is continuously differentiable, that $g(r) = r$, and that $S = |g'(r)| < 1$. Then Fixed-Point Iteration converges linearly with rate $S$ to the fixed point $r$ for initial guesses sufficiently close to $r$.
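A quick numerical check of this theorem (a sketch, reusing $g(x) = (1 - x)^{1/3}$ from the example above): the successive error ratios should approach $S = |g'(r)| = \frac{1}{3r^2} \approx 0.716$.

```python
r = 0.6823278038280193          # the fixed point (root of x^3 + x - 1 = 0)
g = lambda x: (1 - x) ** (1 / 3)

x, e_prev = 0.5, abs(0.5 - r)
for i in range(8):
    x = g(x)
    e = abs(x - r)              # error at this step
    print(e / e_prev)           # ratio tends to S = |g'(r)| ≈ 0.716
    e_prev = e
```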
Convergence Types
Locally convergent: an iterative method is called locally convergent to $r$ if it converges to $r$ for initial guesses sufficiently close to $r$.
Globally convergent: the method converges to $r$ for any initial guess $x_0$ in the interval $I$.
The Bisection Method is guaranteed to converge linearly. Fixed-Point Iteration is only locally convergent, and when it converges it is linearly convergent. Both methods require one function evaluation per step. Bisection cuts the uncertainty in half at each step, compared with a factor of approximately $S = |g'(r)|$ for FPI. Therefore, Fixed-Point Iteration may be faster or slower than bisection, depending on whether $S$ is smaller or larger than $1/2$. Newton's Method is a particularly refined version of FPI, in which $g$ is designed so that $S$ is zero.
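For comparison, a minimal bisection sketch (the stopping rule and the test function $x^3 + x - 1$ are assumptions for illustration); the bracket $[a, b]$ halves at each step, which is exactly the $S = 1/2$ rate mentioned above.

```python
def bisection(f, a, b, tol=1e-6):
    # Assumes f(a) and f(b) have opposite signs, so [a, b] brackets a root.
    while (b - a) / 2 > tol:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:   # root lies in [a, c]
            b = c
        else:                  # root lies in [c, b]
            a = c
    return (a + b) / 2

print(bisection(lambda x: x**3 + x - 1, 0, 1))  # ≈ 0.6823278
```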
Cases when FPI works
1. The function $f(x)$ is continuous on $[a, b]$.
2. If, in addition, $a \leqslant f(x) \leqslant b$ for all $x \in [a, b]$, then a fixed point exists in $[a, b]$.
3. Based on 1, if there exists $L \in (0, 1)$ such that $|f(x_1) - f(x_2)| \leqslant L\,|x_1 - x_2|$ for all $x_1, x_2 \in [a, b]$, then the fixed point is unique and FPI converges to it for any initial guess in $[a, b]$.
If $f(x) \in C^2(\mathbb{R})$ is a monotonically increasing convex function and $f(r) = 0$, then $r$ is the only root, and Newton's method converges to $r$ for any initial guess $x_0 \in \mathbb{R}$.
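A sketch of this claim; the test function $f(x) = x + e^x$ (increasing and convex on all of $\mathbb{R}$, with its single root at $r \approx -0.5671$) and the helper `newton` below are assumptions chosen for illustration.

```python
import math

def newton(f, fprime, x0, tol=1e-12, maxiter=100):
    x = x0
    for _ in range(maxiter):
        x_new = x - f(x) / fprime(x)   # Newton step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x + math.exp(x)          # increasing and convex everywhere
fp = lambda x: 1 + math.exp(x)
for x0 in (-10.0, 0.0, 10.0):          # widely separated initial guesses
    print(newton(f, fp, x0))           # all converge to r ≈ -0.5671433
```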
Convergence Speed
Based on Rate of convergence - Wikipedia, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence $(x_n)$ that converges to $x^*$ is said to have order of convergence $q \geq 1$ and rate of convergence $\mu$ if:
$$\lim_{n \to \infty} \frac{|x_{n+1} - x^*|}{|x_n - x^*|^q} = \mu$$
The rate of convergence $\mu$ is also called the asymptotic error constant.
$q = 1$ is called linear convergence if $\mu \in (0, 1)$, and the sequence is said to converge Q-linearly to $x^*$.
$q = 2$ is called quadratic convergence.
$q = 3$ is called cubic convergence.
Etc.
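The order $q$ can also be estimated empirically from a convergent sequence: for errors $e_n$, $q \approx \log(e_{n+1}/e_n) / \log(e_n/e_{n-1})$. A sketch below, using Newton iterates for $x^2 - 2 = 0$ as an assumed example sequence:

```python
import math

r = math.sqrt(2)
xs = [1.0]
for _ in range(4):                     # Newton iterates for x^2 - 2 = 0
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

e = [abs(x - r) for x in xs]           # errors e_n = |x_n - x*|
for n in range(1, len(e) - 1):
    # log-ratio estimate of the order q
    print(math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1]))  # -> q ≈ 2
```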
Empirically, if the function $g \in C^p(a, b)$, the iteration $x_{i+1} = g(x_i)$ converges to $x^*$, and its $p$-th derivative satisfies $g^{(p)}(x^*) \neq 0$ while $g^{(q)}(x^*) = 0$ for $q = 1, 2, \cdots, p-1$, then the iteration $x_{i+1} = g(x_i)$ is said to have order of convergence $p$, and it is locally convergent. For FPI, we have:

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i^p} = \frac{|g^{(p)}(x^*)|}{p!}$$
Let $f \in C^2[a, b]$ and $f(r) = 0$. If $f'(r) \neq 0$, then Newton's Method is locally and quadratically convergent to $r$. The error $e_i$ at step $i$ satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i^2} = \left| \frac{f''(r)}{2 f'(r)} \right|$$
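A numerical check (a sketch; $f(x) = x^2 - 2$ is an assumed example): here $r = \sqrt{2}$, so the limit is $\left| \frac{f''(r)}{2 f'(r)} \right| = \frac{2}{4\sqrt{2}} \approx 0.3536$.

```python
import math

r = math.sqrt(2)
x, e_prev = 1.0, abs(1.0 - r)
for _ in range(4):                  # more steps would hit roundoff
    x = x - (x * x - 2) / (2 * x)   # Newton step for f(x) = x^2 - 2
    e = abs(x - r)
    print(e / e_prev**2)            # 0.500, 0.333, 0.353, ... -> ≈ 0.3536
    e_prev = e
```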
If $f \in C^{m+1}[a, b]$, which contains a root $r$ of multiplicity $m > 1$, then Newton's Method converges locally and linearly to $r$, and its error $e_i$ satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i} = \frac{m-1}{m}$$
If $f \in C^{m+1}[a, b]$, which contains a root $r$ of multiplicity $m > 1$, and $f^{(m+1)}(r) \neq 0$, then Modified Newton's Method

$$x_{i+1} = x_i - \frac{m\, f(x_i)}{f'(x_i)}$$

converges locally and quadratically to $r$.
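A sketch contrasting the two behaviors on the assumed example $f(x) = (x - 1)^2$, which has a double root ($m = 2$) at $r = 1$:

```python
f = lambda x: (x - 1) ** 2
fp = lambda x: 2 * (x - 1)

x = 2.0
for _ in range(5):
    x = x - f(x) / fp(x)        # plain Newton: error only halves each step
print(x)                        # 1.03125, still far from the root

x = 2.0
x = x - 2 * f(x) / fp(x)        # Modified Newton with m = 2
print(x)                        # 1.0 exactly (f is a perfect square here)
```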
The way the QR algorithm finds eigenvalues of a matrix A is to locate a similar matrix whose eigenvalues are obvious. An example of the latter is real Schur form.
Real Schur form: a matrix $T$ has real Schur form if it is upper triangular, except possibly for $2 \times 2$ blocks on the main diagonal. For example, a matrix of the form

$$\begin{bmatrix} x & x & x & x \\ x & x & x & x \\ 0 & 0 & x & x \\ 0 & 0 & 0 & x \end{bmatrix}$$

has real Schur form (the leading $2 \times 2$ block corresponds to a complex conjugate pair of eigenvalues).
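A short sketch (assuming SciPy is available; the example matrix is an assumption chosen to have a complex conjugate eigenvalue pair) that computes a real Schur form:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 3.]])   # eigenvalues: ±i and 3

T, Z = schur(A, output='real')  # A = Z T Z^T with T in real Schur form
print(T)                        # a 2x2 diagonal block holds the pair ±i
```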
Theorems about QR algorithm
Let’s consider the matrix A:
$$A = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{bmatrix}$$
To find the eigenvalues of A using the QR algorithm, we follow these steps:
1. Start by initializing $A_0$ as the given matrix $A$.
2. Compute the QR decomposition of $A_0$. Let $Q_0$ and $R_0$ be the resulting orthogonal and upper triangular matrices, respectively.
3. Update $A_1 = R_0 \cdot Q_0$.
4. Repeat steps 2 and 3 until convergence or a desired number of iterations. Let $A_k$ be the matrix at the $k$-th iteration.
5. The eigenvalues of $A$ are the diagonal elements of the final (nearly) upper triangular matrix $A_k$.
Let’s go through the steps for our example:
Step 1:
$$A_0 = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{bmatrix}$$
Step 2:
Compute the QR decomposition of A0:
$$A_0 = Q_0 \cdot R_0$$
Step 3:
Update $A_1 = R_0 \cdot Q_0$.
Step 4:
Repeat steps 2 and 3 until convergence or a desired number of iterations.
After a few iterations, $A_k$ becomes nearly diagonal (since $A$ is symmetric), with diagonal entries approaching the eigenvalues of $A$: $4$, $2$, and $1$.
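A compact sketch of the whole iteration (using NumPy's `qr`; the iteration count is an arbitrary choice):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

Ak = A.copy()
for k in range(50):
    Q, R = np.linalg.qr(Ak)   # step 2: QR decomposition
    Ak = R @ Q                # step 3: A_{k+1} = R Q = Q^T A_k Q (similar to A_k)
print(np.diag(Ak))            # ≈ [4., 2., 1.], the eigenvalues of A
```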
Are the Newton–Cotes formulas optimal for their degree of precision, or can more powerful formulas be developed? In particular, if the requirement that evaluation points be evenly spaced is relaxed, are there better methods?
We pick out the most famous such method to discuss in this section: Gaussian Quadrature, which has degree of precision $2n + 1$ when $n + 1$ points are used, roughly double that of Newton–Cotes.
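A small check of that claim (a sketch using NumPy's Gauss-Legendre helper; the integrand is an assumed example): with $n + 1 = 2$ points the rule has degree of precision $2n + 1 = 3$, so it integrates a cubic exactly.

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(2)  # 2-point Gauss-Legendre
f = lambda x: x**3 + x**2 + 1                        # a degree-3 polynomial

approx = np.sum(weights * f(nodes))
exact = 2 / 3 + 2              # integral of x^3 + x^2 + 1 over [-1, 1]
print(approx, exact)           # both 2.6666..., agreeing to machine precision
```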