Continuing our journey in Chapter 1 - Fundamentals; these notes are based on the textbook¹.

The Bisection Method

Given an initial interval $[a, b]$ such that $f(a) f(b) < 0$:

def bisection(f, a, b, tol=1e-6, maxiter=100):
    """Find a root of f in [a, b], assuming f(a) * f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for i in range(maxiter):
        c = (a + b) / 2           # midpoint of the current bracket
        if f(c) == 0 or abs(b - a) < tol:
            return c
        if f(a) * f(c) < 0:       # root lies in [a, c]
            b = c
        else:                     # root lies in [c, b]
            a = c
    return c

The approximate error after $n$ steps is $e_n = e_0 / 2^n$ ($e_0 = (b - a)/2$ is the initial error).


Fixed-point Iteration

  • fixed point: the real number $r$ is a fixed point of the function $g$ if $g(r) = r$
def fixed_point_iteration(g, x0, tol=1e-6, maxiter=100):
    """Iterate x_{i+1} = g(x_i) until successive iterates agree within tol."""
    x = x0
    for i in range(maxiter):
        x1 = g(x)
        if abs(x1 - x) < tol:  # stop when the update is small enough
            return x1
        x = x1
    return x

The FPI function $g$ is not unique.

For an equation $f(x) = 0$, there exist different FPI functions $g$ (different rearrangements into the form $x = g(x)$), each with a different convergence speed, as the sketch below illustrates.
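A minimal sketch using the textbook's example equation $x^3 + x - 1 = 0$ (the choice of the two rearrangements and the iteration count here are my own): both functions below are fixed-point forms of the same equation, but they approach the root $r \approx 0.6823$ at very different speeds.

# Two rearrangements of x**3 + x - 1 = 0 into the form x = g(x)
g_slow = lambda x: (1 - x) ** (1 / 3)               # |g'(r)| ≈ 0.72: linear
g_fast = lambda x: (1 + 2 * x**3) / (1 + 3 * x**2)  # g'(r) = 0: quadratic

for name, g in [("g_slow", g_slow), ("g_fast", g_fast)]:
    x = 0.5                        # same initial guess for both
    for i in range(20):
        x = g(x)
    print(name, x)                 # both approach r ≈ 0.6823278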
  • Linear convergence: Let $e_i$ denote the error at step $i$ of an iterative method. If

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i} = S < 1,$$

the method is said to obey linear convergence with rate $S$.

Assume that $g$ is continuously differentiable, that $g(r) = r$, and that $S = |g'(r)| < 1$. Then Fixed-Point Iteration converges linearly with rate $S$ to the fixed point $r$ for initial guesses sufficiently close to $r$.

Convergence Types

  • Locally convergent: An iterative method is called locally convergent to $r$ if the method converges to $r$ for initial guesses sufficiently close to $r$.

  • Globally convergent: The method converges to $r$ for any initial guess $x_0$.

The Bisection Method is guaranteed to converge linearly. Fixed-Point Iteration is only locally convergent, and when it converges it is linearly convergent. Both methods require one function evaluation per step. Bisection cuts uncertainty by $1/2$ at each step, compared with approximately $S = |g'(r)|$ for FPI. Therefore, Fixed-Point Iteration may be faster or slower than bisection, depending on whether $S$ is smaller or larger than $1/2$. Newton's Method is a particularly refined version of FPI, in which $g'(r)$ is designed to be zero.

Cases when FPI works

  1. If the function $g$ is continuous on $[a, b]$:
    1. If $g([a, b]) \subseteq [a, b]$, then there exists a fixed point in $[a, b]$.
    2. Based on 1, if in addition $|g'(x)| \le k < 1$ for all $x \in [a, b]$:
      • there exists only one (globally unique) fixed point in $[a, b]$;
      • the error satisfies $|x_i - r| \le k^i \, |x_0 - r|$.

Local Convergence Theorem: Let $r$ be the fixed point. If $g'$ is continuous in a neighborhood of $r$ and $|g'(r)| < 1$, then FPI is locally convergent to $r$.


Limits of Accuracy

  • Forward and backward error: Assume that $f$ is a function and that $r$ is a root, meaning that it satisfies $f(r) = 0$. Assume that $x_a$ is an approximation to $r$. For the root-finding problem, the backward error of the approximation $x_a$ is $|f(x_a)|$ and the forward error is $|r - x_a|$.
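A minimal sketch of the two error types (the function $f(x) = x^2 - 2$ and the approximation value are my own choices):

import math

f = lambda x: x**2 - 2          # root is r = sqrt(2)
r = math.sqrt(2)
x_a = 1.4142                    # an approximation to the root

backward_error = abs(f(x_a))    # how far f(x_a) is from zero
forward_error = abs(r - x_a)    # how far x_a is from the true root
print(backward_error, forward_error)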

Sensitivity of Root-finding

A problem is called sensitive if small errors in the input, in this case the equation to be solved, lead to large errors in the output, or solution.

Assume that the problem is to find a root $r$ of $f(x) = 0$, but that a small change $\epsilon g(x)$ is made to the input, where $\epsilon$ is small. Let $\Delta r$ be the corresponding change in the root, so that $f(r + \Delta r) + \epsilon g(r + \Delta r) = 0$. Expanding and discarding higher-order terms gives the Sensitivity Formula

$$\Delta r \approx -\frac{\epsilon \, g(r)}{f'(r)}, \qquad f'(r) \neq 0.$$


Example 1.9
  • Error magnification factor: relative forward error / relative backward error; for the root-finding problem above this is $\left| \dfrac{\Delta r / r}{\epsilon} \right| = \dfrac{|g(r)|}{|r \, f'(r)|}$.

Example of the Wilkinson polynomial
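A rough sketch (the perturbation size here is my own arbitrary choice) of why the Wilkinson polynomial $W(x) = (x-1)(x-2)\cdots(x-20)$ is a classic ill-conditioned root-finding problem: a tiny relative change to one coefficient moves some roots far more than the perturbation itself.

import numpy as np

coeffs = np.poly(np.arange(1, 21))    # coefficients of (x-1)(x-2)...(x-20)
perturbed = coeffs.copy()
perturbed[1] *= 1 + 1e-8              # tiny relative change to the x**19 term

shift = np.sort(np.roots(perturbed).real) - np.arange(1, 21)
print(np.max(np.abs(shift)))          # some roots shift by order 1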

The condition number of a problem is defined to be the maximum error magnification over all input changes, or at least all changes of a prescribed type. A problem with high condition number is called ill-conditioned, and a problem with a condition number near 1 is called well-conditioned.


Newton’s Method


$$\begin{aligned} x_0 &= \text{initial guess} \\ x_{i+1} &= x_i - \frac{f(x_i)}{f'(x_i)} \quad \text{for } i = 0, 1, 2, \ldots \end{aligned}$$
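A minimal Python sketch of the iteration above, in the same style as the earlier routines (the stopping rule and the derivative-zero guard are my own additions):

def newton(f, fprime, x0, tol=1e-6, maxiter=100):
    """Newton's Method: x_{i+1} = x_i - f(x_i) / f'(x_i)."""
    x = x0
    for i in range(maxiter):
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) = 0; Newton step undefined")
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x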

If $f$ is a monotonically increasing convex function with a root $r$, then $r$ is the only root, and Newton's method converges to $r$ for any initial guess $x_0$.

Convergence Speed

Based on the Rate of convergence - Wikipedia, the order of convergence² and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence $(x_i)$ that converges to $L$ is said to have order of convergence $q \ge 1$ and rate of convergence $\mu$ if:

$$\lim_{i \to \infty} \frac{|x_{i+1} - L|}{|x_i - L|^q} = \mu.$$

The rate of convergence is also called the asymptotic error constant.

  • $q = 1$ (with $\mu < 1$) is called linear convergence, and the sequence is said to converge Q-linearly to $L$.
  • $q = 2$ is called quadratic convergence.
  • $q = 3$ is called cubic convergence.
  • Etc.
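As a quick check of this definition, a sketch (the test equation $x^2 - 2 = 0$ and the use of Newton iterates are my own choices) that estimates the ratio $|e_{i+1}| / |e_i|^2$, which should approach $\mu = 1/(2\sqrt{2}) \approx 0.354$:

import math

xs = [2.0]                               # Newton iterates for x**2 - 2 = 0
for i in range(6):
    x = xs[-1]
    xs.append(x - (x**2 - 2) / (2 * x))

errs = [abs(x - math.sqrt(2)) for x in xs]
for e0, e1 in zip(errs, errs[1:]):
    if e0 > 0 and e1 > 0:
        print(e1 / e0**2)                # tends to mu = |f''(r) / (2 f'(r))|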

Empirically, if FPI with function $g$ converges to $r$, and the derivatives satisfy $g'(r) = g''(r) = \cdots = g^{(p-1)}(r) = 0$ while $g^{(p)}(r) \neq 0$, then the iteration is said to have order-$p$ convergence, and it is locally convergent. For FPI, we have:

$$\lim_{i \to \infty} \frac{|e_{i+1}|}{|e_i|^p} = \frac{|g^{(p)}(r)|}{p!}.$$

Let $f$ be twice continuously differentiable and $f(r) = 0$. If $f'(r) \neq 0$, then Newton's Method is locally and quadratically convergent to $r$. The error $e_i$ at step $i$ satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i^2} = M, \qquad M = \left| \frac{f''(r)}{2 f'(r)} \right|.$$

If $f$ is $(m+1)$-times continuously differentiable on $[a, b]$, which contains a root $r$ of multiplicity $m > 1$, then Newton's Method's error satisfies

$$\lim_{i \to \infty} \frac{e_{i+1}}{e_i} = S = \frac{m-1}{m},$$

i.e., it converges locally and linearly to $r$.

If $f$ is $(m+1)$-times continuously differentiable on $[a, b]$, which contains a root $r$ of known multiplicity $m > 1$, then Modified Newton's Method

$$x_{i+1} = x_i - \frac{m \, f(x_i)}{f'(x_i)}$$

converges locally and quadratically to $r$.
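A small sketch contrasting the two results (the test function $f(x) = (x - 1)^2$, with root $r = 1$ of multiplicity $m = 2$, is my own choice): plain Newton roughly halves the error each step ($S = (m-1)/m = 1/2$), while Modified Newton with the factor $m = 2$ lands on the root at once.

f = lambda x: (x - 1) ** 2
fp = lambda x: 2 * (x - 1)

x_plain, x_mod = 2.0, 2.0
for i in range(5):
    x_plain -= f(x_plain) / fp(x_plain)       # plain Newton: error halves
    if fp(x_mod) != 0:                        # modified Newton with m = 2
        x_mod -= 2 * f(x_mod) / fp(x_mod)     # hits r = 1 in one step here
    print(abs(x_plain - 1), abs(x_mod - 1))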


Examples of error approximation

When Newton's Method Fails

  1. The initial guess is not in the local convergence domain.
  2. The derivative $f'(x_i)$ equals 0 at some iterate, so the Newton step is undefined.
  3. The function involves a root (radical) that receives a negative argument at some iterate, taking the iteration outside the domain of $f$.

An example where Newton's method fails

Find the root of $f(x) = 0$ with a given initial guess $x_0$.
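For a concrete illustration (an assumed example, not necessarily the one intended here): applying Newton's Method to $f(x) = 4x^4 - 6x^2 - 11/4$ from $x_0 = 1/2$ cycles forever between $1/2$ and $-1/2$.

f = lambda x: 4 * x**4 - 6 * x**2 - 11 / 4
fp = lambda x: 16 * x**3 - 12 * x

x = 0.5
for i in range(6):
    x = x - f(x) / fp(x)
    print(x)             # alternates -0.5, 0.5, -0.5, ...: no convergence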

Methods without Derivatives

Secant Method and Variants

Secant method:

$$x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}$$

(two initial guesses $x_0$ and $x_1$ are required).

The approximate error relationship is

$$e_{i+1} \approx \left| \frac{f''(r)}{2 f'(r)} \right| e_i \, e_{i-1},$$

which leads to superlinear convergence of order $(1 + \sqrt{5})/2 \approx 1.62$.
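A minimal Python sketch of the secant iteration, in the same style as the earlier routines (the flat-secant guard is my own addition):

def secant(f, x0, x1, tol=1e-6, maxiter=100):
    """Secant Method: replaces f'(x_i) with a finite-difference slope."""
    for i in range(maxiter):
        denom = f(x1) - f(x0)
        if denom == 0:
            raise ZeroDivisionError("flat secant; step undefined")
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1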

Three Generalizations of the Secant Method

False Position

Given interval [a, b] such that f(a) f(b) < 0
for i = 1, 2, 3, ...
    c = (b f(a) - a f(b)) / (f(a) - f(b))
    if f(c) = 0, stop
    if f(a) f(c) < 0
        b = c
    else
        a = c
    end
end
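A runnable version of the pseudocode above (the stopping rule is my own choice, mirroring the bisection routine):

def false_position(f, a, b, tol=1e-6, maxiter=100):
    """Like bisection, but cuts the bracket at the secant line's root."""
    for i in range(maxiter):
        c = (b * f(a) - a * f(b)) / (f(a) - f(b))
        if f(c) == 0 or abs(f(c)) < tol:
            return c
        if f(a) * f(c) < 0:   # root lies in [a, c]
            b = c
        else:                 # root lies in [c, b]
            a = c
    return c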

Newton's Method for Solving Nonlinear Systems

For systems of equations like

$$f_1(x_1, \ldots, x_n) = 0, \quad \ldots, \quad f_n(x_1, \ldots, x_n) = 0,$$

let's define $F(x) = (f_1(x), \ldots, f_n(x))^T$ and the Jacobian matrix:

$$DF(x) = \begin{pmatrix} \partial f_1 / \partial x_1 & \cdots & \partial f_1 / \partial x_n \\ \vdots & \ddots & \vdots \\ \partial f_n / \partial x_1 & \cdots & \partial f_n / \partial x_n \end{pmatrix}$$

Then Newton's method becomes:

$$x_{k+1} = x_k - DF(x_k)^{-1} F(x_k),$$

implemented in practice by solving the linear system $DF(x_k) \, s = -F(x_k)$ and setting $x_{k+1} = x_k + s$.

Example of Newton's Method for solving nonlinear systems

  • Example: solve a system of equations.

Jacobian matrix: formed from the partial derivatives as above.

Newton iteration: the first iteration step is computed as $x_1 = x_0 - DF(x_0)^{-1} F(x_0)$.
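A worked sketch with an assumed $2 \times 2$ system (my own example, not necessarily the one from the original notes): $F(x, y) = (x^2 + y^2 - 1, \; x - y) = 0$, whose solution is $x = y = 1/\sqrt{2}$.

import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 1, x - y])

def DF(v):                       # Jacobian of F
    x, y = v
    return np.array([[2 * x, 2 * y],
                     [1.0, -1.0]])

v = np.array([1.0, 0.5])         # initial guess
for i in range(10):
    s = np.linalg.solve(DF(v), -F(v))   # solve DF(v) s = -F(v)
    v = v + s
print(v)                         # approx [0.70710678, 0.70710678]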

Footnotes

  1. Sauer, Timothy. Numerical Analysis. Addison-Wesley Publishing Company, 2011.

  2. Wikipedia Contributors (2023) Rate of convergence. In: Wikipedia. https://en.wikipedia.org/wiki/Rate_of_convergence. Accessed 13 Mar 2023