Definition:
A function \(f\) has a fixed point
\(\bar{x}\) if \(f(\bar{x})=\bar{x}\).
Such a situation is of interest, e.g., when dealing with equations that cannot be solved analytically. As illustrative examples, consider \(x-\cos x = 0\) and \(e^{2x} - 2x - 2 = 0\), which can be written as \(\cos x = x\) and \(\frac{1}{2}e^{2x} - 1 = x\), i.e. they are both of the type \(f(x) = x\). Sometimes it is not possible to directly rewrite a given equation as \(f(x) = x\) but merely as \(g(x)=h(x)\). However, if one of these auxiliary functions (\(h\), say) is analytically invertible, then \(h^{-1}(g(x))=x\) and one can proceed as described. From a graphical representation one can easily see that the first equation has one solution and the second equation has two solutions. How can these be determined numerically?
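As a first, purely numerical counterpart to this graphical observation, one can tabulate \(f(x)-x\) on a grid and count its sign changes. The following Python sketch does this for the two examples; the grid resolution and search intervals are illustrative choices, not taken from the text.

```python
import math

# Fixed-point forms of the two example equations:
#   x - cos x = 0        ->  f1(x) = cos x
#   e^(2x) - 2x - 2 = 0  ->  f2(x) = (1/2) e^(2x) - 1
f1 = lambda x: math.cos(x)
f2 = lambda x: 0.5 * math.exp(2 * x) - 1

def sign_changes(f, a, b, n=400):
    """Count sign changes of f(x) - x on an equidistant grid over [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    vals = [f(x) - x for x in xs]
    return sum(1 for v, w in zip(vals, vals[1:]) if v * w < 0)

print(sign_changes(f1, -3, 3))   # 1 -> one fixed point of cos x
print(sign_changes(f2, -3, 1))   # 2 -> two fixed points of (1/2)e^(2x) - 1
```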
The name "fixed point" refers to the fact that for values \(x\) other than \(\bar{x}\), taken from a certain interval containing \(\bar{x}\), \(f(x)\) will deviate from \(x\). (One needs to consider such an interval containing \(\bar{x}\) since \(f\) might have several fixed points \(\bar{x}_k\); compare the above example.) On the other hand, one might find a fixed point of \(f\) by trying several \(x\) values and comparing them with \(f(x)\), thereby noticing in which direction to proceed. Then, starting from a value \(x_0\) which is already close to \(\bar{x}\), can one expect \(x_1 = f(x_0)\) to lie even closer to \(\bar{x}\)?
This question is answered by the famous Banach fixed-point theorem, which is of a very general nature. For a real-valued function \(f\) that is contractive on \([a,b]\), that is, one which maps the interval \([a,b]\) into itself (i.e., for all \(x \in [a,b]\) also \(f(x) \in [a,b]\)) in such a way that any two points of \([a,b]\) are mapped closer together, meaning that for all \(x_i, x_j \in [a,b]\)
\begin{equation*} \label{eq:contract} |f(x_i)-f(x_j)| \leq q\,|x_i - x_j|\,,\quad \text{with}\ 0 \leq q < 1, \tag{3.1} \end{equation*}
this theorem shows (i) that \(f\) has exactly one fixed point \(\bar{x}\) in \([a,b]\) and (ii) that this fixed point is reached as the limiting value of the iterative scheme \(x_{n+1}=f(x_n)\), which converges for any starting point \(x_0 \in [a,b]\) (i.e., \(\bar{x}=\lim_{n\rightarrow\infty}x_n\) with \(x_0 \in [a,b]\) arbitrary). Moreover, for the maximum absolute error of any approximate value \(x_n\) it holds that
\begin{equation*} \label{eq:itererror} |x_n-\bar{x}| \leq \frac{q}{1-q}\, |x_n - x_{n-1}| \leq \frac{q^n}{1-q}\, |x_1 - x_0|\,. \tag{3.2} \end{equation*}
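In code, the iterative scheme together with the a posteriori part of Eq. (3.2) can be sketched as follows; the function name, tolerance, and iteration cap are illustrative choices.

```python
import math

def fixed_point_iteration(f, x0, q, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = f(x_n); stop once the a posteriori bound
    q/(1-q) * |x_n - x_{n-1}| from Eq. (3.2) drops below tol."""
    x_prev = x0
    for n in range(1, max_iter + 1):
        x = f(x_prev)
        if q / (1 - q) * abs(x - x_prev) < tol:
            return x, n
        x_prev = x
    raise RuntimeError("no convergence within max_iter iterations")

# Example: cos x = x on [0, 1], where |f'(x)| = |sin x| <= sin 1 < 0.85,
# so q = 0.85 is a valid (if generous) contraction constant.
print(fixed_point_iteration(math.cos, 0.5, q=0.85))  # (~0.7391, n)
```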
If the function \(f\) is continuously differentiable on \([a, b]\), one has according to the mean-value theorem that, for some suitable \(\xi\) between \(x_i\) and \(x_j\) (and hence in \([a,b]\)),
\begin{equation*} |f(x_i)-f(x_j)| = |f'(\xi)|\, |x_i - x_j|\,. \tag{3.3} \end{equation*}
Comparing this with Eq. (3.1), it follows that \(f\) is contractive (and, accordingly, has a fixed point in \([a,b]\) which can be found iteratively) if
\begin{equation*} |f'(x)| \leq q < 1\quad \text{for all}\ x \in [a,b]\,. \tag{3.4} \end{equation*}
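A quick, heuristic (not rigorous) way to check the condition of Eq. (3.4) numerically is to sample \(|f'(x)|\) on a grid over \([a,b]\). The helper name and grid size below are illustrative, and the derivative used is that of the running example \(f(x)=\frac{1}{2}e^{2x}-1\).

```python
import math

def contraction_constant(df, a, b, n=1000):
    """Estimate q = max |f'(x)| on [a, b] by sampling the derivative on a grid;
    a value below 1 suggests (but does not prove) that Eq. (3.4) holds."""
    return max(abs(df(a + (b - a) * i / n)) for i in range(n + 1))

# Derivative of f(x) = (1/2) e^(2x) - 1
df = lambda x: math.exp(2 * x)

print(contraction_constant(df, -1.0, -0.5))  # ~e^(-1) ~ 0.368 -> contractive
print(contraction_constant(df, 0.5, 1.0))    # ~e^2   ~ 7.389 -> not contractive
```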
Example:
Consider, as above, \(f(x)=\frac{1}{2}e^{2x}-1\) and \([a,b]=[-1,-\frac{1}{2}]\). Then,
\begin{equation*} f'(x)=e^{2x} \leq e^{-1} < 1\quad \text{for all}\ x \in [-1,\textstyle-\frac{1}{2}]\,. \tag{3.5} \end{equation*}
Therefore, the iterative scheme \(x_{n+1}=f(x_n)\) converges with \(q = e^{-1}\) (one checks that \(f\) also maps \([-1,-\frac{1}{2}]\) into itself: \(f\) is increasing with \(f(-1)\approx -0.93\) and \(f(-\frac{1}{2})\approx -0.82\)), and since \(|x_1 - x_0| \leq \frac{1}{2}\) (the length of the interval \([-1,-\frac{1}{2}]\)) one has from Eq. (3.2) that
\begin{equation*} |x_n-\bar{x}| \leq \frac{e^{-n}}{1-e^{-1}}\ \frac{1}{2}\,. \tag{3.6} \end{equation*}
This means that after seven steps of the iteration (\(n=7\)), the maximum absolute error is \(|x_n-\bar{x}| \leq \frac{1}{2}\frac{e^{-7}}{1-e^{-1}} \approx 0.0007\), i.e. at least the first three digits after the decimal point are significant. (Result: \(x_7 = -0.9207\).)
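The iteration can be reproduced with a few lines of Python. The text does not state the starting value; \(x_0=-1\) is assumed here for illustration (any \(x_0 \in [-1,-\frac{1}{2}]\) works).

```python
import math

f = lambda x: 0.5 * math.exp(2 * x) - 1
q = math.exp(-1)

x = -1.0  # starting value x_0 (assumed; not specified above)
for n in range(1, 8):
    x_prev, x = x, f(x)
    bound = q / (1 - q) * abs(x - x_prev)   # a posteriori bound, Eq. (3.2)
    print(f"n={n}:  x_n={x:+.6f}   error bound <= {bound:.1e}")

# A priori bound of Eq. (3.6) for n = 7:
print(0.5 * math.exp(-7) / (1 - math.exp(-1)))  # ~7e-4
```

The iterates settle near \(-0.9207\), in agreement with the result quoted above, and the last line checks the bound of Eq. (3.6).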
On the other hand, for \([a,b]=[\frac{1}{2},1]\) (where the other fixed point of \(f\) is expected)
one has that \(f'(x)=e^{2x} \geq e \gt 1\) for all \(x \in [\frac{1}{2},1]\). Therefore, the second
solution cannot be found by this iterative scheme. However, there are two other possibilities to determine this missing
solution:
Instead of solving the given problem \(f(x)=x\), the function \(f\) can be inverted to give \(x = f^{-1}(y)\), and the corresponding problem \(f^{-1}(y) = y\) can be solved (which is helpful since obviously \(\bar{y}=\bar{x}\)). This works because \(|(f^{-1})'(y)| = 1/|f'(x)| < 1\) wherever \(|f'(x)| > 1\).
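A minimal sketch of this inversion approach for the running example: solving \(y=\frac{1}{2}e^{2x}-1\) for \(x\) gives \(f^{-1}(y)=\frac{1}{2}\ln\bigl(2(y+1)\bigr)\), and on \([\frac{1}{2},1]\) one has \(|(f^{-1})'(y)|=\frac{1}{2(y+1)}\leq\frac{1}{3} < 1\) with \(f^{-1}\) mapping this interval into itself, so the iteration \(y_{n+1}=f^{-1}(y_n)\) converges there. The starting value and iteration count below are illustrative choices.

```python
import math

# Inverse of f(x) = (1/2) e^(2x) - 1, defined for y > -1
f_inv = lambda y: 0.5 * math.log(2 * (y + 1))

y = 1.0  # starting value in [1/2, 1] (illustrative)
for _ in range(30):
    y = f_inv(y)
print(y)  # ~0.5731: the second fixed point, inaccessible to the direct iteration
```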
Instead of solving the problem \(f(x)=x\), a new function \(g\) is defined, \(g(x) = f(x) - x\), and the zeros of \(g\) are determined. The corresponding methods will be the subject of the following sections.