The problem is to use the fixed-point iteration method to find an approximation to $\sqrt{3}$. The equation from the book is $f(x) = \dfrac{1}{2}\left(x + \dfrac{3}{x}\right)$. It makes sense to me; however, I realize there are other equations that can be used to approximate $\sqrt{3}$ as well, for example $f(x) = 2x - \sqrt{3}$.

What I don't understand is: on what basis did they come up with that equation? Is there a general approach to finding a function that approximates a constant? Thank you.

Your suggestion to iterate the function $f$ defined by $f(x)=2x-\sqrt3$ to compute $\sqrt3$ is not used in practice, for at least two reasons. First, it presupposes one is already able to compute $\sqrt3$, in which case the approximation procedure is useless. Second, $\sqrt3$ is not an attracting fixed point of $f$. This means that for every $x_0\ne\sqrt3$, the sequence $(x_n)$ defined recursively by $x_{n+1}=f(x_n)$ will not converge to $\sqrt3$; in the present case, $x_n\to+\infty$ or $x_n\to-\infty$ according to whether $x_0>\sqrt3$ or $x_0<\sqrt3$.

By contrast, using $f(x)=\frac12\left(x+\frac3x\right)$ and starting, for example, from a positive rational $x_0$ produces a sequence $(x_n)$ of rational numbers whose computation does not require knowing the value of $\sqrt3$ and which converges to $\sqrt3$. Moreover, $(x_n)$ converges to $\sqrt3$ extremely fast: after a while, one step of the algorithm replaces the error by roughly its square, so the rate of convergence is quadratic, in the sense that $x_{n+1}-\sqrt3$ is of the order of $(x_n-\sqrt3)^2$.

If you compute the first 20 terms for $x_0=1$, for example, you will observe this spectacular speed of convergence, which roughly says that each new iteration **doubles** the number of significant digits of the approximant.
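A quick way to see this digit-doubling is a minimal sketch with Python's standard `decimal` module (the function name is my own):

```python
from decimal import Decimal, getcontext

# Work with 100 significant digits so the doubling is visible.
getcontext().prec = 100

def babylonian_sqrt3(n_terms, x0=Decimal(1)):
    """First n_terms of x_{n+1} = (x_n + 3/x_n)/2, starting from x0."""
    xs = [x0]
    for _ in range(n_terms - 1):
        x = xs[-1]
        xs.append((x + Decimal(3) / x) / 2)
    return xs

terms = babylonian_sqrt3(20)
root3 = Decimal(3).sqrt()
for n, x in enumerate(terms[:8]):
    print(n, abs(x - root3))   # the error is roughly squared each step
```

The printed errors shrink from about $0.73$ at $n=0$ to below $10^{-70}$ by $n=7$, in line with the quadratic convergence described above.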

The reason the iteration $x\leftarrow \tfrac{1}{2}(x+3/x)$ converges so rapidly to $\sqrt{3}$ is that it is derived from Newton's method. Newton's method says that to find a root of a function $f(x)$, we pick a starting point and repeat the iteration

$$x \leftarrow x - \frac{f(x)}{f'(x)}$$

until we have achieved our desired accuracy. As long as certain conditions on $f$ are satisfied (its derivative doesn't vanish near the root, for example) and our initial guess is close enough to the root, this converges to the correct answer; moreover, it exhibits quadratic convergence, which means that you approximately double the number of correct decimal places after each step of the iteration.

To approximate the square root of a number $c$, we need a function whose root is the square root of $c$. A simple choice is

$$f(x) = x^2 - c$$

The Newton iteration which finds the root of this equation is

$$x \leftarrow x - \frac{x^2 - c}{2x}$$

which you can rearrange to give

$$x \leftarrow \frac{1}{2} \left( x + \frac{c}{x}\right)$$

which, if you take $c=3$, is exactly the iteration you asked about in your question.
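As a sketch in plain Python (the function name is my own), the general recipe specializes to square roots like this:

```python
def newton_sqrt(c, x0, steps=8):
    """Newton's method applied to f(x) = x**2 - c,
    i.e. the iteration x <- x - (x**2 - c)/(2*x) = (x + c/x)/2."""
    x = x0
    for _ in range(steps):
        x = x - (x * x - c) / (2 * x)
    return x

print(newton_sqrt(3.0, 1.0))   # within double precision of sqrt(3)
```

A handful of steps already exhausts double precision, which is why the method is so widely used.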

Is there a general approach to find a function to approximate a constant?

For the square root case, sure. Bahman Kalantari has found a “basic family” of iteration functions for finding $\sqrt c$. They take the form

$$x_{k+1}=x_k-(x_k^2-c)\frac{\Delta_{m-2}(x_k)}{\Delta_{m-1}(x_k)}$$

where

$$\Delta_m(x)=\begin{vmatrix}2x&1&&\\x^2-c&\ddots&\ddots&\\&\ddots&&1\\&&x^2-c&2x\end{vmatrix}$$

is an $m\times m$ tridiagonal Toeplitz determinant, and $\Delta_0(x)=1$. $m$ here is the order of convergence of the iteration formula. $m=2$ for instance is the usual Newton-Raphson iteration (quadratic convergence), and $m=3$ is the Halley iteration (cubic convergence).

(Allow me two asides. First, one can evaluate $\Delta_m$ through a three-term recursion relation. Second, through a special property of Toeplitz determinants, it turns out that $\Delta_m(x)$ admits a (not too practical in this case) closed form: $\Delta_m(x)=(x^2-c)^{m/2}U_m\left(\dfrac{x}{\sqrt{x^2-c}}\right)$, where $U_m(x)$ is the Chebyshev polynomial of the second kind.)
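The three-term recursion follows from expanding the tridiagonal determinant along its last row: $\Delta_m(x)=2x\,\Delta_{m-1}(x)-(x^2-c)\,\Delta_{m-2}(x)$. A minimal numeric sketch (function names are my own):

```python
def delta(m, x, c):
    """Evaluate the m x m tridiagonal Toeplitz determinant Delta_m via
    Delta_m = 2x*Delta_{m-1} - (x**2 - c)*Delta_{m-2},
    with Delta_0 = 1 and Delta_1 = 2x."""
    d_prev, d_curr = 1.0, 2.0 * x
    if m == 0:
        return d_prev
    for _ in range(m - 1):
        d_prev, d_curr = d_curr, 2.0 * x * d_curr - (x * x - c) * d_prev
    return d_curr

def basic_family_step(m, x, c):
    """One step of the order-m member of the basic family."""
    return x - (x * x - c) * delta(m - 2, x, c) / delta(m - 1, x, c)

# m = 2 reproduces Newton-Raphson; m = 3 is Halley's iteration:
x = 5.0
for _ in range(5):
    x = basic_family_step(3, x, 10.0)
print(x)   # close to sqrt(10) = 3.16227766...
```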

Here are the first few members of the “basic family”:

$$\begin{array}{c|l}m&\\\hline 2&x-\frac{x^2-c}{2x}\\3&x-(x^2-c)\frac{2x}{3x^2+c}\\4&x-(x^2-c)\frac{(3x^2+c)}{4x(x^2+c)}\\5&x-(x^2-c)\frac{4x(x^2+c)}{5x^4+10cx^2+c^2}\end{array}$$

See Kalantari’s paper for the general formula when the function $x^2-c$ is replaced by some other function $f(x)$ whose root is sought.

This looks all rather theoretical and unwieldy, you might say. I’ll put in a few *Mathematica* demonstrations for the first few members of the “basic family”.

Here’s the definition for $\Delta_n(x)$ (here called `d[n]`):

```
d[0] = 1;
d[1] = 2 x;
d[n_Integer] := Det[SparseArray[{Band[{2, 1}] -> x^2 - c,
    Band[{1, 1}] -> 2 x, Band[{1, 2}] -> 1}, {n, n}]]
```

Generate the first few members of the “basic family”:

```
Table[x - (x^2 - c) Simplify[d[n]/d[n + 1]], {n, 0, 4}]
{x - (-c + x^2)/(2*x), x - (2*x*(-c + x^2))/(c + 3*x^2),
x - ((-c + x^2)*(c + 3*x^2))/(4*c*x + 4*x^3),
x - (4*x*(-c + x^2)*(c + x^2))/(c^2 + 10*c*x^2 + 5*x^4),
x - ((-c + x^2)*(c^2 + 10*c*x^2 + 5*x^4))/(6*c^2*x + 20*c*x^3 + 6*x^5)}
```

Here’s a demonstration of the Newton-Raphson iteration for approximating $\sqrt{10}$:

```
With[{c = 10}, FixedPointList[Function[x, x - (-c + x^2)/(2*x)],
N[5, 100]] - Sqrt[c]];
N[#2/#1 & @@@ Partition[Log[%], 2, 1], 20]
{-1.7838671038748854863, 3.7925879490759233441, 2.4492570735611599421,
2.1829174906843319475, 2.0837943631820556414, 2.0402123955504591656,
2.01970990649706825, Indeterminate, Indeterminate}
```

Note that successive ratios of the logarithms of the error approach the value $2$, demonstrating the quadratic convergence. (The `Indeterminate`s in the last two slots come up because the error was effectively zero at that point.)

For comparison, here is the iteration with *quartic* (fourth-order) convergence:

```
With[{c = 10}, FixedPointList[Function[x, x - ((-c + x^2)*
(c + 3*x^2))/(4*c*x + 4*x^3)], N[5, 1000]] - Sqrt[10]];
N[#2/#1 & @@@ Partition[Log[%], 2, 1], 20]
{-6.7654728809088590549, 5.3465261050589774854, 4.2513830895422052676,
4.0591297194912047384, 4.0145670928443785744, Indeterminate,
Indeterminate}
```

Here the convergence was a bit too fast, but the approach to the value $4$ can still be seen.
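For readers without *Mathematica*, the same log-error-ratio diagnostic can be sketched in Python with the standard `decimal` module (a rough analogue of the experiment above, not a translation; names are my own):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 200   # enough digits to watch several error-doublings

def order_estimates(step, x0, c, n):
    """Ratios log(err_{k+1}) / log(err_k); they approach the order."""
    root = Decimal(c).sqrt()
    x, logs = Decimal(x0), []
    for _ in range(n):
        x = step(x, Decimal(c))
        err = abs(x - root)
        if err == 0:
            break   # the error hit the working precision
        logs.append(math.log(err))
    return [b / a for a, b in zip(logs, logs[1:])]

newton = lambda x, c: x - (x * x - c) / (2 * x)
print(order_estimates(newton, 5, 10, 8))   # values approaching 2
```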

As others have already said, your first equation is derived from Newton’s method. The idea behind it is very simple, and it answers your last question: to approximate the location of a root of a function, we approximate the function locally by its tangent line, find the tangent’s root instead, and take that value as a decent approximation of the desired root. It is a good exercise to use this idea to derive the formula for Newton’s method.

However, we can view the equation produced for square roots in a more elementary light. If $x>0$ is a decent approximation of $\sqrt{c}$, then $$ \frac{ x + c/x }{2} $$ will be even better. Why? If $x$ is an approximation of $\sqrt{c}$ that is slightly too small (big), then $c/x$ is an approximation of $\sqrt{c}$ that is slightly too big (small), and averaging them gets us somewhere closer in between. I still remember from all the calculations I used to do with this that $19601/13860$ is a good approximation of $\sqrt{2}$.
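The bracketing argument can be checked in a few lines (a sketch; `improve` is my own name):

```python
import math

def improve(x, c):
    """If x underestimates sqrt(c) then c/x overestimates it (and vice
    versa), since x * (c/x) = c; the average lands in between."""
    return (x + c / x) / 2

x, c = 1.4, 2.0               # 1.4 is slightly below sqrt(2)
assert x < math.sqrt(c) < c / x
print(abs(x - math.sqrt(c)), abs(improve(x, c) - math.sqrt(c)))
```

One averaging step drops the error from about $1.4\times10^{-2}$ to about $7.2\times10^{-5}$.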

There are many general approaches. One of the simplest and best is Newton’s Method, which you will find in any calculus text, or by searching the web.

By the way, I think your first formula should be $f(x)=(1/2)(x+(3/x))$, no?

Somewhat off-topic, but…

When I was a kid, we didn’t have calculators yet, so we learned to compute square roots on paper using the digit-at-a-time method. This cannot compete with Newton-Raphson, of course, but it is simple enough to explain to a child. Here’s a Python program that uses this method to give $\sqrt{3}$ to 10,000 places:

```
import sys

# Digit-at-a-time square root: i is the current remainder, j the digits
# found so far (as one integer), k the trial divisor, q the next digit.
i, j, k, q, d = 3, 1, 1, 1, 0
while True:
    sys.stdout.write(str(q))      # emit the next digit (no decimal point)
    d, i, k, q = d + 1, (i - k * q) * 100, 20 * j, 0
    for x in range(10):           # largest q with (k + q) * q <= i
        q += 1
        if (k + q) * q > i:
            q -= 1
            break
    j, k = j * 10 + q, k + q
    if d >= 10000:
        break
```

The link below shows, to some extent, the rationale behind the formula you have provided, as well as others; see in particular the “Babylonian method”:

Approximating square roots
