The problem is to use the fixed-point iteration method to find an approximation to $\sqrt{3}$. The equation from the book is $f(x) = \dfrac{1}{2}\left(x + \dfrac{3}{x}\right)$. It makes sense to me; however, I realize that other equations can be used to approximate $\sqrt{3}$ as well, for example $f(x) = 2x - \sqrt{3}$.

What I don’t understand is on what basis they came up with that equation. Is there a general approach to finding a function that approximates a constant? Thank you.

Your suggestion to iterate the function $f$ defined by $f(x)=2x-\sqrt3$ to compute $\sqrt3$ is not used in practice, for at least two reasons. First, it presupposes that one is already able to compute $\sqrt3$, in which case the approximation procedure is useless. Second, $\sqrt3$ is not an attracting fixed point of $f$ (indeed $f'(\sqrt3)=2>1$). This means that for every $x_0\ne\sqrt3$, the sequence $(x_n)$ defined recursively by $x_{n+1}=f(x_n)$ will not converge to $\sqrt3$; in the present case, $x_n\to+\infty$ or $x_n\to-\infty$ depending on whether $x_0>\sqrt3$ or $x_0<\sqrt3$.
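As a quick illustration (a hypothetical Python sketch with an arbitrary starting point $x_0=1.7$), one can watch the error double at every step:

```python
import math

# Iterating f(x) = 2x - sqrt(3): since f(x) - sqrt(3) = 2(x - sqrt(3)),
# the distance from sqrt(3) exactly doubles at every step, so the
# iterates flee from the fixed point instead of approaching it.
def f(x):
    return 2 * x - math.sqrt(3)

x = 1.7  # hypothetical start, slightly below sqrt(3) ≈ 1.7320508
for _ in range(20):
    x = f(x)
print(x)  # far below sqrt(3) by now, heading toward -infinity
```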

By contrast, using $f(x)=\frac12\left(x+\frac3x\right)$ and starting, for example, from a positive rational $x_0$ produces a sequence $(x_n)$ of rational numbers whose computation does not require knowing the value of $\sqrt3$ and which converges to $\sqrt3$. Additionally, $(x_n)$ converges to $\sqrt3$ extremely fast: after a while, one step of the algorithm roughly squares the error, so the rate of convergence is quadratic, in the sense that $x_{n+1}-\sqrt3$ is of the order of $(x_n-\sqrt3)^2$.

If you compute the first 20 terms for $x_0=1$, for example, you will observe this spectacular speed of convergence, which roughly says that each new iteration **doubles** the number of significant digits of the approximant.
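As a concrete sketch, the following Python snippet (using the standard `decimal` module at an assumed 60-digit working precision) runs the iteration from $x_0=1$ and prints the iterates, so the doubling of correct digits can be seen directly:

```python
from decimal import Decimal, getcontext

# Babylonian iteration for sqrt(3) from x0 = 1, at 60-digit precision,
# to watch the number of correct digits roughly double per step.
getcontext().prec = 60

x = Decimal(1)
for n in range(7):
    x = (x + Decimal(3) / x) / 2
    print(n + 1, x)
```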

The reason that the iteration $x\leftarrow \tfrac{1}{2}(x+3/x)$ converges so rapidly to $\sqrt{3}$ is because it is derived from Newton’s method. Newton’s method says that to find a root of a function $f(x)$, we pick a starting point and repeat the iteration

$$x \leftarrow x - \frac{f(x)}{f'(x)}$$

until we have achieved our desired accuracy. As long as certain conditions on $f$ are satisfied (its derivative doesn’t vanish near the root, for example) and our initial guess is close enough to the root, this will converge to the correct answer; moreover, it exhibits quadratic convergence, which means that you approximately double the number of correct decimal places after each step of the iteration.
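As a minimal sketch, here is that iteration in Python (the helper name `newton`, the tolerance, and the iteration cap are arbitrary choices, not from any particular library):

```python
import math

# Generic Newton iteration: the caller supplies f and its derivative;
# we stop once successive iterates agree to within `tol`.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Finding the positive root of x^2 - 3 starting from 1:
root = newton(lambda x: x * x - 3, lambda x: 2 * x, 1.0)
print(root)
```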

To approximate the square root of a number $c$, we need a function whose root is the square root of $c$. A simple choice is

$$f(x) = x^2 - c$$

The Newton iteration which finds the root of this equation is

$$x \leftarrow x - \frac{x^2 - c}{2x}$$

which you can rearrange to give

$$x \leftarrow \frac{1}{2} \left( x + \frac{c}{x}\right)$$

which, if you take $c=3$, is exactly the iteration you asked about in your question.
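If you want to convince yourself that the rearrangement is harmless, a quick numerical check in Python (plain floats, arbitrary sample points) confirms the two forms of the step agree:

```python
# Sanity check that the raw Newton step for f(x) = x^2 - c matches
# the averaged form (1/2)(x + c/x) at a few arbitrary points.
def newton_step(x, c):
    return x - (x * x - c) / (2 * x)

def averaged_step(x, c):
    return 0.5 * (x + c / x)

for x in (0.5, 1.0, 1.7, 10.0):
    assert abs(newton_step(x, 3) - averaged_step(x, 3)) < 1e-12
print("both forms of the step agree")
```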

Is there a general approach to find a function to approximate a constant?

For the square root case, sure. Bahman Kalantari has found a “basic family” of iteration functions for finding $\sqrt c$. They take the form

$$x_{k+1}=x_k-(x_k^2-c)\frac{\Delta_{m-2}(x_k)}{\Delta_{m-1}(x_k)}$$

where

$$\Delta_m(x)=\begin{vmatrix}2x&1&&\\x^2-c&\ddots&\ddots&\\&\ddots&&1\\&&x^2-c&2x\end{vmatrix}$$

is an $m\times m$ tridiagonal Toeplitz determinant, with $\Delta_0(x)=1$. Here $m$ is the order of convergence of the iteration formula: $m=2$, for instance, gives the usual Newton-Raphson iteration (quadratic convergence), and $m=3$ gives the Halley iteration (cubic convergence).

(Allow me two asides. First, one can evaluate $\Delta_m$ through a three-term recursion relation. Second, through a special property of Toeplitz determinants, it turns out that $\Delta_m(x)$ admits a (not too practical in this case) closed form: $\Delta_m(x)=(x^2-c)^{m/2}U_m\left(\dfrac{x}{\sqrt{x^2-c}}\right)$, where $U_m(x)$ is the Chebyshev polynomial of the second kind.)

Here are the first few members of the “basic family”:

$$\begin{array}{c|l}m&\\\hline 2&x-\frac{x^2-c}{2x}\\3&x-(x^2-c)\frac{2x}{3x^2+c}\\4&x-(x^2-c)\frac{(3x^2+c)}{4x(x^2+c)}\\5&x-(x^2-c)\frac{4x(x^2+c)}{5x^4+10cx^2+c^2}\end{array}$$

See Kalantari’s paper for the general formula when the function $x^2-c$ is replaced by some other function $f(x)$ whose root is sought.

This all looks rather theoretical and unwieldy, you might say, so I’ll put in a few *Mathematica* demonstrations for the first few members of the “basic family”.

Here’s the definition for $\Delta_n(x)$ (here called `d[n]`):

```
d[0] = 1;
d[1] = 2 x;
d[n_Integer] := Det[SparseArray[{Band[{2, 1}] -> x^2 - c,
Band[{1, 1}] -> 2x, Band[{1, 2}] -> 1}, {n, n}]]
```

Generate the first few members of the “basic family”:

```
Table[x - (x^2 - c) Simplify[d[n]/d[n + 1]], {n, 0, 4}]
{x - (-c + x^2)/(2*x), x - (2*x*(-c + x^2))/(c + 3*x^2),
x - ((-c + x^2)*(c + 3*x^2))/(4*c*x + 4*x^3),
x - (4*x*(-c + x^2)*(c + x^2))/(c^2 + 10*c*x^2 + 5*x^4),
x - ((-c + x^2)*(c^2 + 10*c*x^2 + 5*x^4))/(6*c^2*x + 20*c*x^3 + 6*x^5)}
```

Here’s a demonstration of the Newton-Raphson iteration for approximating $\sqrt{10}$:

```
errs = With[{c = 10}, FixedPointList[Function[x, x - (-c + x^2)/(2*x)],
    N[5, 100]] - Sqrt[c]];
N[#2/#1 & @@@ Partition[Log[errs], 2, 1], 20]
{-1.7838671038748854863, 3.7925879490759233441, 2.4492570735611599421,
2.1829174906843319475, 2.0837943631820556414, 2.0402123955504591656,
2.01970990649706825, Indeterminate, Indeterminate}
```

Note that successive ratios of the logarithms of the error approach the value $2$, demonstrating the quadratic convergence. (The `Indeterminate`s in the last two slots come up because the error was effectively zero at that point.)

For comparison, here is the iteration with *quartic* (fourth-order) convergence:

```
errs = With[{c = 10}, FixedPointList[Function[x, x - ((-c + x^2)*
    (c + 3*x^2))/(4*c*x + 4*x^3)], N[5, 1000]] - Sqrt[c]];
N[#2/#1 & @@@ Partition[Log[errs], 2, 1], 20]
{-6.7654728809088590549, 5.3465261050589774854, 4.2513830895422052676,
4.0591297194912047384, 4.0145670928443785744, Indeterminate,
Indeterminate}
```

Here the convergence was a bit too fast, but the approach to the value $4$ can still be seen.
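For completeness, the same determinants can be evaluated outside *Mathematica* through the three-term recursion mentioned above. Here is a Python sketch (the function names are my own, using exact `fractions` arithmetic) running the Halley member $m=3$ on $\sqrt{10}$:

```python
from fractions import Fraction

# Evaluate Delta_m(x) via the tridiagonal-determinant recursion
# Delta_m = 2x*Delta_{m-1} - (x^2 - c)*Delta_{m-2},
# with Delta_0 = 1 and Delta_1 = 2x.
def delta(m, x, c):
    d_prev, d = Fraction(1), 2 * x  # Delta_0, Delta_1
    if m == 0:
        return d_prev
    for _ in range(m - 1):
        d_prev, d = d, 2 * x * d - (x * x - c) * d_prev
    return d

# Order-m member of the basic family:
# x - (x^2 - c) * Delta_{m-2}(x) / Delta_{m-1}(x)
def basic_family_step(m, x, c):
    return x - (x * x - c) * delta(m - 2, x, c) / delta(m - 1, x, c)

# Halley's iteration (m = 3) for sqrt(10), in exact rational arithmetic:
x = Fraction(5)
for _ in range(4):
    x = basic_family_step(3, x, 10)
print(float(x))  # agrees with sqrt(10) to double precision
```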

As others have already said, your first equation is derived from Newton’s method. The idea behind it is very simple, and it answers your last question. Basically, to approximate the location of a root of a function, we approximate the function locally by its tangent line, find the tangent’s root instead, and take that value as a decent approximation of the desired root. It is a good exercise to use this idea to derive the formula for Newton’s method.

However, we can view the equation produced for square roots in a more elementary light. If $x>0$ is a decent approximation for $\sqrt{c}$, then $$ \frac{ x + c/x }{2} $$ will be even better. Why? If $x$ is an approximation for $\sqrt{c}$ that is slightly too small (big), then $c/x$ is an approximation for $\sqrt{c}$ that is slightly too big (small), and averaging them gets us somewhere closer in between. I still remember from all the calculations I used to do with this that $19601/13860$ is a good approximation for $\sqrt{2}$.
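In Python terms (a toy illustration with made-up numbers):

```python
import math

# An underestimate x of sqrt(a) pairs with the overestimate a/x,
# and their average lands much closer in between.
a = 2.0
x = 1.4                    # too small: 1.4 < sqrt(2) ≈ 1.41421356
print(a / x)               # ≈ 1.42857, too big
better = (x + a / x) / 2   # ≈ 1.41429, far closer to sqrt(2)
print(better)
```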

There are many general approaches. One of the simplest and best is Newton’s Method, which you will find in any calculus text, or by searching the web.

By the way, I think your first formula should be $f(x)=(1/2)(x+(3/x))$, no?

Somewhat off-topic, but…

When I was a kid, we didn’t have calculators yet, so we learned to compute square roots on paper using the digit-at-a-time method. This cannot compete with Newton-Raphson, of course, but it is simple enough to explain to a child. Here’s a Python program that uses this method to print $\sqrt{3}$ to 10,000 places:

```
import sys

# Digit-at-a-time square root of 3:
# i = current remainder, j = digits found so far (as one integer),
# k = working divisor, q = next digit, d = count of digits printed
i, j, k, q, d = 3, 1, 1, 1, 0
while True:
    sys.stdout.write(str(q))
    d, i, k, q = d + 1, (i - k * q) * 100, 20 * j, 0
    for x in range(10):
        q += 1
        if (k + q) * q > i:
            q -= 1
            break
    j, k = j * 10 + q, k + q
    if d >= 10000:
        break
```

The link below shows, to some extent, the rationale behind the formula you were given, as well as others; see in particular the “Babylonian method”:

Approximating square roots
