
Prove $\sin(\pi/2)=1$ using the Taylor series definition of $\sin x$,

$$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$

It seems rather messy to substitute $\pi/2$ for $x$. Doing so gives

$$\sin(\pi/2)=\sum_{n=0}^{\infty} \frac{(-1)^n(\pi/2)^{2n+1}}{(2n+1)!}.$$

I’m not too sure where to go from here. Any help would be appreciated! Thanks!
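Not a proof, but as a numerical sanity check the partial sums of the series at $x = \pi/2$ do settle down to $1$ very quickly; a short Python sketch (the helper name `sin_series` is my own):

```python
import math

def sin_series(x, terms):
    """Partial sum of sin x = x - x^3/3! + x^5/5! - ... (first `terms` terms)."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

# Successive partial sums at x = pi/2: 1.5708, 0.9248, 1.0045, 0.9998, ...
partials = [sin_series(math.pi / 2, k) for k in range(1, 5)]
```

Of course this only illustrates convergence; it says nothing about *why* the limit is $1$, which is what the answers below address.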


This is not really an answer, or at least not a simple one. In his book *Differential and Integral Calculus*, Edmund Landau defines $\sin x$ by the standard power series mentioned in the post. Then, in the famous Landau style (very, very carefully, in full detail), he defines the cosine function and establishes the familiar basic properties of sine and cosine. He shows that $\cos x=0$ has a least positive solution, and calls that number $\pi/2$. After all that, showing that $\sin(\pi/2)=1$ is very easy!

There is an English translation of Landau's book (Chelsea). Enough of the book is viewable on Google Books to allow reconstructing everything. I have not seen a detailed development of sine and cosine through power series anywhere else, though there must be some. However, Landau's work is of unparalleled clarity.

This is an elaboration on Eelvex’s answer.

The crux of the issue is that $\sin x$ (as defined by the power series above) is the unique solution of $y'' = -y$ satisfying $y(0) = 0, y'(0) = 1$. Similarly, $\cos x$ (as defined by the derivative of the power series of $\sin x$) is the unique solution of $y'' = -y$ satisfying $y(0) = 1, y'(0) = 0$. Moreover, as Eelvex hints at, for any $c$ the function $\sin (x + c)$ is also a solution to $y'' = -y$, and its initial conditions are $y(0) = \sin c, y'(0) = \cos c$, hence

$$\sin (x + c) = \sin c \cos x + \cos c \sin x.$$

Furthermore, we compute that the derivative of $\cos^2 x + \sin^2 x$ is identically zero, and it is equal to $1$ at $x = 0$, hence is identically equal to $1$ everywhere. It follows that $\sin x$ is bounded. Since $\cos x$ is positive for sufficiently small $x$ by inspection, we have that $\sin x$ is at least initially increasing, and by boundedness it attains a local maximum at some positive real $c_0$. This gives $\cos c_0 = 0$, hence $\sin c_0 = 1$ and

$$\sin (x + c_0) = \cos x.$$

The same argument applies to $\cos x$, giving $\cos (x + c_0) = \sin (x + 2c_0) = -\sin x$, hence $\sin (x + 4c_0) = \sin x$, hence $\sin x$ is a periodic function with period $4c_0$.

The remaining mystery is why $4c_0$ is equal to the circumference of the unit circle. Recall that for a parameterized curve $(x(t), y(t))$ with $0 \le t \le t_0$, the arc-length is

$$\int_0^{t_0} \sqrt{x'(t)^2 + y'(t)^2} \, dt.$$

Letting $x(t) = \cos t, y(t) = \sin t$ we have $x'(t)^2 + y'(t)^2 = 1$. Moreover $(x(t), y(t))$ parameterizes the unit circle, and by looking at what quadrant $(\cos t, \sin t)$ is in for $t$ slightly larger than $0, c_0, 2c_0, 3c_0, 4c_0$ we can conclude that it parameterizes the unit circle exactly once precisely when $t_0 = 4c_0$, from which the identity $4c_0 = 2 \pi$ follows.

Note that direct manipulation of the series was not really used here, although the fact that the series solves $y'' = -y$ was used extensively. This is really the crucial property of the series, so it’s a more useful thing to work from anyway. The problem with directly manipulating the series is that at some point you have to get rid of the $\pi$. Doing this using geometry is much easier than using series manipulation (which seems unnecessarily hard to me; it’s not even clear to me what definition of $\pi$ you could use in this situation that wouldn’t be circular), and is more revealing anyway.
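To make the ODE viewpoint concrete, one can integrate $y'' = -y$ with $y(0) = 0$, $y'(0) = 1$ numerically and watch the first zero of $y'$ land at $c_0 \approx \pi/2$ with $y(c_0) \approx 1$. A rough sketch (the helper name `first_max` and the RK4 step size are my own choices):

```python
import math

def first_max(h=1e-4):
    """Integrate y'' = -y, y(0)=0, y'(0)=1 with RK4 until y' first
    becomes non-positive; return (t, y) at that moment."""
    y, v, t = 0.0, 1.0, 0.0
    while v > 0:
        # One RK4 step for the first-order system (y, v)' = (v, -y).
        k1y, k1v = v, -y
        k2y, k2v = v + h / 2 * k1v, -(y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -(y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return t, y

c0, y_at_c0 = first_max()  # c0 ~ 1.5708 ~ pi/2, y_at_c0 ~ 1
```

This is only an illustration of the uniqueness argument, not a substitute for it: the analytic proof above is what guarantees the numbers come out this way.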

Using that the given series is alternating for small $x$, you have $\sin(0.7)<0.7<{1\over\sqrt{2}}$ and $\sin(0.8)>0.8-0.512/6>{1\over\sqrt{2}}$. (By estimating the fractions you can verify this without a pocket calculator.) It follows that there is an $\alpha\in\ ]0.7,0.8[\ $ with $\sin\alpha=\cos\alpha={1\over\sqrt{2}}$, whence $\sin(2\alpha)=1$. Now give $2\alpha$ the name ${\pi\over2}$.
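The two hand estimates can also be checked mechanically, and bisecting on $\sin x - \cos x$ over $(0.7, 0.8)$ recovers the crossing point $\alpha$ with $2\alpha = \pi/2$; a small verification sketch:

```python
import math

# The hand estimates from the alternating series:
inv_sqrt2 = 1 / math.sqrt(2)
assert math.sin(0.7) < 0.7 < inv_sqrt2
assert math.sin(0.8) > 0.8 - 0.512 / 6 > inv_sqrt2  # 0.512 = 0.8**3

# Bisect for the crossing point sin(alpha) = cos(alpha) in (0.7, 0.8).
lo, hi = 0.7, 0.8
for _ in range(60):
    mid = (lo + hi) / 2
    if math.sin(mid) < math.cos(mid):
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2  # alpha ~ 0.78540, so 2*alpha ~ pi/2
```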

[ A sketch: ]

Let $f(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$. You can easily show that

$$\begin{eqnarray}

f(x + c) & = & f(c) + f'(c)x + f''(c)\frac{x^2}{2!} + \cdots \quad(1) \\

f''(x) & = & -f(x) \quad(2)\\

f'(x) & = & 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots \quad(3)

\end{eqnarray}$$

Then it is straightforward to prove that for any $c$: $$f(x+c) = f'(c)f(x) + f(c)f'(x) \quad(4)$$

We can also prove that there is a $b$ such that $f'(b) = 0$. Then $(1),(4) \Rightarrow f(b) = 1$ and so $$f(x+b) = f'(x) \quad(5)$$

Going a little further, $(2),(5) \Rightarrow f(x+4b) = f(x)$, so $b$ is a quarter of the period.

But $f(x)^2 + f'(x)^2 = 1$ (all terms of the expansion except the constant term of $f'(x)^2$ cancel out), so the period of $f$ is $2\pi$, and $$b = \frac{\pi}{2}$$
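The identities in this sketch can be spot-checked with truncated series; here is a minimal check of the Pythagorean identity, of $(5)$, and of the period $4b$, taking $b = \pi/2$ as identified above (the function names `f` and `fp` are mine):

```python
import math

def f(x, terms=25):
    """Truncated series x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

def fp(x, terms=25):
    """Termwise derivative: 1 - x^2/2! + x^4/4! - ..."""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n)
               for n in range(terms))

b = math.pi / 2   # the quarter period identified in the sketch
x = 0.37          # an arbitrary test point
assert abs(f(x)**2 + fp(x)**2 - 1) < 1e-12   # Pythagorean identity
assert abs(f(x + b) - fp(x)) < 1e-12         # identity (5)
assert abs(f(x + 4 * b) - f(x)) < 1e-9       # period 4b
```

Naturally this only confirms the identities at sample points; the series manipulations above are what prove them for all $x$.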

Use the identities for $\cos(a+b)$ and $\sin(a+b)$, which can be derived (see Landau, as mentioned above). Let $a=\pi/2$ and $b=-x$.

Then apply $\cos^2 x + \sin^2 x = 1$ (which can be proved via derivatives) twice to get $\sin(\pi/2)=1$.

This is off-topic, but may be OK as a proof.

[edit: This is a poor answer; it takes for granted that we know $\sin(\pi/2) = 1$.]

Instead of considering the series about 0, consider it about $ \dfrac{\pi}{2}$. Thus we get the following:

$$\sin(x) = 1 - \dfrac{1}{2} \left(x - \dfrac{\pi}{2}\right)^2 + \cdots$$

But now when you plug in $ \dfrac{\pi}{2}$…

How’s that?
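The circularity flagged in the edit note is easy to see if the expansion about $\pi/2$ is written out in full: it is the cosine series in $u = x - \pi/2$, and at $x = \pi/2$ every term but the constant vanishes. A sketch (this assumes the identity $\sin x = \cos(x - \pi/2)$, which is exactly what the expansion encodes, so nothing is proved):

```python
import math

def sin_about_half_pi(x, terms=20):
    """Taylor series of sin about pi/2, i.e. the cosine series in u = x - pi/2."""
    u = x - math.pi / 2
    return sum((-1)**n * u**(2*n) / math.factorial(2*n) for n in range(terms))

# At x = pi/2 only the constant term survives, giving exactly 1.
value = sin_about_half_pi(math.pi / 2)
```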
