
I believe my trouble is that the identity, $e^{i \pi} = -1$, comes down to the definition of the exponentiation of $i$, which seems rather obscure to me.

This is my current understanding of exponentiation by $i$: Express a number as $r e^{i \theta}$. Then $(r e^{i \theta})^i = r^i e^{- \theta}$. Furthermore, $r$, a real number, can be expressed as $e^{\theta_2}$. Thus the resulting number is $e^{- \theta} e^{i \theta_2}$. While I am satisfied with the verity of this statement (I am aware of proofs of $e^{i \theta} = \cos \theta + i \sin \theta$), I just don’t see *why* any of it is true.
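For what it’s worth, the manipulation described here can be checked numerically. A quick sketch using Python’s `cmath`, which computes $z^i$ via the principal branch of the logarithm, so $\theta$ should be taken in $(-\pi, \pi]$ (the values of $r$ and $\theta$ below are arbitrary choices of mine):

```python
import cmath
import math

# Take r > 0 and theta in (-pi, pi], so the principal logarithm
# satisfies log(r e^{i theta}) = ln r + i theta.
r, theta = 2.0, 0.75
z = r * cmath.exp(1j * theta)

# Direct exponentiation by i (principal branch): z**i = exp(i * log z)
lhs = z ** 1j

# The manipulation from the question: (r e^{i theta})^i = r^i * e^{-theta}
rhs = (r ** 1j) * math.exp(-theta)

assert abs(lhs - rhs) < 1e-12
```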

**I do not know of any intuitive explanation of why taking a number to the power of $\sqrt{-1}$ would produce such a result.**

Is there any, or do I just have to accept the fact that it’s true?

Here is another perspective, which perhaps shifts where the intrigue should be directed. An important question to consider: How do you define $\pi$? If you take the definition of $\pi$ to be the usual one we first encounter involving circles, then it is fairly remarkable that this geometric constant should have anything to do with the complex function $e^z$, defined in terms of a power series.

To see how remarkable this is, let’s go backwards. Let’s define $\pi$ completely in the context of complex analysis. So for now let’s forget what we know about $\pi$ as it relates to circles and trigonometric functions.

As others have noted, once the radius of convergence of a power series has been appropriately defined, one can show that the series

$$\sum_{n=0}^{\infty}\frac{z^n}{n!}$$

converges everywhere, and thus defines an analytic function on $\mathbb{C}$ which we call $\exp$. Something helpful to notice right away is that $\overline{\exp(z)}=\exp(\overline{z})$. Also, power series manipulation will show that $\exp(y+z)=\exp(y)\exp(z)$, so that for $a,b\in\mathbb{R}$

$$|\exp(a+bi)|=\sqrt{\exp(a+bi)\exp(a-bi)}=\sqrt{\exp(2a)}=\exp(a)$$

This shows us that if $|\exp(z)|=1$, then $z$ must be purely imaginary. Now, make the following definitions:
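Both facts used in that modulus computation are easy to confirm numerically; a small Python check with the standard library (the values of $a$ and $b$ are arbitrary):

```python
import cmath
import math

a, b = 0.3, 2.1
z = complex(a, b)

# Conjugation passes through the series term by term: conj(exp(z)) = exp(conj(z))
assert abs(cmath.exp(z).conjugate() - cmath.exp(z.conjugate())) < 1e-12

# Hence |exp(a+bi)| = sqrt(exp(a+bi) exp(a-bi)) = sqrt(exp(2a)) = exp(a)
assert abs(abs(cmath.exp(z)) - math.exp(a)) < 1e-12
```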

$$\cos(z)=\frac{e^{iz}+e^{-iz}}{2}\qquad\sin(z)=\frac{e^{iz}-e^{-iz}}{2i}$$

Notice it follows that $\exp(iz)=\cos(z)+i\sin(z)$ and $\cos^2(z)+\sin^2(z)=1$. At this point, with such familiar formulas, you may be tempted to just “plug in $\pi$,” but remember, we don’t know what $\pi$ is yet. Even if we did, we certainly don’t know the values of $\cos(\pi)$ or $\sin(\pi)$ because these functions are defined in terms of infinite series.

The next part is the trickiest. Here we show that $\exp(iz)$ is actually periodic, that is, there is a constant $c$ such that $\exp(i(z+c))=\exp(iz)$ for all $z\in\mathbb{C}$. Notice if such a $c$ existed, we may plug in $z=0$ to obtain $\exp(ic)=\exp(0)=1$, so that $c$ must be real (based on our computation of $|\exp(a+bi)|$ above). To find $c$, it can be shown (using the intermediate value theorem) that there is some smallest positive real number $d$ such that $\cos(d)=0$. From this, and the formulas $\exp(iz)=\cos(z)+i\sin(z)$ and $\cos^2(z)+\sin^2(z)=1$ it follows that $\sin(d)=\pm 1$, that $\exp(i\cdot d)=\pm i$, and that $\exp(i\cdot4d)=1$. Hence, our desired period is $c=4d$. Notice along the way we proved that $\exp(i\cdot2d)=-1$.
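Here is a numerical sketch of this construction (Python, standard library only): define $\exp$ by a truncated series, locate the smallest positive zero $d$ of the series-defined cosine by bisection (assuming, as the intermediate value theorem argument indicates, a single sign change on $(0,2)$), and confirm that $2d$ is the familiar $\pi$ and that $\exp(i\cdot 2d)=-1$ and $\exp(i\cdot 4d)=1$:

```python
import math

def exp_series(z, terms=60):
    """Truncated power series sum_{n<terms} z^n / n! -- our only definition of exp."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)
    return total

def cos_series(x):
    return ((exp_series(1j * x) + exp_series(-1j * x)) / 2).real

# Bisection for the smallest positive zero d of cos, using only the
# sign change of the series-defined cosine on (0, 2).
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if cos_series(mid) > 0:
        lo = mid
    else:
        hi = mid
d = (lo + hi) / 2

# 2d recovers the familiar pi, and exp(2di) = -1, exp(4di) = 1.
assert abs(2 * d - math.pi) < 1e-9
assert abs(exp_series(2j * d) + 1) < 1e-9
assert abs(exp_series(4j * d) - 1) < 1e-9
```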

Now, make the following *definition*:

$$\pi:=2d$$

Well, we’re done. We’ve shown the formula $\exp(i\pi)=-1$, and you’re right, from this perspective it is rather unremarkable, because all we’ve done is define $\pi$ precisely so that the formula works.

Now, to see why this formula is indeed remarkable, spend a minute thinking about how *the same constant we just defined (using power series, complex analysis, and calculus) also satisfies the following*:

The ratio of the length of the circumference of any circle to the length of its diameter is $\pi$.

Remarkable if you ask me.

The exponential function on the real line $e^{x}$ can be defined by the power series

$$e^{x} = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

It extends as an analytic function to the complex plane in **precisely one way**:

$$e^{z} = \sum_{n=0}^{\infty} \frac{z^n}{n!}$$

This is the definition of the complex exponential, and all of the properties you are observing emerge from this definition.
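One way to see the definition at work is to watch the partial sums; a short Python sketch evaluating the series at $z = i\pi$ (the 30-term cutoff is an arbitrary choice of mine):

```python
import math

# Partial sums of sum_{n<=N} (i*pi)^n / n!, watching them approach -1.
z = complex(0, math.pi)
partial, term = 0j, 1 + 0j
history = []
for n in range(30):
    partial += term
    term *= z / (n + 1)
    history.append(partial)

# Convergence is fast: 30 terms already land within 1e-12 of -1.
assert abs(history[-1] + 1) < 1e-12
```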

First of all, you must understand that mathematicians like to abuse notation. So, in truth, $e^z$ with $z \in \mathbb{C}$ is just an analogue (or the generalization) of its real counterpart. In complex analysis one is often interested in the following complex power series

$$f(z) = \sum_{n = 0}^\infty \frac{z^n}{n!}, \quad z \in \mathbb{C}.$$

Because $f(z)$ looks just like

$$e^x = \sum_{n = 0}^\infty \frac{x^n}{n!}, \quad x \in \mathbb{R},$$

we write $f(z) = e^z$. So in $e^{i\pi}$ nothing is really being raised to the power $i\pi$; the expression means $f(i\pi)$. This is analogous to how some people write

$$1 + 2 + 3 + \cdots = -\frac{1}{12}.$$

They do not mean that the left-hand side actually equals the right-hand side, they mean

$$\zeta(-1) = -\frac{1}{12},$$

where

$$\zeta(s) = 2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right) \Gamma(1-s) \zeta(1-s), \quad s \in \mathbb{C} \setminus \{1\},$$

which is the analytic continuation of

$$\zeta(s) = \sum_{n = 1}^\infty \frac{1}{n^s}, \quad \Re[s] > 1.$$

Because we use the same Greek letter for both, some people abuse notation and write

$$1 + 2 + 3 + \cdots = -\frac{1}{12},$$

which is absurd as the series above diverges when $s = -1$. However, if $\Re[s] > 1$, then

$$\sum_{n = 1}^\infty \frac{1}{n^s} = 2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right) \Gamma(1-s) \zeta(1-s).$$

Overall, just because we use the same symbol to denote two possibly distinct objects (e.g., functions), it does not necessarily mean they are identical.

**Edit**: Recall that in $\mathbb{C}$ any complex number $z$ can be written as

$$z = \cos\theta + i\sin\theta,$$

and, in particular, $z = 1$ at $\theta = 0$. Certainly,

$$\frac{dz}{d\theta} = -\sin\theta + i\cos\theta = i(\cos\theta + i\sin\theta) = i \cdot z. \tag{1}$$

This differential equation looks very similar to $y' = y$, so using the initial condition $z(0) = 1$, we deduce, without actually separating the variables, that $z = e^{i\theta}$. Notice that if we were to separate the variables we would have to introduce the complex logarithm. To avoid this we use the fact that the unique solution to the following initial value problem

$$\frac{dy}{dx} = 1 \cdot y, \quad y(0) = 1, \tag{2}$$

is $e^x$. Again, because $(1)$ and $(2)$ look alike, we traditionally denote the unique solution of $(1)$ by $e^z$. Observe that I deliberately wrote the right-hand side of $(2)$ as $1 \cdot y$ because the imaginary unit $i$ is the complex analogue of $1$, hence the name.
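The solution of $(1)$ can also be sketched numerically by “compounding” small rotations: a forward-Euler step multiplies by $1 + ih$, a tiny rotation, and taking $n$ such steps up to $\theta = \pi$ lands near $-1$. (A Python sketch; the step count and tolerance are arbitrary choices of mine.)

```python
import math

# Forward-Euler for dz/dtheta = i*z, z(0) = 1: each step multiplies by
# (1 + i*h), compounding a tiny rotation.  n steps of size h = pi/n give
# (1 + i*pi/n)**n, which tends to exp(i*pi) = -1 as n grows.
n = 1_000_000
h = math.pi / n
z = complex(1, 0)
step = complex(1, h)
for _ in range(n):
    z *= step

assert abs(z + 1) < 1e-5
```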

A friend told me there’s an explanation of exponentiation by $i$ in terms of “compounding” rotations in $\mathbb{C}$, maybe like the way $e$ is tied to compound interest for real numbers. I haven’t got around to reading it through yet, but maybe it’s useful?

He said it seems pretty cool. I was more on the side of “it’s been proved, I’ve followed the proof, so I’ve accepted it” versus the intuitive understanding. That’s not always the best attitude, I admit, and I would like to read it at least.

This is the link I think he mentioned.

http://betterexplained.com/articles/intuitive-understanding-of-eulers-formula/

Rudin (Complex Analysis) defines $e^z$ in terms of an infinite series expansion:

$e^z= \sum\limits_{n=0}^{\infty}\dfrac{z^n}{n!}$.

Now, for $z=i\theta$ you can easily check that the series above gives $e^{i\theta}=\cos(\theta)+i\sin(\theta)$. Hence $e^{i\pi}=-1$.
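To “easily check” this in practice, here is a quick numerical sketch comparing the truncated series against the familiar cosine and sine (the term count and sample angles are arbitrary choices of mine):

```python
import math

def exp_i(theta, terms=40):
    """Partial sum of sum_{n<terms} (i*theta)^n / n!"""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= 1j * theta / (n + 1)
    return total

# Real and imaginary parts of the series reproduce cosine and sine...
for theta in (0.0, 0.5, 1.0, math.pi / 2, math.pi):
    w = exp_i(theta)
    assert abs(w.real - math.cos(theta)) < 1e-12
    assert abs(w.imag - math.sin(theta)) < 1e-12

# ...and in particular e^{i*pi} = -1.
assert abs(exp_i(math.pi) + 1) < 1e-12
```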

This way of looking at the problem does seem to make it rather trivial, but you have to admit that when Euler found this relation, it struck him as rather unusual and beautiful, mainly because many of the above arguments were not totally rigorous then (I could be wrong here).

Also, even though the proof is easy, I still find this formula beautiful, as it connects $e,\pi,i$ and $-1$ in a rather neat way. (Of course I must admit that I find calling it the most beautiful formula in math a bit of hype.)

In “Elementary Theory of Analytic Functions of One or Several Complex Variables” by Henri Cartan, $\pi$ is *by definition* half the smallest positive real number $c$ such that $e^{ic} = 1$. Of course, under that definition it follows immediately that $e^{\pi i} = -1$; the identity is simply trivial.

One might suspect that it’s hard to prove that such a number $\pi$ exists at all. The truth is, it’s not that hard. After all, as $t$ travels along the real line, $e^{it}$ travels along the unit circle with constant speed (which is easy to derive directly from the definition of the exponential). So $e^{it}$ is bound to come back to the same point again and again. This intuition can be formalized with relatively small effort.

Another note I’d like to make is that when you take the route in Cartan’s book, the formula $e^{\pi i} = -1$ doesn’t involve the number $e$ at all, even though the letter “e” appears in it. The formula says, by definition, that $\exp(\pi i) = -1$, where $\exp(z) = \sum_{n=0}^\infty \frac{z^n}{n!}$. So it is about the exponential function as a whole, not about the number $e$ per se.

> The definition of the exponentiation of $i$ seems rather obscure to me. I do not know of any intuitive explanation of why taking a number to the power of $\sqrt{-1}$ would produce such a result.

If $\pi$ is the constant of the circle, defined by $x^2+y^2=r^2$, then *e* is the constant of the hyperbola, since it is the base of the natural logarithm, which is the integral of $y=\dfrac1x$, whose graph, when rotated by $45^\circ$, gives $x^2-y^2=r^2$. The circle represents a constant sum of squares, the hyperbola a constant difference of squares. If one were to replace $y$ by $i\cdot y$ in each of the two equations, one would get the other. So we should logically expect a mathematical relation tying together these three constants: $\pi$, *e*, and *i*. Indeed, Euler’s identity fits the bill, and the same goes for Euler’s formula relating hyperbolic and trigonometric functions by way of the imaginary unit.
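That substitution is easy to verify numerically; in Python, `cmath` evaluates the trigonometric functions at imaginary arguments (the sample value $x=0.8$ is an arbitrary choice of mine):

```python
import cmath
import math

x = 0.8

# cos(ix) = cosh(x) and sin(ix) = i*sinh(x): an imaginary argument swaps the
# circle's constant sum of squares for the hyperbola's constant difference.
assert abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-12
assert abs(cmath.sin(1j * x) - 1j * math.sinh(x)) < 1e-12

# Circle identity vs. hyperbola identity at the same x.
assert abs(math.cos(x) ** 2 + math.sin(x) ** 2 - 1) < 1e-12
assert abs(math.cosh(x) ** 2 - math.sinh(x) ** 2 - 1) < 1e-12
```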

You cannot *prove* what the definition of $e^z$ (for complex $z$) is. However if you know what properties you want $e^z$ to have, you can possibly find a definition with those properties, and possibly prove that the definition is unique.

**How do we want to define $e^z$, for $z \in \mathbb{C}$?**

Here’s the simplest defining property I’ve ever seen for it:

$$\frac {d\,e^{ix}}{dx} = ie^{ix}$$

Imagine a function $f(x)$ has the property that $f(0) = 1$ and $f'(x) = if(x)$. You might know from calculus that being given a starting point and a derivative is enough to determine a function.

Multiplying a complex number by $i$ is the same as rotating it by a quarter turn: $(a + bi)i = -b + ai$, and the vector $[-b,a]$ is the counterclockwise perpendicular to the vector $[a,b]$. So the relation $f'(x) = if(x)$ says that the velocity of $f(x)$ is always perpendicular to its position. From physics, you might know that this represents circular motion:

Above, the red vectors are $f(x)$, the position, and the green vectors are $f'(x)$, the velocity. Since multiplying by $i$ doesn’t change the magnitude of the vector, the speed of rotation is exactly $1$ and the radius is $1$, so the period is $2\pi$. We can derive all of this from the derivative property above.
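The quarter-turn and perpendicularity claims are one-liners to verify; a small Python sketch (the sample vector $[3,4]$ is an arbitrary choice of mine):

```python
# Multiplying by i is a quarter turn: (a + bi) * i = -b + ai.
a, b = 3.0, 4.0
z = complex(a, b)
w = z * 1j

assert w == complex(-b, a)

# The turn preserves the magnitude...
assert abs(abs(w) - abs(z)) < 1e-12

# ...and [a, b] . [-b, a] = 0: the velocity i*f(x) is perpendicular
# to the position f(x).
assert a * (-b) + b * a == 0
```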

As other answers have indicated, you can use the series expansion instead. But for intuition I prefer this rate-of-change approach.
