In physics, complex numbers were once used only as a device to remember or simplify formulas and computations. But with the birth of quantum physics, it turned out that something as real as “matter” itself has to be described by complex wave functions, and there is no way to describe it using only real numbers.
In mathematics, real analysis offers examples like the function $f(x)=\frac{1}{1+x^2}$: why does this function lack the “smoothness” of the exponential function, polynomials, sine and cosine? Why does its Taylor series have radius of convergence equal to $1$ even though the function is infinitely differentiable on all of $\mathbb{R}$? You can’t see the reality of this function until you view it through the field of complex analysis, where you observe that $f$ is not that smooth because it has two singularities, at $\pm i$, in the complex plane.
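The radius of convergence can be watched numerically. This is a small sketch of my own (the function names are mine) comparing partial sums of the Maclaurin series $\sum_{n\ge0}(-1)^n x^{2n}$ inside and outside $|x|<1$:

```python
def f(x):
    return 1 / (1 + x**2)

def taylor_partial(x, N):
    # Partial sum of the Maclaurin series 1/(1+x^2) = sum_{n>=0} (-1)^n x^(2n),
    # which converges only for |x| < 1, the distance to the poles at +-i.
    return sum((-1)**n * x**(2*n) for n in range(N))

print(taylor_partial(0.5, 40), f(0.5))   # inside the radius: matches 0.8
print(taylor_partial(1.5, 10), taylor_partial(1.5, 20))  # outside: partial sums blow up
```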
I am just asking for examples like this, where the narrow “window” of real analysis hides the “reality” that only becomes visible from the window of complex analysis. I am just starting to self-learn complex analysis, and I find it more natural than real analysis: it tells you the “truth” behind a lot of things.
Not all real polynomials of degree $n$ have $n$ real roots (counting multiplicity), because some of the roots are complex. Over the reals a matrix can have no eigenvalues, e.g. a 2-dimensional rotation matrix, but every real matrix has a complex eigenvalue. These are manifestations of the fact that $\mathbb{C}$, unlike $\mathbb{R}$, is algebraically closed, i.e. every nonconstant polynomial equation has a solution. In the real domain $\sqrt{x}$ and $\ln{x}$ are defined only for nonnegative (resp. positive) $x$, because for negative $x$ the value is a complex number, and it is not unique.
In the real domain exponentials and trigonometric functions are completely different functions, but in the complex domain they are related by the simple Euler formulas. The same goes for logarithms and inverse trigonometric functions. This is the main reason why identities for hyperbolic functions are almost the same as the familiar trigonometric identities. Many definite integrals of functions that do not have elementary antiderivatives can be computed in elementary terms by extending the path of integration into the complex plane and using residues, e.g. $\int_0^\infty\frac{\ln x}{(1+x^2)^2}\,dx=-\frac{\pi}{4}$. More generally, integral and series representations of many real functions can be converted into each other because these functions extend into the complex plane, where contour integrals reduce to sums over residues; the Riemann zeta function is a typical beneficiary. These manifest another advantage of complex analysis over the real one. Many commonly used real functions extend to holomorphic functions in the complex plane, and for holomorphic functions the tools of calculus are much stronger than for merely smooth ones, which is how real analysis mostly treats them.
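The residue evaluation quoted above is easy to sanity-check numerically; here is a quick sketch with SciPy (the quadrature call is my choice, not part of the argument):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of  ∫_0^∞ ln(x)/(1+x^2)^2 dx = -π/4,
# the integral evaluated in the text by residues.
value, err = quad(lambda x: np.log(x) / (1 + x**2)**2, 0, np.inf)
print(value, -np.pi / 4)
```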
In the real domain ellipses and hyperbolas are different types of curves, but in the complex plane they are related by a rotation of axes, i.e. they are the ‘same’ (more precisely, we are looking at two different projections of the same complex curve). In a similar way spherical and hyperbolic geometries are related by a complex rotation. The Schrödinger equation of quantum mechanics and the heat equation of classical physics are also related by a complex rotation called Wick rotation. Path integral interpretation of quantum mechanical solutions can be made precise using this relation and the Feynman–Kac formula.
Heaviside developed operational calculus for solving ordinary differential equations with constant coefficients by treating time derivative as a ‘variable’ $p$ and writing solutions in terms of symbolic ‘functions’ of it. It turned out that the magic worked because $p$ is in fact a complex variable, and Heaviside’s symbolic solutions can be converted into real ones by taking the inverse Laplace transform, which is a contour integral in the complex plane.
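As a toy illustration of Heaviside’s method (my own example, not from the text): for $y'+y=1$, $y(0)=0$, treating $d/dt$ as $p$ gives the symbolic solution $Y(p)=\frac{1}{p(p+1)}$, and the inverse Laplace transform converts it back into a real function of $t$:

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)

# Heaviside's operational calculus for y' + y = 1, y(0) = 0:
# replacing d/dt by the 'variable' p gives (p + 1) Y = 1/p.
Y = 1 / (p * (p + 1))

# The inverse Laplace transform (a contour integral in the complex p-plane)
# recovers the real solution y(t) = 1 - exp(-t).
y = sp.inverse_laplace_transform(Y, p, t)
print(sp.simplify(y))
```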
Harmonic functions, solutions to the Laplace equation, have many nice analytic properties: they are sums of convergent power series, attain their extrema on the boundary of their domains, equal at any point the average of their values on any circle centered at it, etc. The underlying reason is that harmonic functions are exactly the real and imaginary parts of holomorphic functions. If the potential of a vector field is a harmonic function $\varphi$, then its flow lines are level curves of another harmonic function $\psi$, exactly the one that makes $\varphi+i\psi$ holomorphic. Solution formulas for the Dirichlet boundary problem for the Laplace equation in some special domains are reflections of the Cauchy integral formula for holomorphic functions, which works in ‘any’ domain.
A notable example of how Complex Analysis reveals something deep about Physics is found in Spectral Theory. There is a type of conservation law where all of the singularities in the finite plane are related to properties of the singularity at $\infty$. For example, if you have a function that is holomorphic with a finite number of poles in the complex plane, then the sum of the residues can be computed by looking at the residue at $\infty$. Extensions of these ideas allow one to prove the completeness of eigenfunctions for an operator $A$ by replacing integrals around singularities of $(\lambda -A)^{-1}$ with one residue at $\infty$ computed as $\lim_{\lambda\rightarrow\infty}\lambda(\lambda-A)^{-1}=I$.
For example, if $A=\frac{1}{i}\frac{d}{dt}$ on $L^{2}[0,2\pi]$, on the domain $\mathcal{D}$ consisting of periodic absolutely continuous functions $f$ with $f'\in L^{2}$, then $(\lambda -A)^{-1}$ is defined everywhere except at $\lambda = 0,\pm 1,\pm 2,\pm 3,\cdots$, where it has simple poles with residues
$$
R_{n}f = \frac{1}{2\pi}\left(\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\right)e^{in\theta}.
$$
These residues are projections onto the eigenspaces associated with the values of $\lambda$ where $(\lambda -A)^{-1}$ is singular. Though the argument is very delicate, one can replace the sum of all these residues with one residue at $\infty$ evaluated as $\lim_{v\rightarrow\pm\infty}(u+iv)(u+iv-A)^{-1}f=f$, which leads to a proof of the $L^{2}$ convergence of the Fourier series of an arbitrary $f \in L^{2}[0,2\pi]$:
$$
f = \sum_{n=-\infty}^{\infty}\frac{1}{2\pi}\left(\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\right)e^{in\theta}.
$$
This type of argument can be used to prove the convergence of Fourier-type transforms mixed with discrete eigenfunctions, too. And, these arguments where singularities of the resolvent in the finite plane are traded for a single residue at $\infty$ can give pointwise convergence results that go well beyond Functional Analysis and $L^{2}$ theory.
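The $L^{2}$ convergence above can also be watched numerically; a small sketch (the grid size and the sawtooth test function are my own choices):

```python
import numpy as np

# L^2 convergence of Fourier partial sums for the sawtooth f(θ) = θ on [0, 2π)
M = 4096
theta = 2 * np.pi * np.arange(M) / M
f = theta.copy()

def partial_sum(N):
    # Sum of the projection terms R_n f for |n| <= N, with the coefficients
    # (1/2π) ∫ f(θ') e^{-inθ'} dθ' approximated by Riemann sums on the grid
    s = np.zeros(M, dtype=complex)
    for n in range(-N, N + 1):
        c = np.mean(f * np.exp(-1j * n * theta))
        s += c * np.exp(1j * n * theta)
    return s

errors = [np.sqrt(np.mean(np.abs(partial_sum(N) - f)**2)) for N in (4, 16, 64)]
print(errors)  # the L^2 error decreases as N grows
```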
Such methods tie in beautifully with how Dirac viewed an observable as some kind of composite number with lots of possible values determined from the singularities of $\frac{1}{\lambda -A}$, which he then used to generalize the Cauchy integral formula
$$
p(A)=\frac{1}{2\pi i}\oint_{C} p(\lambda)\frac{1}{\lambda -A}\,d\lambda.
$$
In this strange setting of Complex Analysis, the results of Quantum eigenvector expansion are compelling and almost natural, even though the proofs are not so simple.
One example is two-dimensional electrostatics. The potential in a domain $D$ free of electric charges satisfies the Laplace equation $\varphi_{xx}+\varphi_{yy}=0$. From the ‘real’ point of view, its solutions for
two infinite uniformly charged planes
two infinite coaxial uniformly charged cylinders
have nothing to do with each other. However, from the ‘complex’ point of view one is obtained from the other by the conformal transformation $w(z)=e^z$, which maps the strip $a<\Re z<b$ onto the annulus $e^a<|w|<e^b$.
If a function is analytic on the upper half plane and goes to zero fast enough at infinity then the Kramers-Kronig relations hold. So if $x\rightarrow f(x)=f_1(x)+if_2(x)$ is our function, then
$$f_1(x)=\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^\infty\frac{f_2(x')}{x'-x}\,dx'$$
and
$$f_2(x)=-\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^\infty\frac{f_1(x')}{x'-x}\,dx'$$
so you can compute the real part of $f$ from its imaginary part and vice versa.
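A concrete sanity check (the test function is mine, not from the text): $f(z)=\frac{1}{z+i}$ is analytic in the upper half plane and decays at infinity, with $f_1(x)=\frac{x}{1+x^2}$ and $f_2(x)=-\frac{1}{1+x^2}$ on the real axis. SciPy’s Cauchy-weight quadrature handles the principal value:

```python
import numpy as np
from scipy.integrate import quad

# f(z) = 1/(z+i) is analytic in the upper half plane; on the real axis
# its real part is f1(x) = x/(1+x^2) and its imaginary part f2(x) = -1/(1+x^2).
f1 = lambda x: x / (1 + x**2)
f2 = lambda x: -1 / (1 + x**2)

def kk_real_part(x, L=1e3):
    # f1(x) = (1/π) P∫ f2(x')/(x'-x) dx'; weight='cauchy' makes quad compute
    # the principal value of ∫ f2(x')/(x'-x) dx' over the finite interval.
    val, _ = quad(f2, x - L, x + L, weight='cauchy', wvar=x)
    return val / np.pi

for x in (0.5, 1.0, 3.0):
    print(x, kk_real_part(x), f1(x))  # the two columns agree
```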
In physics the frequency response of a physical system (the Fourier transform of its impulse response) frequently satisfies the preconditions for the relations to hold. For example, in optics its real part is related to the refractive index and its imaginary part is related to attenuation. This makes it possible to compute the refractive index at varying frequencies just from knowing the attenuation at different frequencies. It’s almost magic that these things are connected so straightforwardly, but it’s hard to see if you don’t go complex.
Other examples include the way audio engineers can read off the phase delay of a component (modulo certain preconditions) from its Bode plot and similar phenomena in the study of oscillating mechanical systems and in particle physics.
I also came to the same impression as you over time about complex analysis being “more real.”
Analytic number theory uses $L$-functions for many arithmetic formulas and theorems. For instance:
Prime number theorem: Based on numerical data, Legendre conjectured $\pi(x)\approx\frac{x}{A\log x+B}$, and it wasn’t until half a century later that Chebyshev and Riemann considered the Riemann zeta function and contour integration, work which, extended by Hadamard and de la Vallée-Poussin, led to the first proof of $\pi(x)\sim\frac{x}{\log x}$. Selberg and Erdős were able to come up with an “elementary” proof about a century later. While there are many proofs, none is as special or insightful as the connection between prime counting and the zeros of the Riemann zeta function.
Dirichlet’s theorem: The effective version of this states that the residues of primes are equidistributed among the units modulo any number. While Selberg gave an elementary proof (I am not sure if this was for the effective or existential version) and there is a digestible proof for $p\equiv1\pmod{n}$ (using the splitting theory of primes in cyclotomic fields), the textbook proof uses Dirichlet $L$-functions.
(In my opinion, there is no reason to narrow the discussion down to Dirichlet’s theorem when we can talk about the more general and beautiful Chebotarev Density theorem instead – which also uses $L$-functions in the same fashion for its proof – other than it is not as well-known or accessible to readers.)
I think I’ve read before, or at least got the impression reading, that originally people were suspicious of non-elementary proofs in number theory, or at least preferred elementary proofs for being more “direct,” until it became clear that complex analysis actually got to the heart of things very well. Or maybe it wasn’t that people originally thought that, but some students believe this when they first start to learn number theory without first learning, and thus appreciating, complex analysis. At any rate, I definitely remember having to reassure a very talented but at the time inexperienced user here on MSE that Dirichlet characters were indeed natural for Dirichlet’s theorem. Also, Andrew Schlafly hilariously disputes the superiority of complex analysis proofs (but nobody takes Conservapedia seriously).
Based on my experience, I believe the two main reasons complex analysis is “more real” are the tools of contour integration and the orthogonality relations à la harmonic analysis.
There are some very nice theorems that are relatively easy to prove and that only hold in complex analysis. For example:
Complex differentiability: If a function is complex differentiable once, then it is infinitely differentiable (by Goursat’s theorem one does not even need to assume the derivative is continuous). Clearly not true in real analysis.
Power series: Not only are such functions infinitely differentiable, but one can expand them locally into convergent power series, i.e. they are analytic. This fails completely in real analysis: there are infinitely differentiable functions that are nowhere analytic.
Cauchy Integral Formula: The values of a function holomorphic on a domain and continuous up to its boundary only depend on its values on the boundary, and can be recovered explicitly by a single integration. This can be used to solve boundary value problems for the Laplace equation.
Liouville’s theorem: If a function is differentiable and bounded on $\mathbb C$, then it is constant. A very strong property, in my opinion, but an easy corollary of the Cauchy Integral Formula, and a useful tool for proving other theorems: it gives arguably the simplest proof of the Fundamental Theorem of Algebra.
Maximum Modulus Principle: The absolute value of a function holomorphic on a domain attains its maximum on the boundary. This gives an easy way to find a priori bounds for holomorphic functions, or to prove that they are constant, among other things.
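The Cauchy Integral Formula lends itself to a direct numerical check; here is a sketch recovering $e^a$ from values of $e^z$ on the unit circle (the sample count and the point $a$ are arbitrary choices of mine):

```python
import numpy as np

# Cauchy integral formula: f(a) = (1/2πi) ∮_C f(z)/(z - a) dz,
# checked for f = exp on the unit circle with the trapezoidal rule.
N = 2000
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
z = np.exp(1j * t)             # sample points on the unit circle C
dz = 1j * z * (2 * np.pi / N)  # dz = i e^{it} dt
a = 0.3 + 0.2j                 # a point strictly inside C

recovered = np.sum(np.exp(z) / (z - a) * dz) / (2j * np.pi)
print(recovered, np.exp(a))    # interior value recovered from boundary values
```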
Enumerative combinatorics is a good example. If you solve some problem using generating functions, then the solution is given as the coefficient of $x^n$ of some complicated function. The asymptotic behavior of this can usually be extracted easily by writing it as a contour integral and then either considering the dominant contribution from the residues inside (or outside) the contour, or you can do a steepest descent calculation.
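A standard illustration (my own example): the coefficients of $x/(1-x-x^2)$ are the Fibonacci numbers, and the dominant simple pole at $x=1/\varphi$ gives the asymptotics $F_n \sim \varphi^n/\sqrt{5}$:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio; 1/phi is the dominant pole

def fib(n):
    # Exact coefficient [x^n] of the generating function x/(1 - x - x^2)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def asymptotic(n):
    # Contribution of the pole of x/(1-x-x^2) nearest the origin:
    # [x^n] G ~ -Res_{x=1/phi} G(x) / (1/phi)^(n+1) = phi^n / sqrt(5)
    return phi**n / math.sqrt(5)

for n in (10, 20, 30):
    print(n, fib(n), asymptotic(n))  # the asymptotic rounds to the exact value
```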
Seen from an algebraic perspective:
The complex numbers are at the peak.
Going ‘downwards’ (through $\mathbb{Q}$, $\mathbb{Z}$, $\mathbb{N}$) you lose:
Multiplicative Inverses -> Additive Inverses
Going ‘upwards’ (to the quaternions, then the octonions) you lose:
Commutativity -> Associativity
Some things about complex analysis seem special when in fact those results are better viewed through the lens of vector calculus.
For instance, any holomorphic function can be viewed instead as a vector field with vanishing divergence and curl. Consider the Cauchy integral theorem and Stokes’ theorem:
$$\begin{align*}\text{Cauchy integral theorem: }& \oint_C f(z) \, dz = 0, \quad \left(\frac{\partial f}{\partial \bar z} = 0 \right)\\
\text{Stokes’ theorem: } & \oint_C F \cdot d\ell = 0, \quad (\nabla \times F = 0) \\
\text{Stokes’ theorem corollary: }& \oint_C F \times d\ell = 0, \quad (\nabla \cdot F = 0)
\end{align*}$$
Now, you may be tempted to say that the complex analysis version of this result is more elegant because it’s one line instead of two. You can package the vector calculus result into one line with the proper tools (Clifford algebra) as well, though, and more importantly, when you properly identify the derivatives as exterior and interior derivatives, the result generalizes to arbitrarily high-dimensional real vector spaces.
Similarly, meromorphic functions correspond to vector fields with point sources, but that’s usually where complex analysis stops, while vector calculus has the tools to treat vector fields with arbitrary sources.
That quantum mechanics is written in terms of complex numbers is, in my opinion, a pedagogical mistake. It would be clearer to do quantum-style probability in terms of vectors in $\mathbb R^2$; of course, the math will ultimately come out the same, but I think the picture of things would be considerably cleaner.
Complex analysis also puts a lot of emphasis on complex differentiability, but as you can see from the above argument about the CIT and Stokes’ theorem, it’s actually differentiation with respect to $\bar z$ that relates very directly to vector calculus. Complex differentiation is utterly insignificant when considering more general applications.