# Do complex numbers really exist?

Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What’s the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are?

This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people do in fact ask this question.

#### Answers

There are a few good answers to this question, depending on the audience. I’ve used all of these on occasion.

**A way to solve polynomials**

We came up with equations like $x - 5 = 0$ (what is $x$?), and the natural numbers solved them easily. Then we asked, “wait, what about $x + 5 = 0$?” So we invented negative numbers. Then we asked, “wait, what about $2x = 1$?” So we invented rational numbers. Then we asked, “wait, what about $x^2 = 2$?” So we invented irrational numbers.

Finally, we asked, “wait, what about $x^2 = -1$?” This was the only kind of question left, so we invented the “imaginary” numbers to solve it. All the other numbers, at some point, didn’t exist and didn’t seem “real”, but now they’re fine. And now that we have the complex numbers, every polynomial has a root (the fundamental theorem of algebra), so this is a natural place to stop.
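The progression above can be made concrete: once complex numbers are admitted, a numerical root-finder never comes back empty-handed. A small sketch using `numpy.roots` (the degree-4 polynomial is an arbitrary example of my own):

```python
import numpy as np

# x^2 + 1 = 0 has no real solution, but it has two complex roots, +i and -i.
roots = np.roots([1, 0, 1])            # coefficients of x^2 + 0*x + 1
assert np.allclose(sorted(r.imag for r in roots), [-1, 1])
assert np.allclose([r.real for r in roots], [0, 0])

# The fundamental theorem of algebra: every degree-n polynomial
# has exactly n complex roots (counted with multiplicity).
poly = [1, -3, 2, 5, -7]               # an arbitrary degree-4 example
assert len(np.roots(poly)) == 4
```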

**Pairs of numbers**

This explanation goes the route of redefinition. Tell the listener to forget everything they know about imaginary numbers. You’re defining a new number system in which numbers always come in pairs. Why? For fun. Then explain how addition and multiplication work. Try to find a good “realistic” use of pairs of numbers (many exist).

Then, show that in this system $(0,1) * (0,1) = (-1,0)$. In other words, we’ve defined a new system under which it makes sense to say that $\sqrt{-1} = i$, where $i=(0,1)$. And that’s really all there is to imaginary numbers: a definition of a new number system, which makes sense to use in most places. And under that system, there is an answer to $\sqrt{-1}$.
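The “pairs” definition is short enough to write down completely. A minimal sketch (the function names `add` and `mul` are my own):

```python
# Complex arithmetic defined purely on ordered pairs of reals,
# with no mention of sqrt(-1) anywhere.

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    # The one non-obvious rule; every property of i follows from it.
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
assert mul(i, i) == (-1, 0)   # (0,1)*(0,1) = (-1,0), i.e. i^2 = -1
```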

**The historical explanation**

Explain the history of the imaginary numbers. Showing that mathematicians also fought against them for a long time helps people understand the mathematical process, i.e., that it’s all definitions in the end.

The rough story: when 16th-century mathematicians such as Cardano solved cubic equations, their formulas kept producing square roots of negative numbers in the intermediate steps, even when the final roots were real, and at first they threw such expressions out since there was “no such thing”.

Then one mathematician, Bombelli, decided to just “roll with it”, kept calculating, and found that all those square roots cancelled each other out.

Amazingly, the answer that was left was a correct real root of the cubic he was solving. This led people to think there was a valid reason to compute with $\sqrt{-1}$, even if it took a long time to understand it.
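The standard example behind this story is the cubic $x^3 = 15x + 4$, whose real root $4$ falls out of Cardano’s formula only by passing through $\sqrt{-121}$. A numerical sketch:

```python
import cmath

# Cardano's formula for the depressed cubic x^3 = p*x + q with p=15, q=4.
# The discriminant is negative, so sqrt(-121) appears in the intermediate
# steps even though the root we recover, x = 4, is real (the
# "casus irreducibilis" that Bombelli pushed through).
p, q = 15, 4
disc = (q / 2) ** 2 - (p / 3) ** 3          # = 4 - 125 = -121 < 0
u = (q / 2 + cmath.sqrt(disc)) ** (1 / 3)   # principal cube root of 2 + 11i
v = (q / 2 - cmath.sqrt(disc)) ** (1 / 3)   # principal cube root of 2 - 11i
x = u + v                                    # the imaginary parts cancel
assert abs(x - 4) < 1e-9
```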

The concept of mathematical numbers and “existing” is a tricky one. What actually “exists”?

Do negative numbers exist? Of course they do not. You can’t have a negative number of apples.

Yet the beauty of negative numbers is that, once we define them rigorously, we can suddenly use them to solve problems we were never able to solve before, or solve old problems in a much simpler way.

Imagine trying to do simple physics without the idea of negative numbers!

But are they “real”? Do they “exist”? No, they don’t. But they are just tools that help us solve real life problems.

To return to your question about complex numbers: whether they “exist” has no bearing on whether they are useful in solving the problems of everyday life, or in making those problems many times easier to solve.

The math that makes your computer run involves the tool that is complex numbers, for instance.

No number “really exists” the way trees or atoms exist.
In physics people however have found use for complex numbers just as they have found use for real numbers.

One need only consult the history of algebra to find many informal
discussions on the existence and consistence of complex numbers.
Any informal attempt to justify the existence of $\mathbb C$
will face the same obstacles that existed in earlier times. Namely, the lack of any rigorous (set-theoretic) foundation makes it difficult to be precise – both syntactically and semantically. Nowadays the set-theoretic foundation of algebraic structures
is so subconscious that it is easy to overlook just how much power it provides
for such purposes. But this oversight is easily remedied. One need only consult
some of the older literature where even leading mathematicians struggled
immensely to rigorously define complex numbers. For example, see the
quote below by Cauchy and Hankel’s scathing critique – which is guaranteed
to make your jaw drop! (Below is an excerpt from my post on the notion
of formal polynomial rings and their quotients).

A major accomplishment of the set-theoretical definition of
algebraic structures was to eliminate imprecise syntax and semantics.
Eliminating the syntactic polynomial term $\rm\ a+b\cdot x+c\cdot x^2$
and replacing it by its rigorous set-theoretic semantic reduction
$\rm\:(a,b,c,0,0,\ldots)\:$ eliminates many ambiguities. There can no longer be any doubt about the precise denotation of the symbols $\rm\: x,\; +,\;\cdot\:,$ or about the meaning of equality of polynomials, since, by set theoretic definition, tuples are equal iff their components are equal. The set-theoretic representation (“implementation”) of these algebraic objects gives them rigorous meaning, reducing their semantics to that of set-theory.

Similarly for complex numbers $\rm\,a + b\cdot {\it i}$
and their set-theoretic representation by Hamilton as pairs of reals $\rm\,(a,b).\,$ Before Hamilton gave this semantic reduction of $\,\mathbb C\,$ to $\Bbb R^2,\,$ prior syntactic constructions (e.g. by Cauchy) as
formal expressions or terms $\rm\:a+b\cdot {\it i}\:$ were subject to heavy criticism regarding
the precise denotation of their constituent symbols, e.g.
precisely what is the meaning of the symbols $\rm\;{\it i},\, +,\, =\,?\,$
In modern language, Cauchy’s construction of $\mathbb C$ is simply the
quotient ring $\rm\:\mathbb R[x]/(x^2+1)\cong \Bbb R[{\it i}],\,$ which he described essentially
as real polynomial expressions modulo $\rm\:x^2+1\:$. However, in Cauchy’s time
mathematics lacked the necessary (set-theory) foundations to
rigorously define the syntactic expressions comprising the
polynomial ring term-algebra $\rm\mathbb R[x]$, and its quotient ring of
congruence classes $\rm\:(mod\ x^2+1).\,$ The best that Cauchy could
do was to attempt to describe the constructions in terms of
imprecise natural (human) language, e.g., in 1821 Cauchy wrote:

In analysis, we call a symbolic expression any combination of
symbols or algebraic signs which means nothing by itself but
which one attributes a value different from the one it should
naturally be […] Similarly, we call symbolic equations those
that, taken literally and interpreted according to conventions
generally established, are inaccurate or have no meaning, but
from which can be deduced accurate results, by changing and
altering, according to fixed rules, the equations or symbols
within […] Among the symbolic expressions and equations
whose theory is of considerable importance in analysis, one
distinguishes especially those that have been called imaginary. $\quad$ — Cauchy, Cours d’analyse, 1821, S.7.1

While nowadays, using set theory, we can rigorously interpret such “symbolic expressions”
as terms of formal languages or term algebras, it was far too
imprecise in Cauchy’s time to have any hope of making sense
to his colleagues, e.g. Hankel replied scathingly:

If one were to give a critique of this reasoning, we can not
actually see where to start. There must be something “which
means nothing,” or “which is assigned a different value than
it should naturally be” something that has “no sense” or is
“incorrect”, coupled with another similar kind, producing
something real. There must be “algebraic signs” – are these
signs for quantities or what? as a sign must designate something
– combined with each other in a way that has “a meaning.” I do
not think I’m exaggerating in calling this an unintelligible
play on words, ill-becoming of mathematics, which is proud
and rightly proud of the clarity and evidence of its concepts. $\quad$– Hankel

Thus it comes as no surprise that Hamilton’s elimination
of such “meaningless” symbols – in favor of pairs of reals –
served as a major step forward in placing complex numbers on a
foundation more amenable to his contemporaries.
Although there was not yet any theory of sets in which to
rigorously axiomatize the notion of pairs, they were far easier
to accept naively – esp. given the already known closely
associated geometric interpretation of complex numbers.
Hamilton introduced pairs as ‘couples’ in 1837 [1]:

p. 6: The author acknowledges with pleasure that he agrees with
M. Cauchy, in considering every (so-called) Imaginary Equation
as a symbolic representation of two separate Real Equations:
but he differs from that excellent mathematician in his method
generally, and especially in not introducing the sign sqrt(-1)
until he has provided for it, by his Theory of Couples,
a possible and real meaning, as a symbol of the couple (0,1)

p. 111: But because Mr. Graves employed, in his reasoning, the
usual principles respecting Imaginary Quantities, and
was content to prove the symbolical necessity without showing
the interpretation, or inner meaning, of his formulae, the
present Theory of Couples is published to make manifest that
hidden meaning: and to show, by this remarkable instance, that
expressions which seem according to common views to be merely
symbolical, and quite incapable of being interpreted, may pass
into the world of thoughts, and acquire reality and significance,
if Algebra be viewed as not a mere Art or Language, but as the
Science of Pure Time. $\quad$ — Hamilton, 1837

Not until the much later development of set theory was it explicitly realized
that ordered pairs and, more generally, n-tuples, serve a fundamental foundational role: they provide the raw materials needed to construct composite (sum/product) structures, such as the above constructions of polynomial rings and their quotients.
Indeed, as Akihiro Kanamori wrote on p. 289 (17) of
his very interesting paper [2] on the history of set theory:

In 1897 Peano explicitly formulated the ordered pair using
$\rm\:(x, y)\:$ and moreover raised the two main points about the
ordered pair: First, equation 18 of his Definitions stated
the instrumental property which is all that is required of
the ordered pair:

$$\rm (x,y) = (a,b) \ \ \iff \ \ x = a \ \ and\ \ y = b$$

Second, he broached the possibility of reducibility, writing:
“The idea of a pair is fundamental, i.e., we do not know how
to express it using the preceding symbols.”

Once set-theory was fully developed one had the raw materials
(syntax and semantics) to provide rigorous constructions of
algebraic structures and precise languages for term algebras. The polynomial ring $\rm\:R[x]\:$ is nowadays just a special case of much more general constructions of free algebras. Such equationally axiomatized algebras and their genesis via so-called ‘universal mapping properties’ are topics discussed at length in any course on Universal Algebra –
e.g. see Bergman [3] for a particularly lucid presentation.

[1] William Rowan Hamilton. Theory of Conjugate Functions, or Algebraic Couples; with a Preliminary and Elementary Essay on Algebra as the Science of Pure Time.
Trans. Royal Irish Academy, v. 17, part 1 (1837), pp. 293-422.
http://www.maths.tcd.ie/pub/HistMath/People/Hamilton/PureTime/PureTime.pdf

[2] Akihiro Kanamori. The Empty Set, the Singleton, and the Ordered Pair
The Bulletin of Symbolic Logic, Vol. 9, No. 3. (Sep., 2003), pp. 273-298.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.95.9839
PS http://www.math.ucla.edu/~asl/bsl/0903/0903-001.ps
PDF http://ifile.it/b20c48j

[3] George M. Bergman. An Invitation to General Algebra and Universal Constructions.
PS http://math.berkeley.edu/~gbergman/245/
PDF http://ifile.it/yquj5w1

In my opinion, the most natural way to view complex numbers is as a class of maps from the plane to itself. Specifically, let’s define $(R, \theta)$ to be the map which multiplies every point in the plane by the number $R$ and then rotates it by the angle $\theta$. We may call these maps “dilations with rotations.”

Such maps can be added and composed (multiplied) in the obvious way, and it’s not hard to work out that the sum and product of two such mappings is another dilation with rotation.

We can also identify the real number $x$ with the map $(x,0)$, i.e. the map which multiplies every point in the plane by $x$. Then we see that these maps have the magical property that $-1$ has a square root! Namely, if $P$ is the mapping $(1,\pi/2)$ (i.e. rotate every point by angle $\pi/2$), then applying $P$ twice is the same as multiplying every number by $-1$, i.e. $P^2=-1$!

As should be obvious by now, these maps are just complex numbers in disguise.

Unsurprisingly, they are
singularly useful for solving polynomial equations. Indeed, the real number $x’$ is a root of the polynomial equation $a_0 x^n + a_1 x^{n-1} + \cdots + a_n =0$ if and only if the mapping $(x’,0)$ satisfies the same equation. So viewing polynomial equations over the set of these mappings loses no solutions, while at the same time giving us additional freedom to do operations such as taking the square roots of negative numbers.
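The dilation-with-rotation picture can be checked directly: composing two such maps multiplies the scale factors and adds the angles, which is exactly complex multiplication in polar form. A sketch (the helper names are my own):

```python
import cmath
import math

# Represent "scale the plane by r, rotate it by t" as the pair (r, t).

def compose(m1, m2):
    # Applying m2 then m1: scales multiply, angles add.
    (r1, t1), (r2, t2) = m1, m2
    return (r1 * r2, t1 + t2)

def as_complex(m):
    r, t = m
    return cmath.rect(r, t)     # r * (cos t + i sin t)

P = (1.0, math.pi / 2)          # rotate by 90 degrees, no scaling
PP = compose(P, P)              # rotate by 180 degrees in total
assert abs(as_complex(PP) - (-1 + 0j)) < 1e-12   # P applied twice is -1

# Composition agrees with complex multiplication in general:
a, b = (2.0, 0.3), (0.5, 1.1)
assert abs(as_complex(compose(a, b)) - as_complex(a) * as_complex(b)) < 1e-12
```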

I am not going to give you ways of showing that complex numbers are necessary and meaningful, rather an idea about why I think people find them meaningless and how this can be resolved.

I am prompted by the part of your question that says: “…most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning.” I have thought about this a lot in the past and have come to the conclusion that the difficulty in finding complex numbers meaningful lies in the meaning we have attached to the number systems “preceding” them in the hierarchical chain $\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}$. The meaning we have attached is that of quantity! Numbers all the way up to and including the reals are scalar quantities: we use them for distance, area, volume, weight, speed, intensity and so on. However, when you get to the complex numbers all that has to go away. There is no such thing as $2 + 3i$ kilogrammes or $-4i$ dollars or $1-5i$ centimetres… Yet we ask the learner to call them numbers!

So, the learner is faced with an apparent contradiction in terms: They are asked to think of these strange entities as numbers while at the same time these new additions to the realm don’t behave as the numbers of old. And you should take into account that the older numbers have been around for a long time. That’s where I think the problem lies.

To circumvent this, I think that what one needs to do is explain to the learner that – from now on – numbers are going to take on a much wider role than before: They are going to be used not only to denote quantities, but also to denote directions, which is precisely what complex numbers do on the two-dimensional plane. Every complex number can be represented by a vector and this automatically suggests a direction. The “older” meaning is not lost, since a complex number has a modulus in addition to its real and imaginary parts and these three are quantities.

Of course, one might say “But real numbers denote directions as well: +1 denotes a unit displacement to the right of 0 along the x-axis while -1 denotes a displacement to the left”. This is indeed true, however, I sincerely doubt that the untrained mind of a learner, who encounters complex numbers for the first time, will delve in that direction. And, if it does, then that’s fine, because it shows that the direction concept was already lurking in the number hierarchy ever since negative integers were introduced!!

I hope all this is of help to you.

To the degree that anything actually “exists” in math, yes complex number exist.

Once you accept that groups, rings and fields exist, and that isomorphism of rings makes sense, the complex numbers can be recognized as (isomorphic to) a subring (which happens to be a field) of the ring of $2 \times 2$ real matrices.

This subring is generated by the matrices

$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \;$
and
$\; \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$

which correspond to $1$ and $i$ in the usual notation for complex numbers.

As people tend to accept that matrices exist, this may be a convincing argument.
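The matrix representation is easy to verify numerically: encode $a+bi$ as the matrix with rows $(a, -b)$ and $(b, a)$, and matrix arithmetic reproduces complex arithmetic exactly. A sketch (the helper name `M` is my own):

```python
import numpy as np

# a + b*i  <->  [[a, -b], [b, a]]

def M(a, b):
    return np.array([[a, -b], [b, a]], dtype=float)

I = M(1, 0)          # plays the role of 1 (the identity matrix)
J = M(0, 1)          # plays the role of i (rotation by 90 degrees)
assert np.array_equal(J @ J, -I)                 # i^2 = -1

# Multiplication matches complex multiplication:
# (1 + 2i)(3 - i) = 5 + 5i
assert np.array_equal(M(1, 2) @ M(3, -1), M(5, 5))
```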

Quantum mechanics, and hence physics and everything around us, fundamentally involves complex numbers.

You may be interested to read the MathOverflow question “Demystifying Complex Numbers,” here. A teacher is asking how to motivate complex numbers to students taking complex analysis.

Here is an example of a physical quantity which comes naturally as a complex number.

The impedance between two nodes of a linear electrical circuit driven by alternating current, which is the analogue of the resistance of a resistor and is likewise measured in ohms, is a complex quantity. For instance, the impedance of an (ideal) capacitor is purely imaginary.
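A small sketch of that claim, using the standard formulas $Z_R = R$ and $Z_C = 1/(j\omega C)$ (the component values below are made up for illustration):

```python
import cmath
import math

f = 50.0                      # supply frequency in Hz (illustrative value)
omega = 2 * math.pi * f       # angular frequency
R = 100.0                     # resistor, ohms (illustrative value)
C = 10e-6                     # capacitor, farads (illustrative value)

Z_R = complex(R, 0)           # an ideal resistor's impedance is real
Z_C = 1 / (1j * omega * C)    # an ideal capacitor's impedance is imaginary
assert abs(Z_C.real) < 1e-9

# Impedances in series add; the angle of the total impedance is the
# phase difference between voltage and current.
Z = Z_R + Z_C
phase = cmath.phase(Z)
assert -math.pi / 2 < phase < 0   # capacitive circuit: current leads voltage
```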

I’ll start by pointing out that a whole host of things people think of as ‘real’ are on shakier ground than imaginary numbers. Given that quantum mechanics suggests a fundamental limit to how granular reality is, the whole concept of real numbers is itself on very shaky ground, yet people accept those as fine. I’d therefore suggest that it is merely a case of familiarity – people are less familiar with complex numbers than with some other mathematical constructs.

As for an actual existence outside the realms of pure maths… your best bet is to look at quantum mechanics again. This area has some fascinating results that are only possible through the use of imaginary numbers. Incidentally, fundamental particles are the place in nature that gave a ‘physicality’ to negative numbers (the charge of an electron is negative) well after they were accepted as normal by most people.

I think some confusion comes from calling the numbers that have only an imaginary part “imaginary numbers”. I believe that if mathematicians had had another name for them (though I can’t think of a better one), there would be less confusion.

Yes, as much as any number “exists”. (We could say a mathematical “object” exists if it models something empirical.)

Here’s a way to visualize a vector “pointing” in the $i$ direction. If electrical power travels 100% efficiently from the generator to your home, it points in the direction $\langle 1 \rangle$ (flip-flopping between $\langle 1 \rangle$ and $\langle -1 \rangle$ for AC). If the power points in the direction $\langle \sqrt{-1} \rangle$, the wire heats up but transmits no useful electricity.

Are real numbers “real”? It’s not even computationally possible to compare two real numbers for equality!

Interestingly enough, it is shown in abstract algebra courses that the complex numbers arise naturally from the real numbers: you could not say, for instance, that the real numbers are valid but the complex numbers aren’t (whatever your definition of valid is…).

Complex numbers are the final step in a sequence of increasingly “unreal” extensions to the number system that humans have found it necessary to add over the centuries in order to express significant numerical concepts.

The first such “unreal number” was zero, back in the mists of time. It seems obvious to us now, but it must have seemed strange at first. How can the number of sheep I have be zero, when I don’t actually have any sheep?

Negative numbers are the next most obvious addition to the family of numbers. But what does it mean to have -2 apples? If I have 3 apples and you have 5, it’s convenient to be able to say that I have -2 more apples than you. Even so, during the Middle Ages many mathematicians were very uncomfortable with the idea of negative numbers and tried to arrange their equations so that they didn’t occur.

Rationals (fractions) seem real enough, since I’m happy to have 2/5 of a pizza. However, this is not the number of pizzas that I have, just a ratio between 0 and 1 pizzas, and so is further removed from the concept of counting.

More significant philosophical problems arose when irrational numbers were first discovered by the classical Greeks. They were astonished when Euclid (or one of his predecessors) proved that the diagonal of a unit square could not be represented by a ratio of two integers. This was such an outrageous idea to them that they called these numbers “irrational”. However, they couldn’t easily deny their existence since they have a direct geometric representation.

Irrational numbers were first understood as the solutions to algebraic equations such as $x^2 = 2$, but this still doesn’t cover all the numbers that we need. For example, the ratio of a circle’s circumference to its diameter is $\pi$, which has a direct geometric representation. However, in 1882 $\pi$ was proved to be transcendental, meaning that it is not the solution of any algebraic equation with integer coefficients. This begins to seem a lot less “real”, especially when you consider that there are many important transcendental numbers that, unlike $\pi$, don’t have a geometric interpretation.

There are of course many algebraic equations that don’t have a solution even among the irrational numbers, and in some ways there’s no reason why they should. However, when 16th century mathematicians like Gerolamo Cardano began working on solutions to cubic equations they found that the square roots of negative numbers began cropping up very naturally in their procedures, even though the solutions themselves were purely real. This eventually led people to explore the arithmetic of complex numbers and they were surprised to find that it produced a consistent and beautiful theory.

However, complex numbers don’t have such an intuitively obvious geometrical meaning as the numbers that came before. They are typically represented graphically as points in the 2D plane, and the rules of addition and multiplication are equivalent to certain operations on lengths and angles, but those operations aren’t driven by geometrical necessity in quite the way that squares and circles are. Even so, complex numbers are a perfect representation for various physical phenomena such as the state of particles in quantum mechanics and the behaviour of varying currents in electrical circuits. They are also very useful for reducing the cost of computation in 3D computer graphics.

The really special thing about complex numbers, though, is that they are the end of this journey that has been going on for millennia. There is no need to invent further number systems to solve problems posed in terms of complex numbers, because every polynomial equation now has a solution within the complex numbers (the fundamental theorem of algebra). They are algebraically closed: a self-sufficient, consistent system.

Physicists have been breaking matter down into smaller and smaller particles over the years. Molecules, atoms, nuclei, protons, quarks. When will the process stop? For several decades it was felt that quarks could be the final, indivisible particle. However, new theoretical frameworks such as string theory are suggesting that there may be more fundamental entities than quarks. Because physics ultimately relies on experimental verification, we can never be sure that there aren’t going to be more steps in the sequence.

In mathematics, however, we can prove things for all time. Complex numbers are the final step in the sequence, the numbers that we have been reaching for since before the beginning of recorded history. Every other number system is just a subset of the complex numbers, just a part of the true picture. Complex numbers are the real thing.

As phrased, your question invites a philosophical answer but you said that’s not what you want. One way to approach this question is to ask if complex numbers correspond to anything in the real world. What can you count or measure with them?

As far as I can tell, complex numbers are most directly useful for measuring things that rotate or oscillate. They’re used by electrical engineers because voltages and currents can oscillate, and could be used for measuring springs or pendulums, or for anything that behaves like a wave. There are not that many situations in day-to-day life where you’d use complex numbers, but they’re used extensively in physics because waves and oscillations show up everywhere.

There are geometric interpretations of imaginary numbers where they are thought of as parallelograms with a front and back, or oriented parallelograms. That interpretation requires geometric algebra but only uses real numbers.

That doesn’t have any pictures so it is admittedly not intuitive, but the answer is yes. Whether you think of imaginary numbers as square root of negative 1 or as parallelogram with a front and back, they exist.

I usually say: “Believe it or not, electricity and radio waves actually do behave like complex numbers. You don’t see that in high school, but electric and electronics engineers do.”

The argument isn’t worth having, as you disagree about what it means for something to ‘exist’. There are many interesting mathematical objects which don’t have an obvious physical counterpart. What does it mean for the Monster group to exist?

I have just discovered this web page and your question. This is probably a late response.

A lot of people think that at some point someone came along and simply asserted “yes there is a square root of -1 after all” and called it ‘i’. That’s a common misconception, and, unfortunately, seems to be what is often “taught” in “high schools” in the US.

The set of Real Numbers that you are used to is an example of what mathematicians call a “field”. That means you can add, subtract, multiply, and divide according to familiar properties. The field C contains the Reals: it is just the two-dimensional plane endowed with a very natural addition (vectors) and a really cool multiplication, which takes a little effort to understand.

A complex number is just a point on the plane, an ordered pair. The absolute value of a complex number is its distance to the origin, and let’s call its “angle” the angle it forms with the positive x-axis. The Real numbers are just the x-axis, and “i” is just (0,1). So the real number 1 is (1,0) and -1 is (-1,0). Then multiplying complex numbers multiplies their absolute values and adds the angles. That’s why (0,1) times itself is -1 = (-1,0). 90 degrees plus 90 degrees = 180 degrees.

There is essentially no other way to embed R (the reals) in a field in which all numbers have square roots. If you want to be technical, any other such field is isomorphic to C. Furthermore, there is no way to make any higher R^n (R^3 = space, for example) into a field in a natural way. The quaternions are a way to make R^4 into a division ring, close but not a field.

So, in short, the field of complex numbers is extremely real and concrete.

Last March Steven Strogatz wrote a column in the New York Times about $\mathbb C$, the field of complex numbers. You might enjoy this exchange of emails I had with someone a few months later, after the article appeared.

http://home.bway.net/lewis/Complex.htm

The question you and your students are asking is whether the concept deserves to exist, i.e., is it really a useful concept? The best answer to offer non-mathematical people is that it is at the heart of many, many applications which are perhaps simpler to understand with this abstraction. You can draw historical parallels, for example to surds (such as $\sqrt2$), which were thought not to exist from the ancient Greek rational-geometric constructive perspective. But at the least, I think, you must point out the beauty and utility of Euler’s formula
$$e^{i\theta}=\cos\theta+i\sin\theta$$
(connecting them to trigonometry)
and of the complex plane.
Perhaps De Moivre’s formula
$$\left(\cos\theta+i\sin\theta\right)^n= \cos{n\theta}+i\sin{n\theta}$$
provides a nice example of a considerable
simplification realized through complex numbers.
Then, you should mention some of the multifarious
connections that could come to mind, both
applications (such as to alternating current & fluid dynamics)
and mathematical extensions and related areas:
vectors, quaternions, octonions,
complex (!), Fourier & Harmonic analysis,
Lie Groups, cyclotomic polynomials,
analytic number theory, etc.

Above all, one should mention that they have deep connections to algebra (e.g., its fundamental theorem) and by extension, to linear algebra and differential equations (?), two of the most useful areas of mathematics. And don’t hesitate to use pretty pictures.
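Euler’s and De Moivre’s formulas quoted above are also easy to verify numerically; a quick sketch with arbitrary values of $\theta$ and $n$:

```python
import cmath
import math

theta, n = 0.7, 5   # arbitrary angle and exponent for the check

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
euler_lhs = cmath.exp(1j * theta)
euler_rhs = complex(math.cos(theta), math.sin(theta))
assert abs(euler_lhs - euler_rhs) < 1e-12

# De Moivre's formula: (cos t + i sin t)^n = cos(n t) + i sin(n t)
demoivre_lhs = complex(math.cos(theta), math.sin(theta)) ** n
demoivre_rhs = complex(math.cos(n * theta), math.sin(n * theta))
assert abs(demoivre_lhs - demoivre_rhs) < 1e-12
```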

There is also some debate over (and there should be room for) different fundamental viewpoints on what mathematics is and how it should be conducted, and one should probably also include here some perspectives more historically associated with statistics, such as pragmatism. With this in mind, one should have some tolerance for, or at least understanding of, non-mathematicians’ continued skepticism about mathematical concepts, since the proof of a concept’s utility requires sufficient experience with it, and even within mathematics there are revisions, simplifications, and room for alternative approaches.

Even the natural numbers are abstract notions, and the complex numbers can be built up through equally rigorous abstract constructions. So asking whether the complex numbers exist is like asking whether the natural numbers or the real numbers exist. They are all objects in the abstract domain of thought.

If you want to reconcile it with your physical intuition, try the following. I suppose that your mental picture of the real numbers is a line. Similarly, you can associate a plane with the set of complex numbers: points are complex numbers, the real numbers form the x-axis, and the imaginary numbers form the y-axis. Addition is straightforward to visualize. For visualizing multiplication, write a complex number in the form $re^{i\theta}$; then multiplying by it is a combination of stretching by a factor of $r$ and rotating by an angle of $\theta$. I hope this helps you in mental visualization.

In electrical networks, complex numbers are very good for the analysis of alternating-current circuits. If a current has an imaginary component, it means that the current is leading or lagging the voltage by some “phase difference”, or “angle”. An impedance having an imaginary component means that the element has capacitance or inductance. Only the real parts contribute to energy loss, but for impedance calculations you have to consider capacitance and inductance too. A similar analysis can be made for all kinds of control systems. So complex numbers are very much there in nature and in all sorts of engineering problems. I cannot stress their importance enough.

Ordered pairs exist.
If you define an operation on something that exists, the thing ‘with the operation’ exists.
Certain operations can be defined on ordered pairs.
Complex numbers are ordered pairs with those operations defined on them.
∴ Complex numbers exist.
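The syllogism above can be made concrete: a minimal sketch of complex numbers as ordered pairs, with addition and multiplication defined on them (the class name `Pair` is just for illustration).

```python
# Complex numbers as ordered pairs (a, b) -- no square roots of -1 required.
class Pair:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b)(c, d) = (ac - bd, ad + bc)
        return Pair(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"({self.a}, {self.b})"

i = Pair(0, 1)
print(i * i)   # (-1, 0): the pair that plays the role of -1
```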

Since nobody has talked about Caspar Wessel I will, mainly because of this:

In 1799, Wessel was the first person to describe the geometrical interpretation of complex numbers as points in the complex plane.

As you can read in previous answers, all numbers are mathematical abstractions. Even $1$ is not defined in the real world; it is an abstraction we use to quantify what surrounds us. For every object we have, we define its “unity”: one potato is the whole ellipsoidal tuber, but if we cut it in half we say we have “half” a potato. We represent a concrete object with the abstract notion of unity, and the number $1$ itself is our first numerical abstraction. What makes it so awfully mundane is that we are very used to it. And thus we march steadily to $2$, $3$, $\dots$, to get what we call the “natural” numbers, $\mathbb{N}$: those that arise naturally in day-to-day life, which we use to count and to order, among other things.

Now, as you would probably imagine, we move on to the integers. But just as you are now doubtful about the notion of complex numbers, many mathematicians were doubtful about the negatives! They thought they were “impossible” and didn’t consider them numbers:

Prior to the concept of negative numbers, negative solutions to problems were considered “false” and equations requiring negative solutions were described as absurd.

Although the first set of rules for dealing with negative numbers was stated in the 7th century by the Indian mathematician Brahmagupta, it is surprising that in 1758 the British mathematician Francis Maseres was claiming that negative numbers
“… darken the very whole doctrines of the equations and make dark of the things which are in their nature excessively obvious and simple” .

So much for the “simple” integers, $\mathbb{Z}$. We could go on to the rational numbers $\mathbb{Q}$, the irrationals, and the real numbers $\mathbb{R}$, but let me focus on Wessel’s genius. He titled his paper “On the Analytical Representation of Direction; An Attempt.” and began with

“This present attempt deals with the question, how may we represent direction analytically; that is, how shall we express right lines so that in a single equation involving one unknown line and others known, both the length and the direction of the unknown line may be expressed.”

He gives two propositions which seem to him “undeniable”:

1. “…changes in direction which can be effected by algebraic operations shall be indicated by their signs.”

2. “…direction is not a subject for algebra except in so far as it can be changed by algebraic operations.”

He then explains how sum and subtraction should be defined (he basically defines vector addition and subtraction) and what he is aiming at in this exposition, among other remarks. But here’s the interesting part, after he defines multiplication:

“…so that the angle of the product (…) becomes equal to the sum of the direction angles of the factors.”

“Let $+1$ designate the positive rectilinear unit and $+\epsilon$ a certain other unit perpendicular to the positive unit and having the same origin; then the direction angle of $+1$ will be equal to $0º$, that of $-1$ to $180º$, that of $+\epsilon$ to $90º$ and that of $-\epsilon$ to $-90º$ or $270º$. By the rule that the direction angle of the product shall equal the sum of the angles of the factors, we have: $(+1)(+1)=+1$; $(+1)(-1)=-1$; $(-1)(-1)=+1$; $(+1)(+\epsilon)=+\epsilon$; $(+1)(-\epsilon)=-\epsilon$; $(-1)(+\epsilon)=-\epsilon$; $(-1)(-\epsilon)=+\epsilon$; $(+\epsilon)(+\epsilon)=-1$; $(+\epsilon)(-\epsilon)=+1$; $(-\epsilon)(-\epsilon)=-1$.
From this it is seen that $\epsilon$ is equal to $\sqrt{-1}$ (…)”
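Wessel’s ten products can be checked directly if we let Python’s `1j` play the role of $\epsilon$:

```python
# Verify Wessel's multiplication table: the direction angle of a
# product is the sum of the direction angles of the factors.
eps = 1j
table = {
    (1, 1): 1, (1, -1): -1, (-1, -1): 1,
    (1, eps): eps, (1, -eps): -eps,
    (-1, eps): -eps, (-1, -eps): eps,
    (eps, eps): -1, (eps, -eps): 1, (-eps, -eps): -1,
}
ok = all(x * y == z for (x, y), z in table.items())
print(ok)  # True
```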

He then, after some reasoning and trigonometrical thought, gives the well-known representation:

“In agreement with 1 and 6, the radius which begins at the center and diverges from the absolute or positive unit by angle $v$ is equal to $\cos v + \epsilon \sin v$.”

But what is most amazing is that he then gives a great ending to his paper!

Without knowing the angle which the indirect line $1+x$ makes with the absolute value, we may find, if the length of $x$ is less than $1$, the power ${\left( {1 + x} \right)^m} = 1 + \dfrac{mx}{1} + \dfrac{m}{1}\dfrac{m - 1}{2}x^2 + \operatorname{etc}.$ If this series is arranged according to the powers of $m$, it has the same value and is changed into the form
$$1 + \frac{{ml}}{1} + \frac{{{m^2}{l^2}}}{1\cdot2} + \frac{{{m^3}{l^3}}}{{1 \cdot 2 \cdot 3}} + \operatorname{etc}.$$
where
$$l = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \operatorname{etc}.$$
and is a sum of a direct [horizontal] and a perpendicular line. If we call the direct line $a$ and the perpendicular $b\sqrt{-1}$ then $b$ is the smallest measure of the angle which $1+x$ makes with $+1$. If we set
$$1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \operatorname{etc}. = e$$ then ${\left( {1 + x} \right)^m}$ (…) may be represented by ${e^{ma + mb\sqrt{-1}}}$ (…)”
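Wessel’s closing identity can be checked numerically: with $l = \log(1+x)$ split into a direct part $a$ and a perpendicular part $b\sqrt{-1}$, the quantity $e^{ma+mb\sqrt{-1}}$ agrees with $(1+x)^m$. A sketch (the particular $x$ and $m$ are assumed for illustration):

```python
import cmath

# Assumed sample values: an "indirect line" x with length |x| < 1, and m = 5.
x = 0.3 + 0.4j
m = 5

l = cmath.log(1 + x)     # Wessel's l = x - x^2/2 + x^3/3 - ...
a, b = l.real, l.imag    # direct and perpendicular parts of l

lhs = (1 + x) ** m
rhs = cmath.exp(m * a + m * b * 1j)   # e^{ma + mb sqrt(-1)}
print(abs(lhs - rhs))    # agreement to floating-point precision
```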

So, in trying to give direction an analytical representation, Caspar Wessel gave the most important rules for representing complex numbers as lines in the plane, and made them meaningful for us.

I hope you enjoyed this as much as I did when I read it. If you want, I can give you the whole article, which is amazingly interesting.

We will first consider the most common definition of $i$, as the square root of $-1$. When you first hear this, it sounds crazy. $0$ squared is $0$; a positive times a positive is positive, and a negative times a negative is positive too. So there doesn’t appear to be any number that we can square to get $-1$.

A mathematician would collectively term $0$, the negative numbers, and the positive numbers the real numbers, and would define the complex numbers as a number system that includes these real numbers. So while we have shown that no real number squares to $-1$, we haven’t even defined the complex numbers at this point, so we can’t rule out that one of them might have this property.

At this point, it makes sense to ask what a mathematician means by a number. It certainly isn’t what most people associate the word with: an abstract representation of some kind of real-world quantity. It isn’t uncommon for one word to have different meanings for different groups of people; after all, words mean whatever we make them mean. Most people only need to deal with real-world quantities, so they find it convenient to call those numbers. Mathematicians, on the other hand, explore a variety of different number systems, and some, such as the complex numbers, are useful for solving problems that are actually about real numbers.

So mathematicians define $i$ as a number that obeys most of the normal algebraic laws, and they define $i \cdot i$ to equal $-1$. From this we can derive all of the standard results about complex numbers.

As for whether they are real, it depends on what you want to know. Obviously, they don’t correspond to quantities of physical objects. On the other hand, complex numbers can be useful for representing impedance in an electric circuit. Ultimately, they are an idea, and while ideas don’t exist physically, saying they don’t exist at all is inaccurate.

Here is a possible line of reasoning.

Say, we start with natural numbers, $\mathbb{N}$, then add $0$, making them the whole numbers $\mathbb{N}_0$, then add negative numbers, making them integers $\mathbb{Z}$, then expand them to rationals $\mathbb{Q}$, and finally to reals $\mathbb{R}$. In some respect, we keep filling up the line, till there are no gaps left.

Now the question: why do we call them “numbers”? What are the properties we expect from an object so that we can call it a number? In fact, just a few: addition and multiplication must be defined for all of them, and we want these operations to be commutative and associative. The distributive law of multiplication over addition must hold. It’s nice to have a notion of ordering, so we can say $a \lt b$, and multiplication by a positive number must be compatible with that notion: if $a \lt b$ and $x$ is positive, then $ax \lt bx$. Those are the minimal requirements, met by $\mathbb{N}$. As we expand our number system, we get more and more useful properties, and the structure (both algebraic and topological) gets more complicated, from a semigroup in the case of $\mathbb{N}$ to a complete ordered field in the case of $\mathbb{R}$, but those basic properties always hold.

Now, the next step in our quest for expanding our number system would be to look at $\mathbb{R}^2$, $\mathbb{R}^3$, etc. In general, the objects (“points”) populating the space $\mathbb{R}^n$ are called “vectors”, and they differ from numbers in one important respect: we can’t define vector multiplication so that it meets our requirements for numbers (commutativity, associativity, and distributivity over addition). So we don’t treat vectors in $\mathbb{R}^n$ as numbers; in particular, we don’t think of them as atomic objects, but rather in terms of their components or coordinates.

There’s one important exception, however: $\mathbb{R}^2$. We can in fact define multiplication of two objects (points, vectors) in $\mathbb{R}^2$ so that it meets all our requirements, and so we can call these objects “numbers”. To be precise, all requirements except one: the ordering. The multiplication we define is no longer compatible with the ordering, so we agree that not having that feature probably isn’t such a big deal. Once we have introduced multiplication, we end up with a complete field (but not an ordered field) of entities which we have a right to call numbers. So we call them the complex numbers and designate them $\mathbb{C}$. There are similarities between $\mathbb{R}^2$ and $\mathbb{C}$, but also differences, the most important being that multiplication is defined, so $\mathbb{C}$ is a field rather than just a vector space. In fact we can treat the elements of $\mathbb{C}$ as scalars and use them to build vector spaces, complex matrices, etc. This is, roughly, why we can call complex numbers “numbers”: they exhibit the same behavior (except for ordering) as the numbers we’re familiar with. One of my favorite examples: when we define the derivative in vector analysis, the formulas look quite different from those we got used to in calculus, but when we define the derivative for complex-valued functions, the formula is exactly the same, because multiplication (and hence division) is defined.

Now, why complex numbers are important is a different question; what I wanted to focus on is the fact that we have a right to call them “numbers”. But just as we can think of the rational numbers as a special case of the reals, and when we move from the reals to the rationals we lose the important property of completeness, we can think of the real numbers as a special case of the complex numbers, and when we move from the complex numbers to the reals we lose some nice properties as well. In many respects, studying complex numbers helps us to better understand real numbers.

As a side note, again: when we move from $\mathbb{R}$ to $\mathbb{C}$, we lose the ordering. The complex numbers form a complete field, but not an ordered field. (There is only one complete ordered field, up to isomorphism.) Now suppose we want to explore $\mathbb{R}^n$ further. What’s the next best? $\mathbb{R}^4$. We can define a multiplication of objects (vectors) in $\mathbb{R}^4$ which is associative but, alas, not commutative. It’s hard to think of numbers whose multiplication is not commutative, so we no longer call them (the quaternions, $\mathbb{H}$) numbers. Finally, the next interesting space is $\mathbb{R}^8$, where we can still define multiplication, but it is no longer associative (in addition to being non-commutative). It’s really hard to deal with something that is not associative, and certainly hard to think of those entities as numbers.
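The non-commutativity in $\mathbb{R}^4$ is easy to exhibit; a sketch of the Hamilton product on 4-tuples $(w, x, y, z)$:

```python
# Quaternion multiplication (the Hamilton product) on 4-tuples.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k: multiplication is not commutative
```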

There is a lovely way of motivating the “existence” of the complex numbers just by using a little calculus on the real numbers. I found this in Visual Complex Analysis, and it tickled me, so I thought I’d share it here, despite the lateness of the answer.

If $r_1, \dots, r_n$ are real numbers, define:

$$f(x)= \frac{1}{(x-r_1)(x-r_2)\cdots(x-r_n)}$$

When $a \notin \{r_1, \dots, r_n\}$ we can find the Taylor series around $a$:

$$\sum_{k=0}^\infty \frac{f^{(k)}(a)}{k!}(x-a)^k$$

The question is, for what (real) $x$ does this series converge to $f(x)$?

As it turns out, if we let $R=\min_{k} |a-r_k|$, then if $|x-a|<R$ this series converges to $f(x)$ and if $|x-a|>R$, then it doesn’t converge.

So, in a sense, the $r_i$ “block” the ability of the Taylor series to converge around them.

Now, what about the Taylor series for $g(x)=\frac{1}{x^2+1}$?

Given an $a$, this function has no “real” blockages (it is defined on all of $\mathbb R$), but the Taylor series for $g(x)$ around $a$ has a similar $R$ value, and that $R$ value is $\sqrt{1+a^2}$, a value that can be computed entirely with real-number calculations.

It then looks like there is some geometric obstruction to the Taylor series, an obstruction not on the real line, but a unit distance away from the real line, in a direction perpendicular to it at $0$. It “looks like” an “imaginary” root of $x^2+1=0$.
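The claim $R = \sqrt{1+a^2}$ can be checked numerically. In the sketch below the Taylor coefficients of $1/(x^2+1)$ about $a$ are obtained in closed form from the partial-fraction decomposition over the poles $\pm i$ (so the complex plane is used behind the scenes, but every coefficient is real), and $R$ is estimated with the root test; the window of $k$ values is an arbitrary choice.

```python
import math

def taylor_coeff(a, k):
    # k-th Taylor coefficient of 1/(x^2+1) about a real point a, from the
    # partial fractions 1/((x-i)(x+i)) with residues 1/(2i) and -1/(2i).
    A, B = 1 / 2j, -1 / 2j
    c = (-1) ** k * (A / (a - 1j) ** (k + 1) + B / (a + 1j) ** (k + 1))
    return c.real          # the two conjugate terms give a real coefficient

a = 2.0
# Root test: 1/R = limsup |c_k|^(1/k); approximate the limsup by taking
# the largest value over a window of k.
R_est = 1 / max(abs(taylor_coeff(a, k)) ** (1 / k) for k in range(50, 70))
R_true = math.sqrt(1 + a ** 2)   # distance from a to the poles at +i and -i
print(R_est, R_true)             # both close to sqrt(5)
```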

It may help to think of negative numbers as something other than “negative.”

The concept that helped me was to think of -1 as the opposite of 1.

If the context was distance, for example, and I had a value of 1, then I went one unit in a direction. If, on the other hand, I had a value of -1, then I’m traveling in the opposite direction of 1.

In calculus and series work, it becomes clearer that this is mostly a terminology problem. Whether you’re dealing with space-time or voltages or something else, the negative sign is a relative thing, not an absolute one. In other words, don’t think of $-5$ as “negative 5” but as the opposite of $5$…

Re: “What’s the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are?”

I am a non-mathematician…so fully qualified to answer this question…

• first explain imaginary numbers
• then explain that complex numbers are simply numbers that use imaginary numbers
• then tell ’em about how they are applied in weather forecasting
• bang!….I geddit

I got all this from an amazing radio show :

“In Our Time – Imaginary Numbers”

Melvyn Bragg and his guests discuss imaginary numbers – important mathematical phenomena which provide us with useful tools for understanding the world.

I’m of the opinion that complex numbers exist in the same way real numbers exist; we are simply more aware of real numbers because they’re involved in our daily operations. Let’s talk about phasors: they are nothing more than a convenient way to express sinusoidal functions, based on Euler’s identity:

$$e^{jt} = \cos(t) + j \sin(t)$$
Because it is easier to express and solve differential equations involving sinusoidal functions in exponential notation, phasors have become the preferred tool for analyzing alternating-current (AC) electrical circuits. Some quantities, such as impedance ($Z = V/I$, where $V$ and $I$ are phasors), make no sense without complex numbers. In this application, the imaginary part is physically associated with reactive elements (inductors and capacitors), while the real part is associated with resistive elements (resistors). Other transformations, such as the Laplace transform (for solving linear differential equations), are also based on complex numbers. I have done some work with these notations and transformations and their applications in electrical engineering.
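A sketch of such a phasor calculation (the component values are assumed for illustration): the impedance of a series RLC circuit, whose imaginary part vanishes at the resonant frequency.

```python
import math

# Assumed component values for the sketch.
R, L, C = 10.0, 1e-3, 1e-6        # ohms, henries, farads

def impedance(w):
    # Series RLC: Z = R + j(wL - 1/(wC)).
    return R + 1j * (w * L - 1 / (w * C))

w0 = 1 / math.sqrt(L * C)         # resonance: the two reactances cancel
print(impedance(w0).imag)         # ~0: purely resistive at resonance
print(abs(impedance(2 * w0)))     # larger magnitude off resonance
```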