'Obvious' theorems that are actually false

It’s one of my real analysis professor’s favourite sayings that “being obvious does not imply that it’s true”.

Now, I know a fair few examples of things that are obviously true and that can be proved to be true (like the Jordan curve theorem).

But what are some theorems (preferably short ones) which, when put into layman’s terms, the average person would claim to be true, but which are actually false
(i.e. counter-intuitively false theorems)?

The only ones that spring to my mind are the Monty Hall problem and the divergence of $\sum\limits_{n=1}^{\infty}\frac{1}{n}$ (counter-intuitive for me, at least, since $\frac{1}{n} \to 0$).
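A quick Monty Hall simulation makes the counter-intuitive split visible. This is a sketch in plain Python; the helper name and trial count are my own choices:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(stay_wins, switch_wins)   # roughly 1/3 vs 2/3
```

Switching wins exactly when the initial pick was wrong, which happens two times out of three.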

I suppose, also, that $$\lim\limits_{n \to \infty}\left(1+\frac{1}{n}\right)^n = e=\sum\limits_{n=0}^{\infty}\frac{1}{n!}$$ is not obvious, since one ‘expects’ that $\left(1+\frac{1}{n}\right)^n \to (1+0)^n=1$.
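A quick numeric sanity check (plain Python) of both facts above, the limit toward $e$ and the slow but unbounded growth of the harmonic partial sums:

```python
import math

# (1 + 1/n)^n creeps toward e, not toward 1.
for n in [10, 1000, 100_000]:
    print(n, (1 + 1 / n) ** n)

approx_e = (1 + 1 / 10**7) ** 10**7   # already agrees with e to ~7 digits

# Partial sums of the harmonic series keep growing, roughly like ln(n).
def harmonic(n):
    return sum(1 / k for k in range(1, n + 1))

h = harmonic(10**6)
print(h, math.log(10**6))   # about 14.39 vs 13.82 (they differ by ~0.577, Euler's gamma)
```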

I’m looking just for theorems and not their (dis)proof — I’m happy to research that myself.


Solutions collected from the web for “‘Obvious’ theorems that are actually false”

Theorem (false):

One can arbitrarily rearrange the terms in a convergent series without changing its value.

A shape with finite volume must have finite surface area.

I wish I’d thought of this yesterday, when the question was fresh, because it’s astounding. Suppose $A$ and $B$ are playing the following game: $A$ chooses two different numbers, via some method not known to $B$, writes them on slips of paper, and puts the slips in a hat.

$B$ draws one of the slips at random and examines its number. She then predicts whether it is the larger of the two numbers.

If $B$ simply flips a coin to decide her prediction, she will be correct half the time.

Obviously, there is no method that can do better than the coin flip.

But there is such a method, described in Thomas M. Cover, “Pick the largest number”, in *Open Problems in Communication and Computation*, Springer-Verlag, 1987, p. 152,

which I described briefly here, and in detail here.
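Cover’s trick can be sketched like this: draw a random threshold $T$ from any distribution with full support on $\Bbb R$, and predict “larger” exactly when the drawn number exceeds $T$. A minimal simulation follows; the Gaussian choices here are illustrative assumptions of mine, not part of Cover’s result, which works for any pair of distinct numbers:

```python
import random

def cover_trial() -> bool:
    """One round of the two-slips game using the random-threshold strategy."""
    # The adversary's two distinct numbers (any choice works; Gaussian for the demo).
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    while b == a:
        b = random.gauss(0, 1)
    seen, hidden = random.sample([a, b], 2)   # B draws one slip at random
    t = random.gauss(0, 1)                    # threshold with full support on R
    guess_seen_is_larger = seen > t
    return guess_seen_is_larger == (seen > hidden)

trials = 100_000
rate = sum(cover_trial() for _ in range(trials)) / trials
print(rate)   # strictly above 1/2 (about 2/3 for this particular setup)
```

The success probability is $\frac12$ plus half the probability that $T$ lands strictly between the two numbers, which is positive whatever the two numbers are.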

I keep harping on this, because I think it’s a spectacular example of something that can be demonstrated to be completely obvious (not only because it seems so, but because it was so widely believed for so long) and yet is completely wrong:

Suppose $\Phi$ is a property that might or might not hold of some object. Then there is a collection $S_\Phi$ of all objects with property $\Phi$.

Many serious and even famous mathematicians went ahead with this intuitively obvious but utterly false principle, whose demolition shook mathematics to its foundations and marks the beginning of modern logic and set theory.

(There are many counterexamples, of which the most well known is $\Phi(x) = $ “$x$ is not a member of collection $x$”. For others, see Is the Russell paradox the only possible contradiction to the axiom schema of comprehension due to Frege (1893)? $\{x:P(x)\}$ and Paradox of General Comprehension in Set Theory, other than Russell's Paradox.)
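The classic counterexample collapses into a single line:

```latex
S \;=\; \{\, x : x \notin x \,\}
\qquad\Longrightarrow\qquad
S \in S \iff S \notin S ,
```

a contradiction whichever way one decides $S \in S$.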

This is elementary compared to most of the other examples, but how about

There are more rational numbers than there are integers.

In a related MathOverflow thread, Gowers pointed out the following obvious but false claim:

Let $I_1, I_2, \ldots$ be subintervals of $[0,1]$ whose total length is strictly less than 1. Then the union of the $I_i$ cannot contain $\Bbb Q\cap [0,1]$.

(Note that if $\Bbb Q$ is replaced with $\Bbb R$, the claim is true.)

I find the fact that all of $\Bbb Q$ can be covered by an arbitrarily small family of intervals to be one of the most bizarrely counterintuitive in all of mathematics.
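The covering behind this fact fits on one line: enumerate the rationals in $[0,1]$ as $q_1, q_2, \ldots$ and give the $k$-th one an interval of length $\varepsilon/2^k$:

```latex
I_k = \left(q_k - \frac{\varepsilon}{2^{k+1}},\; q_k + \frac{\varepsilon}{2^{k+1}}\right),
\qquad
\sum_{k=1}^{\infty} |I_k| \;=\; \sum_{k=1}^{\infty} \frac{\varepsilon}{2^{k}} \;=\; \varepsilon .
```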

The falseness of

Let $S$ be an infinite family of strictly positive numbers. Then $\sum S = \infty$

has been boggling people for thousands of years. It is the basis for Zeno’s paradox, but if you think Zeno’s paradox is old and tired, consider that it is also the basis for the Gabriel’s Horn paradox (also mentioned in this thread), which still puzzles people.

$0.\overline{9} < 1$

Probably the most famous of the “obvious” but false.
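One way to see the equality is via the geometric series:

```latex
0.\overline{9}
\;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 9 \cdot \frac{1/10}{1 - 1/10}
\;=\; 1 .
```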

If a function $f(x)$ has a horizontal asymptote, then $\lim\limits_{x \to \infty} f'(x) = 0$.

Keller’s conjecture is obviously true:

Let $\Bbb R^n$ be completely covered with identical, non-overlapping $n$-cubes. There must be two cubes that share a face.

(For example, when $n=2$ we cover the plane with little square tiles, and the conjecture states that there must be two tiles that share an edge. This is true.)

However, the conjecture is false for all $n>7$.

This part is true (Jordan-Brouwer separation theorem):

(a) Any imbedding of the $2$-sphere into $3$-dimensional Euclidean space
separates the space into two disjoint regions.

But this part, which would seem to be a natural generalization of the Jordan-Schönflies Curve Theorem, is not true:

(b) The regions are homeomorphic to the inside and outside of the unit sphere.

Every chain of subsets of $\mathbb N$ is countable.

$i^i$ is imaginary.

  • If $U$ is an open subset of $\mathbb{R}^n$ that is homeomorphic to $\mathbb{R}^n$, one might think it “obvious” that it’s in fact diffeomorphic to $\mathbb{R}^n$ (perhaps thinking something like “topologically it looks like $\mathbb{R}^n$, and differentiably it’s locally trivial”). In fact, this is true (but by no means obvious!) for $n\neq 4$. But for $n=4$ it is false: there exist exotic $\mathbb{R}^4$’s (differentiable manifolds that are homeomorphic, but not diffeomorphic, to $\mathbb{R}^4$), including “small” ones which are diffeomorphic to an open subset of $\mathbb{R}^4$.

  • Much less profound, but still fun: it’s “obvious” that the sum of two convex open sets in the plane whose border is $C^\infty$ also has a $C^\infty$ border (perhaps thinking something like “the border of the sum is parametrized by a smooth function of the borders of the summands”). But this is false: in fact, the border of the sum is always $C^{20/3}$ (meaning six times differentiable and with a sixth derivative which is appropriately Hölder) and no more in general. A simple counterexample is given by the epigraphs of $x^4/4$ and $x^6/6$. For details, see Kiselman, “Smoothness of Vector Sums of Plane Convex Sets”, Math. Scand. 60 (1987), 239–252.

I really like “wrong proofs” as typically the insight why the proof is wrong gives you some understanding of the topic. One very simple version is this one, which I threw at my first semesters when I was a tutor:

Each binary relation which is symmetric and transitive is also reflexive and therefore an equivalence relation.


Let $\sim$ denote a symmetric and transitive relation and let $x$, $y$ be two elements with $x \sim y$. As $\sim$ is symmetric, it holds that $y \sim x$. Since $x \sim y$ and $y\sim x$ it follows by the transitivity of $\sim$ that $x \sim x$, which is the definition of reflexivity.

Edit: Since I was asked, here’s why the proof is wrong:

Take a look at the empty relation on a non-empty set $S$, so that there are no $x, y \in S$ with $x \sim y$. This relation is symmetric and transitive, but it is not reflexive: reflexivity needs $x \sim x$ to hold for all $x$. The proof assumes that there is a $y$ such that $x \sim y$, which isn’t necessarily the case for every $x$.
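The counterexample can be checked mechanically; here is a small sketch in Python (the set and relation are arbitrary choices of mine):

```python
# Empty relation on a non-empty set: symmetric and transitive, yet not reflexive.
S = {1, 2, 3}
R = set()   # no related pairs at all

# Both closure conditions hold vacuously: there are no pairs to violate them.
symmetric = all((b, a) in R for (a, b) in R)
transitive = all((a, d) in R
                 for (a, b) in R for (c, d) in R if b == c)
# Reflexivity quantifies over S itself, so it genuinely fails.
reflexive = all((x, x) in R for x in S)

print(symmetric, transitive, reflexive)  # True True False
```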

Theorems that are intuitively true, but actually flawed:

  • There is no continuous, nowhere-differentiable real function.

  • There is no real function that is differentiable and not monotonic on any non-trivial interval.

  • If a real function satisfies $\forall x, y, f(x+y) =f(x) +f(y) $, it is of the form $x\to ax$.

  • Infinite sums and integrals can be swapped anytime.

  • A connected metric space is path-connected.

There are a good number of counterintuitive probability situations out there. One of my favorites is nontransitive dice:

There are 3 dice, A, B and C. The dice have numbers from 1-9 on their sides (repeats possible). If die B beats (higher number) die A more than half the time and die C beats die B more than half the time, then die C will beat die A more than half the time.

This is not necessarily a true statement. Dice can be designed such that the “x beats y” property is not transitive: each die in the cycle beats the next, yet loses to the one before it.
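One concrete triple of such dice (a standard-style example of mine, faces within 1–9 with repeats), with the exact win probabilities computed by brute force:

```python
from itertools import product
from fractions import Fraction

# A nontransitive triple: each die beats the next one in the cycle A -> B -> C -> A.
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def beats(x, y):
    """Exact probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

print(beats(A, B), beats(B, C), beats(C, A))  # 5/9 each: a perfect cycle
```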

The following statement I once believed to be “obvious”:

If $f:\mathbb{R} \rightarrow [0,\infty)$ is continuous and such that $\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x < \infty$, then $\lim\limits_{x \to \pm\infty} f(x) = 0$

which is actually false.

(Note: It is true if $f$ is uniformly continuous!)

The real numbers/Cantor set are countable.

There are several false “obvious” proofs:

  1. “Proof”. Consider the tree $\{0,\ldots,9\}^\Bbb N$; every real number corresponds to a node in the tree. Since there are only countably many levels and each is finite, it follows that the real numbers are countable.

    Why does it fail? This set is actually not a tree. You can order it so it looks like a tree, but in fact the tree would be composed of initial segments of each function, ordered by continuation. This tree would then have a last level (namely a level in which no point has a successor), and it would be exactly the level of the functions themselves (the previous levels would be proper initial segments of the functions).

    If we remove that last level, then the tree is indeed countable, but now each real number corresponds to a branch in the tree rather than a node. (It’s the unique branch whose limit equals the function, which previously appeared on that final level.)

  2. “Proof”. The rational numbers are countable, and between every two real numbers there is a rational number. Therefore this defines a bijection between pairs of real numbers and the rational numbers.

    Why does it fail? Because there are many, many, many pairs being mapped to the same rational number, this is not actually a bijection.

  3. “Proof”. The Cantor set is closed, its complement is open, so it is a countable union of intervals, so the Cantor set is countable.

    Why does it fail? Because not every point in the Cantor set is an endpoint of such an interval, for example $\frac14$. It is true that the endpoints of these intervals form a countable dense subset, though.

  4. BONUS!, $\mathcal P(\Bbb N)$ is countable.

    “Proof”. For every finite $n$, $\mathcal P(n)$ is finite, and $\mathcal P(\Bbb N)=\mathcal P(\bigcup n)=\bigcup\mathcal P(n)$, is a countable union of finite sets, which is countable.

    Why does it fail? Because the union only includes finite subsets of $\Bbb N$, but none of its infinite subsets.
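Regarding the third “proof”: to see why $\frac14$ lies in the Cantor set without being an endpoint, write it in base $3$:

```latex
\frac14 \;=\; 0.\overline{02}_{\,3}
\;=\; \sum_{k=1}^{\infty} \frac{2}{9^{k}}
\;=\; \frac{2/9}{1 - 1/9}
\;=\; \frac14 .
```

The expansion uses only the digits $0$ and $2$, so $\frac14$ is in the Cantor set; but it is not eventually constant, whereas endpoints of removed intervals have ternary expansions ending in all $0$s or all $2$s.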

Here’s one of my favorites: Let’s assume playing with a fair coin.

Theorem (false)
In a long coin-tossing game each player will be on the winning side for about half the time, and the lead will pass not infrequently from one player to the other.

The following is from Feller’s classic An Introduction to Probability Theory and Its Applications, Vol. 1:

According to widespread belief, a so-called law of averages should ensure the theorem above. But in fact this theorem is wrong, and contrary to the usual belief the following holds:

With probability $\frac{1}{2}$ no equalization occurs in the second half of the game, regardless of the length of the game. Furthermore, the last equalization is most likely to occur near the endpoints.

In fact this leads to the Arc sine law for last visits (see e.g. Vol 1, ch.3, section 4, Theorem 1).

Note: Please note these remarkable statements cited from Chapter III, “Fluctuations in Coin Tossing and Random Walks”:

(Feller): For example, in various applications it is assumed that observations on an individual coin-tossing game during a long time interval will yield the same statistical characteristics as the observation of the results of a huge number of independent games at one given instant. This is not so.

and later on:

(Feller): Anyhow, it stands to reason that if even the simple coin-tossing game leads to paradoxical results that contradict our intuition, the latter cannot serve as a reliable guide in more complicated situations.

[2015-07-16] Following a comment from @HenningMakholm, here are some examples exposing striking aspects.

  • Suppose that a great many coin-tossing games are conducted simultaneously at the rate of one per second, day and night, for a whole year. On the average, in one out of ten games the last equalization will occur before $9$ days have passed, and the lead will not change during the following 356 days. In one out of twenty cases the last equalization takes place within $2\frac{1}{2}$ days, and in one out of a hundred cases it
    occurs within the first $2$ hours and $10$ minutes.

  • Suppose that in a learning experiment lasting one year a child was consistently lagging except, perhaps, during the initial week. Another child was consistently ahead except, perhaps, during the last week. Would the two children be judged equal? Yet, let a group of $11$ children be exposed to a similar learning experiment involving no intelligence but only chance. One among the $11$ would appear as leader for all but one week, another as laggard for all but one week.

The examples above are in fact a consequence of the Arc sine law for last visits.
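A small simulation of the last-equalization time illustrates the arc sine clustering at the two ends of the game (the function name and the game/sample sizes are my own choices):

```python
import random

def last_equalization(steps: int) -> int:
    """Index of the last tie in a fair coin-tossing game of the given length."""
    lead, last = 0, 0
    for i in range(1, steps + 1):
        lead += 1 if random.random() < 0.5 else -1
        if lead == 0:
            last = i
    return last

n, games = 1000, 4000
lasts = [last_equalization(n) for _ in range(games)]
# Arc sine law: the last tie clusters near the two ends of the game.
extreme = sum(1 for t in lasts if t <= n // 10 or t >= 9 * n // 10) / games
print(extreme)   # around 0.41, far above the 0.2 a uniform spread would give
```

The arc sine law predicts $2\cdot\frac{2}{\pi}\arcsin\sqrt{0.1}\approx 0.41$ for this tail mass.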

Hypothesis: Every infinitely-differentiable function is real-analytic somewhere.

This is false, as shown by (for example) the Fabius function.

I’m surprised no one gave this answer already, so here it is:

There are more integers than there are natural numbers.

It’s obvious, isn’t it?

The probability that you hit any single point on a dart board is $0$ but the probability that you hit the dart board is $1$ (as long as you’re not as bad as I am at throwing darts ;D).


As @JpM pointed out I didn’t follow the format of these posts albeit the idea can (easily in my opinion) be understood from what I’ve said above.

Pseudo-Claim: The probability of hitting a single point on a dart board is greater than $0$ since the probability of hitting it at all (assuming that you will hit the dart board) is $1$.

Seems obvious in the sense that a bunch of $0$ can’t add up to be $1$ so each point must have some probability. Actually false because of some properties of measures.

Image of a measure zero set under a continuous map has measure zero!

My “theorem”:

The statement Everybody loves my baby, but my baby loves nobody but me is about a pair of lovers

It is so simple and so obvious, even my grandma will understand it. And no matter how much you explain the simple logic calculation which shows that we are talking about a single narcissist here, half the class of first-semester logic students will continue insisting that your proof is wrong, and they don’t know what is wrong about it, but it cannot refer to a single person.

In my opinion, the most interesting (but also sometimes not intuitive) results in mathematics are those that state a theorem that ends up being false because it actually holds in many cases, except for very few or very strange cases. In other words, the most “obvious” false theorems to me are those that have very difficult counterexamples.

Some examples:

  • Banach-Tarski: There exists a strict subset $A$ of the Euclidean $n$-ball $B$ such that one can partition $A$ and $B$ into an equal number of further subsets that can be mapped to each other by isometries. This shows that not all sets are measurable, and that it’s possible to perform partitions that do not preserve measure.

  • Non-finiteness of differentiable structures: For $\mathbb{R}^n$ with $n = 4$, there are uncountably many distinct differentiable structures.

  • Divergence of Fourier series: There exists an integrable function on $[-\pi, \pi]$ whose Fourier series diverges everywhere. This is extremely unusual because for any typical function we might write down, usually its Fourier series might diverge at one or a finite number of points, but will probably converge everywhere else.

Consider a function $f:(0, \infty) \rightarrow \mathbb{R}$ that is $\mathcal{C}^\infty$ on that interval. At first glance, one might think that if $\lim\limits_{x \to \infty} f(x) = 0$, then $\lim\limits_{x \to \infty} f'(x) = 0$. However, this is false. Here is but one counterexample:

$$f(x) = \frac{1}{x}\sin(x^2)$$

Further, if we add the stipulation that $f$ also be monotonic, counterexamples can still be found (though they are quite pathological).
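For the record, differentiating the counterexample above shows the failure directly:

```latex
f(x) = \frac{\sin(x^{2})}{x}
\quad\Longrightarrow\quad
f'(x) = 2\cos(x^{2}) - \frac{\sin(x^{2})}{x^{2}} ,
```

so $f(x) \to 0$, while the $2\cos(x^2)$ term keeps $f'$ oscillating between values near $\pm 2$.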

A simple arc (homeomorphic image of the closed unit interval) in the plane has $2$-dimensional Lebesgue measure zero.

Theorem: Let $f_1(x,y)$ and $f_2(x,y)$ be two joint probability densities, each having its $x,y$ components positively correlated ($Cov_1(x,y)>0$, $Cov_2(x,y)>0$). Let $f_3=\alpha f_1 + (1-\alpha) f_2$ be the mixing density, for some $0\le \alpha\le 1$. Then $Cov_3(x,y)>0$.

In words: mixing populations preserves the correlation sign. In other words: if the average MSE male user is brighter than the mean, and if the average MSE female user is brighter than the mean, then the average MSE user is brighter than the mean. Obviously true.

False. See Simpson’s paradox.
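A toy numeric instance (the six points are invented purely for illustration): each sub-population has positive covariance, yet the equal-weight mixture has negative covariance:

```python
def cov(pts):
    """Population covariance of a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return sum((x - mx) * (y - my) for x, y in pts) / n

# Two populations, each trending upward (positive covariance)...
f1 = [(0, 0), (1, 1), (2, 2)]
f2 = [(10, -10), (11, -9), (12, -8)]
# ...but the second sits far to the lower-right of the first, so the
# equal-weight mixture (alpha = 1/2) trends downward overall.
mixed = f1 + f2

print(cov(f1), cov(f2), cov(mixed))  # positive, positive, negative
```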

Here are some of the false statements popping into my mind that made me raise at least one eyebrow when I first realized they were not true.

Every linear function between two vector spaces is continuous.

True only as long as the domain is finite-dimensional. If it is not, then there exists a linear function that is not continuous at any point!
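A standard witness on an infinite-dimensional space: take the polynomials on $[0,1]$ with the sup norm, and differentiate at the endpoint:

```latex
D(p) = p'(1), \qquad p_n(x) = x^{n}, \qquad
\|p_n\|_\infty = 1 \quad\text{while}\quad D(p_n) = n \to \infty ,
```

so $D$ is linear but unbounded, hence discontinuous everywhere.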

The set of real numbers can in no way be (totally) ordered in such a way that every non-empty set in it has a least element.

False if choice is assumed, by the well-ordering theorem.

$\mathbb Q$ is not countable.

I am still tempted to believe it sometimes…

If the derivative of a continuous real-to-real function exists almost everywhere and (wherever it exists) vanishes almost everywhere, then the function must be constant.

False. In fact, there exists a function that satisfies the premise and it is strictly [sic!] increasing!

Any compact set is closed.

The name “compact” would suggest this, but this can be guaranteed only in Hausdorff spaces.

A set is compact if and only if every sequence in it contains a convergent subsequence.

While true in metric spaces, not only is it false in some more general topological spaces, but also neither condition implies the other!