Alternatives to show that $|\mathbb{R}|>|\mathbb{Z}|$

Cantor’s Diagonal Argument is the standard proof of this theorem. However, there must be other proofs; what are some of them?

I am asking this because whenever I think of this question, I immediately think of Cantor’s argument, which kills the possibility of other interesting finds.

Solutions Collecting From Web of "Alternatives to show that $|\mathbb{R}|>|\mathbb{Z}|$"

picakhu: This was posted on MO and there were some interesting answers. I believe the answer I gave there avoids any hidden use of diagonalization. Other answers there are interesting and useful to think about even if, at the end, a diagonal argument is lurking.

Below is an abridged version of my answer there. It assumes some knowledge of ordinals.

More generally, the question can be restated as:

Is there a proof of Cantor’s theorem that ${}|X|<|{\mathcal P}(X)|$ that is not a diagonal argument?

I learned the following in A. Kanamori-D. Pincus. “Does GCH imply AC locally?”, in Paul Erdős and his mathematics, II (Budapest, 1999), 413-426, Bolyai Soc. Math. Stud., 11, János Bolyai Math. Soc., Budapest, 2002. It dates back to Zermelo’s 1904 well-ordering paper.

a. It is enough to show that there is no injection $F:{\mathcal P}(Y)\to Y$ for any set $Y$.

b. If there is such an injection $F$ we can (definably from $F$) produce a pair $(A,B)$ with $A\ne B$ and $F(A)=F(B)$. In effect, Zermelo proved that:

For any $F:{\mathcal P}(Y)\to Y$ there is a unique well-ordering $(W, \lt)$ with $W\subseteq Y$ such that:

1. $\forall x\in W\,(F(\{y \in W \mid y \lt x\}) = x)$, and
2. $F(W)\in W$.

We can then take $A=W$ and $B=\{y\in W\mid y\lt F(W)\}$.

c. Zermelo’s theorem can be proved using the (set theoretic) recursion theorem and Hartogs’ theorem that for any set $A$ there is a least ordinal $\alpha$ that does not inject into $A$. Neither uses diagonalization.
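For finite sets, Zermelo’s construction of $W$ can even be carried out by hand: start from $F(\emptyset)$ and keep applying $F$ to the initial segment built so far, stopping when $F$ outputs an element already produced. At that point $A=W$ and $B=\{y\in W\mid y\lt F(W)\}$ collide under $F$. A minimal Python sketch of this recursion (the function name and the particular choice of $F$ are hypothetical, just for illustration):

```python
def zermelo_collision(F):
    """Given any F : P(Y) -> Y with Y finite, build Zermelo's well-ordered
    set W by recursion: the next element is F applied to the elements
    produced so far.  The loop stops when F(W) already lies in W; then
    A = W and B = {y in W : y < F(W)} satisfy A != B but F(A) = F(B)."""
    seq = []  # the well-ordering of W, as a list in increasing order
    while F(frozenset(seq)) not in seq:
        seq.append(F(frozenset(seq)))
    fW = F(frozenset(seq))           # F(W), which now repeats an element
    A = frozenset(seq)               # A = W
    B = frozenset(seq[:seq.index(fW)])  # B = {y in W : y < F(W)}
    return A, B

# A hypothetical F : P({0,1,2}) -> {0,1,2}, e.g. the size of the set mod 3.
F = lambda s: len(s) % 3
A, B = zermelo_collision(F)
assert A != B and F(A) == F(B)   # F is provably not injective
```

Of course no $F:{\mathcal P}(Y)\to Y$ on a finite $Y$ can be injective for cardinality reasons alone; the point of the sketch is only to show how the recursion definably produces the colliding pair from $F$.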

Cantor’s diagonal argument shows that a given function cannot be surjective.

One can construct the Cantor set and thus show $|P(\mathbb{N})| \le |\mathbb{R}|$ and from Cantor’s theorem it follows that $|\mathbb{N}| < |P(\mathbb{N})|$.

(of course, I’ve been using the fact $|\mathbb{N}| = |\mathbb{Z}|$ freely here)

Edit:
Do note that this way doesn’t really use any implicit diagonalization argument. It just uses the two facts that $|\mathbb{N}|<|P(\mathbb{N})|$ and that $|P(\mathbb{N})|\le|\mathbb{R}|$. There’s something to be said about the fact that the “natural” injection of $P(\mathbb{N})$ into the Cantor set only works for the infinite subsets and not for most finite subsets. However, from Cantor’s theorem we have that “most” of the power set is the infinite subsets anyway, so this is not a real problem.
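The injection of $P(\mathbb{N})$ into the Cantor set can be sketched concretely: send a subset to the real whose ternary digits are $2$ at the positions in the subset and $0$ elsewhere. (This particular variant is injective even on finite subsets, since ternary expansions using only the digits $0$ and $2$ are unique.) A minimal Python sketch, truncated to an $8$-element universe; all names here are hypothetical:

```python
from fractions import Fraction
from itertools import chain, combinations

def cantor_point(subset, depth=8):
    """Map a subset of {0, ..., depth-1} into the Cantor set: the n-th
    ternary digit is 2 if n is in the subset, else 0.  (A finite
    truncation of the injection P(N) -> [0, 1].)"""
    return sum(Fraction(2, 3 ** (n + 1)) for n in subset if n < depth)

# Distinct subsets land on distinct points, so the (truncated) map is
# injective -- checked here on all 2^8 subsets of {0, ..., 7}.
universe = range(8)
subsets = list(chain.from_iterable(combinations(universe, k) for k in range(9)))
points = {cantor_point(frozenset(s)) for s in subsets}
assert len(points) == len(subsets) == 2 ** 8
```

Exact rational arithmetic (`Fraction`) is used so the injectivity check is not spoiled by floating-point rounding.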

I read about this proof in Raymond Smullyan’s book “Satan, Cantor and Infinity”. The proof is due to William Zwicker (according to the book) and involves something called a hypergame. I’ll first mention the hypergame paradox (because it’s fun and can give some insight into the proof) and then write out the proof.

Let’s call a game with turns a normal game if it always ends after finitely many turns. For example, tic-tac-toe and checkers are both normal games. Now imagine the following game, which I will call hypergame: on the first turn the first player names a normal game. On the second turn the second player plays the first move of the named normal game, and the players then continue playing that game in turns.

The paradox arises from the question: is hypergame a normal game? Assume that it is, and suppose we try to play it: I go first and say “Let’s play hypergame” (I can do that, since it’s a normal game). Then it’s your turn, and you can also say “Let’s play hypergame,” and so forth. Thus hypergame cannot be normal, since if it were we could start a play of hypergame that never ends. But if hypergame isn’t normal, then every play of hypergame ends after finitely many turns (since whatever normal game I name ends after finitely many moves, and our play lasts that number of turns plus one), which makes hypergame normal!

Now the proof: assume we are given a function $f:\mathbb{N}\to\mathcal{P}(\mathbb{N})$, and say $f$ sends $n$ to $X_n$. We define a path as a finite or infinite sequence of natural numbers such that each element $n$ of the path is either the last one (and the path is finite) or is followed by an element of $X_n$. For example, take a number $n$. If $m\in X_n$, then the successor of $n$ in a path can be $m$; if $X_n$ is empty, then $n$ is always the last element of a path. Now we call an element $n\in\mathbb{N}$ normal if every path that begins with $n$ is finite.

Finally, take the set $Z$ that contains exactly the normal elements of $\mathbb{N}$. It’s easy to see that there is no $n_0$ such that $Z=X_{n_0}$, which means that $f$ is not onto. The argument is essentially the same as the one above: assume that $Z=X_{n_0}$. If $n_0\in Z$ then $n_0\in X_{n_0}$, so there is an infinite path, namely $(n_0,n_0,n_0,\ldots)$; but $Z$ is the set of all normal elements, so that’s impossible. If $n_0\notin Z$ then there is an infinite path that begins with $n_0$ (by the definition of $Z$); let’s call it $(n_0,m,k,\ldots)$. But then there is an infinite path that begins with $m$ (take the aforementioned path and remove $n_0$ from the start). This is again impossible, since $m\in X_{n_0}=Z$ (by the definition of a path) and every element of $Z$ is normal.
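On a finite ground set the same construction can be run directly: an infinite path exists exactly when some path revisits an element, so normality is decidable, and the set $Z$ of normal elements is never in the range of $f$. A small Python sketch; the particular $f$ below is a made-up example:

```python
def normal(n, f, seen=()):
    """n is normal if every path starting at n is finite.  On a finite
    ground set an infinite path exists exactly when a path revisits an
    element, so non-normality is detected by cycle checking."""
    if n in seen:
        return False  # we can loop forever, so an infinite path exists
    return all(normal(m, f, seen + (n,)) for m in f[n])

# A toy f : {0,...,4} -> P({0,...,4}); f[n] plays the role of X_n.
f = {0: {1}, 1: {0, 2}, 2: set(), 3: {2}, 4: {2, 3}}

Z = {n for n in f if normal(n, f)}
# Zwicker's argument: Z never equals any X_n, so f is not onto.
assert all(Z != X_n for X_n in f.values())
```

Here $0$ and $1$ feed the infinite path $(0,1,0,1,\ldots)$, so $Z=\{2,3,4\}$, which indeed is not among the sets $X_n$.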

This proof uses self-reference but doesn’t seem to use the diagonal lemma. I don’t know if it’s hidden somewhere in the proof (though it looks to me like it isn’t).

Cantor had several other proofs before coming up with the diagonalization argument. (See http://en.wikipedia.org/wiki/Cantor%27s_first_uncountability_proof for one version. Or https://mathoverflow.net/questions/23953/earliest-diagonal-proof-of-the-uncountability-of-the-reals.)
I also read a recent account of this in a Math Monthly article. (I think a version of that article is at http://www.math.jhu.edu/~wright/Cantor_Pick_Phi.pdf.)

The “Nested Intervals” proof is really very nice. Assume the reals are countable and arrange them as a sequence $x_1, x_2, \ldots$. Then use that sequence to create intervals $[a_n, b_n]$ with $a_{n-1}<a_n<b_n <b_{n-1}$, choosing each $[a_n, b_n]$ so that it excludes $x_n$. The intersection of this sequence of nested intervals is nonempty (this is the Nested Intervals theorem), yet no real number in our list can be inside it! This contradiction proves that we could not have arranged the reals in a sequence in the first place.
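The interval construction can be made concrete: at each step split the current interval into three equal closed thirds and keep one that misses $x_n$ (at least one of the three always does). A simplified Python sketch using exact rational arithmetic; the enumeration below is of course only a finite stand-in for a hypothetical list of all reals:

```python
from fractions import Fraction

def shrink(a, b, x):
    """Return a closed subinterval of [a, b] that excludes x: of the
    three equal closed thirds of [a, b], at least one avoids x."""
    third = (b - a) / 3
    for lo in (a, a + third, a + 2 * third):
        hi = lo + third
        if x < lo or x > hi:
            return lo, hi
    raise AssertionError("unreachable: x cannot meet all three thirds")

# A made-up "enumeration" of some reals (here: a few rationals).
xs = [Fraction(k, 7) for k in range(20)]

a, b = Fraction(0), Fraction(1)
for x in xs:
    a, b = shrink(a, b, x)
    assert not (a <= x <= b)     # x_n is excluded from the n-th interval

# Since the intervals are nested, every listed x_n misses the final
# interval, so any point of it (e.g. its midpoint) is not in the list.
mid = (a + b) / 2
assert all(mid != x for x in xs)
```

In the real proof the intersection of all the intervals is nonempty by completeness; the sketch stops after finitely many steps, so the surviving interval plays that role.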