# Halmos, Naive Set Theory, recursion theorem proof: why must he do it that way?

Summary: I understand the proof Halmos gives in his Naive Set Theory for the recursion theorem, but I don’t understand why he has to do it that way. I give an alternative proof which may be flawed. If so, I want to understand why it’s flawed.

The context is the following discussion made by Halmos in Section 12 (The Peano Axioms).

Induction is often used not only to prove things but also to define things. Suppose, to be specific, that $f$ is a function from a set $X$ into the same set $X$, and suppose that $a$ is an element of $X$. It seems natural to try to define an infinite sequence $\{u(n)\}$ of elements of $X$ (that is, a function $u$ from $\omega$ to $X$) in some such way as this: write $u(0) = a,\ u(1) = f(u(0)),\ u(2) = f(u(1))$, and so on. If the would-be definer were pressed to explain the “and so on,” he might lean on induction. What it all means, he might say, is that we define $u(0) = a$, and then, inductively, we define $u(n^+)$ as $f(u(n))$ for every $n$. This may sound plausible, but as justification for an existential assertion, it is insufficient. The principle of mathematical induction does indeed prove, easily, that there can be at most one function satisfying all the stated conditions, but it does not establish the existence of such a function.
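To make the would-be definer's "and so on" concrete, here is a small sketch of my own (the choice of $f$ and $a$ is mine, not Halmos's). It computes the intended values $u(n) = f^n(a)$ one at a time, which is exactly what iteration does; note that being able to compute each value is not the same as proving that a single function $u$ on all of $\omega$ exists, which is Halmos's point.

```python
def u(n, f, a):
    """The informal 'and so on': iterate f starting from a, so u(n) = f^n(a).

    This computes values one by one; it does not by itself establish the
    existence of a function u defined on all of omega.
    """
    x = a
    for _ in range(n):
        x = f(x)
    return x

# Illustration with X = integers, f(x) = 2*x, a = 3 (an arbitrary choice):
f = lambda x: 2 * x
print([u(n, f, 3) for n in range(5)])  # [3, 6, 12, 24, 48]
```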

OK, fair enough. This argument is circular in that it uses the function $u$ in order to define $u$. Halmos then proves a theorem which does establish the existence of such a function:

Recursion theorem. If $a$ is an element of a set $X$, and if $f$ is a function from $X$ into $X$, then there exists a function $u$ from $\omega$ into $X$ such that $u(0) = a$ and such that $u(n^+) = f(u(n))$ for all $n \in \omega$.


He proves this by considering the class $\mathcal{C}$ of all subsets $A$ of $\omega \times X$ such that $(0,a) \in A$ and for which $(n^+, f(x)) \in A$ whenever $(n,x) \in A$. He notes that the class is not vacuous, because $\omega \times X$ itself satisfies the conditions. He then forms the intersection of all elements of $\mathcal{C}$ and proves that it is a function with the desired properties.
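Halmos's top-down construction can be imitated on a finite analogue (this illustration is my own, not his): replace $\omega$ by $\{0, \dots, N-1\}$, take a small finite $X$, enumerate every subset $A$ of $\omega \times X$ satisfying the two closure conditions, and intersect them. The intersection comes out as the graph of the desired function.

```python
from itertools import chain, combinations

# Finite analogue of the intersection construction: omega -> {0, ..., N-1},
# with a small X, a, and f chosen arbitrarily for illustration.
N = 3
X = [0, 1]
a = 0
f = lambda x: 1 - x  # any function from X into X

pairs = [(n, x) for n in range(N) for x in X]

def admissible(A):
    """A is in the class C: contains (0, a) and is closed under
    (n, x) -> (n+1, f(x)), truncated at N."""
    if (0, a) not in A:
        return False
    return all((n + 1, f(x)) in A for (n, x) in A if n + 1 < N)

# Enumerate all subsets of omega x X and keep the admissible ones.
subsets = chain.from_iterable(combinations(pairs, r) for r in range(len(pairs) + 1))
C = [frozenset(A) for A in subsets if admissible(frozenset(A))]

# The intersection of all admissible sets is the graph of u.
U = frozenset.intersection(*C)
print(sorted(U))  # [(0, 0), (1, 1), (2, 0)] -- i.e. u(0)=0, u(1)=1, u(2)=0
```

The brute-force enumeration is of course unavailable in the infinite case; there the existence of $\mathcal{C}$ and of its intersection rests on the axioms of set theory, which is precisely why the top-down route works where iteration alone does not.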

I have no problem understanding the proof, but I am trying to understand why it’s necessary to do it the way he did. He uses what one might call a “top-down” approach, starting with all of $\omega \times X$, and reducing it to a subset which is the desired function.

Would not the following “bottom-up” approach work? To me it just seems like a more pedantic version of the flawed induction argument, so I’m guessing that this is also wrong. But I don’t see what the problem is.

Possibly flawed proof:

For each $n \in \omega$, define $U_n = \{(n, f^n(a))\} \subseteq \omega \times X$, where $f^n$ denotes the $n$-fold composition of $f$. My convention (and that of Halmos) is that $0 \in \omega$, and by $f^0(a)$ I mean simply $a$.

Define $U = \bigcup_{n=0}^{\infty}U_n$. Clearly $U \subseteq \omega \times X$. To prove that $U$ is a function from $\omega$ to $X$, we need to verify that for each $n \in \omega$, there is exactly one element of $U$ with $n$ in the first slot. But this is immediately clear from the construction of $U$.

It remains to verify that $U$ satisfies the indicated recursion. Since $U_0 = \{(0, a)\} \subseteq U$, we have $U(0) = a$. Moreover, given any $n \in \omega$, we have $U_n = \{(n, f^n(a))\} \subseteq U$, so $U(n) = f^n(a)$, and $U_{n^+} = U_{n+1} = \{(n+1, f^{n+1}(a))\} \subseteq U$, so $U(n+1) = f^{n+1}(a)$. But the latter is simply $f(f^n(a)) = f(U(n))$, and so $U$ satisfies the recursion.

The following question is related to, but not the same as, mine: Definition by Recursion: why is the existence part not (almost) obvious? I understand why the method proposed in that question is flawed due to circular reasoning.

#### Answer

If you look closely, you’ll see that you’re defining the sequence of sets $U_n$ recursively: you can’t get $U_{n+1}$ without having $f^{n+1}(a)$, which requires that you have $f^n(a)$ — and that’s equivalent to having $U_n$. In essence you’re assuming that there is a function $\varphi$ from $\omega$ to $\omega\times X$ such that $\varphi(n)=\{\langle n,f^n(a)\rangle\}$ for each $n\in\omega$; but that’s pretty clearly equivalent to assuming the existence of the function $U$ whose existence you’re trying to prove.
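The hidden recursion the answer points to can be made explicit in code (my own illustration): any honest computation of $f^n(a)$ is itself defined by recursion on $n$, with the call for $n$ depending on the call for $n - 1$, just as $U_{n+1}$ depends on $U_n$. Justifying that such a recursion determines a function on all of $\omega$ is exactly what the recursion theorem provides, so the bottom-up proof presupposes what it is trying to prove.

```python
def f_power(f, n, a):
    """f^n(a), defined by recursion on n -- the very recursion at issue."""
    if n == 0:
        return a                         # f^0(a) = a
    return f(f_power(f, n - 1, a))       # f^{n+1}(a) = f(f^n(a))

# Each call needs the previous one, mirroring U_{n+1} needing U_n:
print(f_power(lambda x: 2 * x, 3, 1))  # 8
```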