An easy example of a non-constructive proof without an obvious “fix”?

I wanted to give an easy example of a non-constructive proof, or, more precisely, of a proof which states that an object exists, but gives no obvious recipe to create/find it.

Euclid’s proof of the infinitude of primes came to mind; however, there is an obvious way to “fix” it: just try all the numbers between the biggest prime on your list and the constructed number, and you’ll find a new prime in a finite number of steps.
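
For instance, that brute-force fix might look like the following sketch (the helpers is_prime and next_new_prime are mine, purely for illustration):

    from math import prod

    def is_prime(n):
        """Trial division; fine for small n."""
        return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

    def next_new_prime(primes):
        """Search every number up to prod(primes) + 1 for a prime not already on the list."""
        N = prod(primes) + 1
        for m in range(2, N + 1):
            if m not in primes and is_prime(m):
                return m

    print(next_new_prime([2, 3, 5, 7]))   # 11, found well before 2*3*5*7 + 1 = 211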

Are there good examples of simple non-constructive proofs which would require a substantial change to be made constructive? (Or better yet, can’t be made constructive at all).


Some digit occurs infinitely often in the decimal expansion of $\pi$: by the pigeonhole principle, at least one of the ten digits must appear infinitely many times, yet no particular digit is known to do so.

Claim: There exist irrational $x,y$ such that $x^y$ is rational.

Proof: If $\sqrt2^{\sqrt2}$ is rational, take $x=y=\sqrt 2$. Otherwise take $x=\sqrt2^{\sqrt2}$, $y=\sqrt2$, so that $x^y=\bigl(\sqrt2^{\sqrt2}\bigr)^{\sqrt2}=\sqrt2^{\,2}=2$.

There is a standard kind of example here. Let $T$ be any open question that requires substantial work to resolve. Consider:

There is a natural number $n$ such that $n = 0$ if and only if $T$.

That statement is clearly true, via a “simple” non-constructive proof: if $n = 0$ is not an example, then $n = 1$ is an example. But if we could produce an explicit example, then we would know whether $T$ holds.

A key point is that the proof does not require the axiom of choice, advanced mathematics, or anything other than ordinary classical logic. The nonconstructivity in classical mathematics is a direct consequence of the classical logic that is used. In constructive mathematics, the property that explicit examples can be constructed for provable existential theorems is called the witness property.

It is common for people to point out examples of “nonconstructive” consequences of Zorn’s lemma or the axiom of choice, but classical set theory without the axiom of choice is equally nonconstructive.

The intermediate value theorem, in the form that for a continuous function $f:[a,b]\to\Bbb R$ with $f(a)<0<f(b)$ there exists $x\in(a,b)$ with $f(x)=0$, is not valid constructively.

See for instance here or here. The fundamental difficulty with the obvious classical proof of the IVT is that one cannot prove constructively that for $c\in(a,b)$ either $f(c)\geq0$ or $f(c)\leq0$ holds, whence one cannot decide whether to concentrate on $[a,c]$ or on $[c,b]$ in trying to localise the search for $x$ with $f(x)=0$, and therefore no sequence of numbers converging to such an $x$ can be constructed.
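
For contrast, here is a minimal sketch of the classical bisection procedure, which silently assumes that the sign of $f(c)$ can be decided exactly (the function and tolerance below are just illustrative):

    def bisect_root(f, a, b, tol=1e-12):
        """Assumes f(a) < 0 < f(b); classically, this converges to a zero of f."""
        while b - a > tol:
            c = (a + b) / 2
            if f(c) <= 0:      # the case split that is not constructively decidable
                a = c
            else:
                b = c
        return (a + b) / 2

    print(bisect_root(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599..., the cube root of 2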

Definition: A real number $x$ is called normal if for every base $b>1$ the digits $0,1,2,\ldots,b-1$ occur with equal asymptotic frequency in the base-$b$ expansion of $x$.

Theorem: Normal numbers exist.

The standard proof is non-constructive as it proceeds by showing that the set of non-normal numbers has Lebesgue-measure zero, hence must have non-empty complement.

I don’t know whether the existence of a computable normal number can be proved, and if so, whether that proof can be made constructive in the sense that one can explicitly write down an algorithm for constructing a normal number. I’d be most interested in hearing about such a thing in case anyone knows more about it. Until then, I’m not convinced that normal numbers exist in any real sense, only that they are unavoidable artefacts of a certain logical and set-theoretical framework for the real numbers.
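
For what it’s worth, here is a small sketch that only illustrates the definition: it estimates the digit distribution of a finite prefix of a base-$b$ expansion (it cannot, of course, certify normality). The prefix below is from the base-10 Champernowne number $0.123456789101112\ldots$, which is known to be normal in base 10, though that says nothing about other bases:

    from collections import Counter

    def digit_frequencies(digits, b):
        """Relative frequency of each base-b digit in a finite list of digits."""
        counts = Counter(digits)
        n = len(digits)
        return {d: counts.get(d, 0) / n for d in range(b)}

    prefix = "".join(str(k) for k in range(1, 2000))   # digits of the Champernowne number
    print(digit_frequencies([int(c) for c in prefix], 10))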

Many existence theorems related to algebraically closed fields are inherently non-constructive.

1) Every field has an algebraic closure. Try to make this explicit for $\mathbf C(x)$ or for $\mathbf Q_p$.

2) Any two algebraically closed fields with the same characteristic and uncountable cardinality are isomorphic as fields. To take a special case, any algebraically closed field of characteristic $0$ with the same cardinality as $\mathbf C$ is isomorphic to $\mathbf C$ as a field. This comes up in number theory, where the algebraic closure of $\mathbf Q_p$ is isomorphic to $\mathbf C$. Good luck constructing the isomorphism.

3) The field automorphism of $\mathbf Q(\sqrt{2})$ that sends $\sqrt{2}$ to $-\sqrt{2}$ extends (in many ways) to a field automorphism of $\overline{\mathbf Q}$. The problem is that the only explicit automorphisms of $\overline{\mathbf Q}$ that can be written down are the identity and complex conjugation, neither of which has the desired effect on $\sqrt{2}$.

Baire’s category theorem can be used to establish non-constructive existence proofs in a parsimonious and elegant way.

Theorem (Baire). No non-empty complete metric space is the union of countably many nowhere dense subsets.

As Folland (1999, p. 162) puts it, “[Baire’s] category theorem is often used to prove existence results: One shows that objects having a certain property exist by showing that the set of objects not having the property (within a suitable complete metric space) is [a countable union of nowhere dense sets]. For example, one can prove the existence of nowhere differentiable continuous functions in this way.”

In some instances, Baire’s category theorem helps establish non-existence results. For example, it implies that the dimension of every Banach space is either finite or uncountable. That is, there exist no Banach spaces of countably infinite dimension.

You can recognize any minor-closed graph class in cubic time (by the Robertson–Seymour theorem), but actually finding such an algorithm may be impossible.

Since the minor ordering is a well-quasi-order (wqo), any minor-closed graph class $\mathcal{G}$ has a finite set of forbidden minors $\mathcal{H}$. Hence, if we want to check whether a graph $G$ is in $\mathcal{G}$, we simply check for every graph $H \in \mathcal{H}$ whether $H$ is a minor of $G$. Each such test can be done in cubic time. However, finding $\mathcal{H}$ may not be possible.
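
Schematically, the recognition procedure looks as follows; has_minor stands in for the cubic-time minor test of Robertson and Seymour (a hypothetical placeholder here), and the forbidden set $\mathcal{H}$ is assumed to be handed to us, which is exactly the non-constructive ingredient:

    def in_minor_closed_class(G, forbidden_minors, has_minor):
        """G belongs to the class iff it contains none of the forbidden minors."""
        return not any(has_minor(G, H) for H in forbidden_minors)

    # Shape of a call, e.g. for planarity, whose forbidden minors are known (K5 and K3,3):
    # in_minor_closed_class(G, [K5, K33], has_minor)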

Some graph classes that can be recognized in cubic time this way: graphs of bounded genus, and graphs of bounded treewidth, pathwidth, or treedepth.

In cryptography there are many times when I wish to prove that I have some private key, but need to make it impossible (or really difficult) for you to construct that key yourself.

Take factoring, the basis of RSA.

Claim: There exist integers $P, Q > 1$ such that $P \cdot Q = 651041$.

Proof:

  1. If $651041$ were prime, then by Fermat’s little theorem $2^{651040} \bmod 651041 = 1$.
  2. But $2^{651040} \bmod 651041 = 436566$.
  3. Thus $651041$ is composite, so a non-trivial factorization $P \cdot Q$ exists.
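
The modular exponentiation in step 2 is easy to check by machine, for example as follows (this only verifies compositeness; it does not produce $P$ and $Q$):

    N = 651041
    r = pow(2, N - 1, N)   # fast modular exponentiation
    print(r)               # the proof above reports 436566; any value != 1 shows N is composite
    if r != 1:
        print(N, "is composite, so some non-trivial factorization P * Q exists")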

Certainly you could construct $P$ and $Q$ by trial division, but it could take you a while (by hand). Do you measure the difficulty of a constructive proof in terms of the amount of space it takes to write down, or the length of time it would take to run it?

Do you allow for probabilistic proofs? If so, you might also consider looking at the idea of Zero Knowledge Proofs:
http://en.wikipedia.org/wiki/Zero-knowledge_proof

The zero knowledge proof I remember best deals with graph coloring.

I claim that a particular graph G has a valid k-coloring.

Steps:

  1. (Before making my claim) I generate graph G and its coloring myself in such a way as to guarantee that it has a valid k-coloring.
  2. I claim to you that G has a valid k-coloring.
  3. I randomly shuffle the colors in my coloring (e.g., all red nodes become blue, all blue nodes become green, and so forth).
  4. I send you the shuffled colors of all the nodes of the graph, but I encrypt each node’s color with a separate key.
  5. You pick a single edge of the graph, and I send you the keys to decrypt the colors on the two nodes of that edge.
  6. When you decrypt the colors, you can check that the two endpoint colors differ, as they must on every edge of a valid coloring.

We repeat steps 3-6 until you believe that I really do have a valid coloring of the graph. Thus I have proved that there exists a valid coloring of graph G without getting you any closer to constructing a valid coloring yourself.
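
A toy sketch of one round of this protocol, using salted SHA-256 hashes as the “encryption” (the graph, colouring, and all names below are made up for illustration):

    import hashlib
    import os
    import random

    def commit(colour):
        """Commit to a colour with a random key (salt); returns (commitment, key)."""
        key = os.urandom(16)
        return hashlib.sha256(key + bytes([colour])).digest(), key

    def one_round(edges, colouring, k=3):
        perm = random.sample(range(k), k)                      # step 3: shuffle the colour names
        shuffled = {v: perm[c] for v, c in colouring.items()}
        commitments, keys = {}, {}
        for v, c in shuffled.items():                          # step 4: one commitment per node
            commitments[v], keys[v] = commit(c)
        u, v = random.choice(edges)                            # step 5: verifier picks an edge
        for w in (u, v):                                       # step 6: open and check the two colours
            assert hashlib.sha256(keys[w] + bytes([shuffled[w]])).digest() == commitments[w]
        return shuffled[u] != shuffled[v]

    edges = [(0, 1), (1, 2), (0, 2)]                           # a triangle ...
    colouring = {0: 0, 1: 1, 2: 2}                             # ... with a proper 3-colouring
    print(all(one_round(edges, colouring) for _ in range(20))) # True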

There are reasonably-natural games (two player, perfect information) which we can prove are won by Player $1$, but for which no winning strategy is known; or, at least, for which it is not obvious how to transform the classical proof that Player $1$ wins into a constructive proof.

The classic example is Chomp: we start with an $m\times n$ grid of squares. On each turn, a player removes some square, along with every square below and to the left of it. The player who takes the top-right square loses.

Since this is a finite game with no draws, one player or the other must have a winning strategy. And player $2$ can’t have a winning strategy, by a strategy-stealing argument: imagine player $1$ ate the bottom-left square, and player $2$’s winning response was to eat square $\alpha$. Then player $1$ could have just eaten square $\alpha$ in the first place (which removes the bottom-left square as well, producing the very same position), and then pretended for the rest of the game that they were player $2$, playing player $2$’s own strategy against them to force the real player $2$ to lose.

Although this is perfectly valid classically, it’s highly non-constructive.
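
For small boards one can of course find a winning first move by brute force, which only highlights the gap: the classical proof works for every $m\times n$ board, while exhaustive search is limited to tiny cases. A sketch, with the board conventions of the description above (the helper names are mine):

    from functools import lru_cache

    def moves(state, poison):
        """All states reachable by eating a non-poison square (eating the poison loses outright)."""
        for (i, j) in state:
            if (i, j) != poison:
                yield frozenset((a, b) for (a, b) in state if not (a <= i and b <= j))

    @lru_cache(maxsize=None)
    def first_player_wins(state, poison):
        # With only the poisoned square left, there are no moves here: the player must eat it and loses.
        return any(not first_player_wins(nxt, poison) for nxt in moves(state, poison))

    m, n = 3, 4
    poison = (m, n)   # the top-right square
    board = frozenset((i, j) for i in range(1, m + 1) for j in range(1, n + 1))
    print(first_player_wins(board, poison))   # True, as the strategy-stealing argument predicts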


Actually, determinacy statements in general might push the bounds of constructivity. The usual proof of open determinacy is to assume Open has no winning strategy, and have Closed simply play to preserve Open’s lack of a winning strategy. This is impredicative, though (and this is backed up by the fact that computable open games need not have hyperarithmetic winning strategies), and I suspect open determinacy is not true in most constructive theories. (But I could very easily be wrong.)

Uncountability of the real numbers. Since the set of all finitely describable numbers (in any fixed formal language) is countable, uncountability implies the existence of real numbers that are not finitely describable.

To illustrate my point, here are some examples of finitely-describable numbers:

  • Any natural number or integer
  • Any rational number
  • Any algebraic number, defined for this purpose as a solution to an arbitrary
    polynomial equation with rational coefficients
  • Any computable number
  • Any number which can be defined by an infinite series of finite terms, such as $\pi$, $e$, etc.
  • Any number definable as a complement or arbitrary Boolean combination of computable sets, e.g. the number whose digits represent the halting set, or Chaitin’s constant
  • Any solution to an arbitrary finite first-order predicate
  • Any solution to a finitely-generated infinite list of first-order predicates
  • etc

So, any number we can describe using a finite number of symbols is (by definition) a member of a well-defined countable subset of the reals (the set of numbers definable in the language being used). So, even though there are infinitely more “undefinable” than “definable” reals, we cannot construct an “undefinable” real.

My favourite one: $[0, 1]$ is compact, i.e. every open cover of $[0, 1]$ has a finite subcover.

Proof: Suppose for a contradiction that there is an open cover $\mathcal{U}$ which does not admit any finite subcover. Then either $\left[ 0, \frac{1}{2} \right]$ or $\left[ \frac{1}{2}, 1 \right]$ cannot be covered by finitely many sets from $\mathcal{U}$; call it $I_1$. Again, one of the two subintervals of $I_1$ of length $\frac{1}{4}$ cannot be covered by finitely many sets from $\mathcal{U}$. Continuing, we get a descending sequence of intervals $I_n$, of length $\frac{1}{2^n}$ each, none of which can be finitely covered.

By the Cantor Intersection Theorem,

$$\bigcap_{n=1}^{\infty} I_n = \{ x \}$$

for some $x \in [0, 1]$. But there is some $U \in \mathcal{U}$ with $x \in U$, and since $U$ is open and the lengths of the $I_n$ shrink to $0$, we get $I_n \subseteq U$ for all sufficiently large $n$. That is a contradiction.

But given an arbitrary cover $\mathcal{U}$, I think finding a finite subcover may be a somewhat tedious task. :p

P.S. The proof above actually yields a procedure:

  1. See if $[0, 1]$ itself is covered by a single set from $\mathcal{U}$.
  2. If so, we’re done. If not, apply the same procedure recursively to $\left[ 0, \frac{1}{2} \right]$ and $\left[ \frac{1}{2}, 1 \right]$ to get a finite subcover of each half, then unite them.

The proof guarantees that you will eventually find a finite subcover (i.e. the recursion cannot go on forever), but you cannot tell in advance how long it will take. So it is not as constructive as one might expect.
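
As a toy illustration, here is the procedure for the special case where the cover is handed to us as a list of open intervals $(a, b)$; a genuinely arbitrary cover could not be searched like this, which is the point:

    def finite_subcover(lo, hi, cover):
        """Return a finite sublist of `cover` whose union contains [lo, hi]."""
        for (a, b) in cover:                      # step 1: is [lo, hi] inside a single set?
            if a < lo and hi < b:
                return [(a, b)]
        mid = (lo + hi) / 2                       # step 2: bisect and recurse on both halves
        return finite_subcover(lo, mid, cover) + finite_subcover(mid, hi, cover)

    cover = [(-0.1, 0.6), (0.4, 1.1), (0.3, 0.7)]
    print(finite_subcover(0.0, 1.0, cover))       # [(-0.1, 0.6), (0.4, 1.1)]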

I think that there are different kinds of examples, some cited in the other answers. To see the difference I give three examples.

1) The number $a$ defined as follows: $a=1$ if Goldbach’s Conjecture ($G$) is provable in ZFC, $a=0$ if $\neg G$ is provable in ZFC, and $a=2$ if $G$ is undecidable in ZFC. The value of this number is not known today, but it could eventually become known if someone settles the conjecture.

2) Chaitin’s constant is an example of a number that is well defined but not computable, so nobody will ever know it.

3) Let $\gamma$ be a real number, say $\gamma = 12.23873560031\cdots$, and define
$$c = 3\cdot10^{2} + 3\cdot10^{5} + 3\cdot10^{10} + \cdots,$$
where we have chosen the digit $3$ (it could be any digit) and the exponents of $10$ are the positions at which that digit occurs in the decimal expansion of $\gamma$. Then $c$ may or may not be a number, depending on whether the expansion contains only finitely many $3$s.
If we choose $\gamma = \zeta(5)$ (the value of the Riemann zeta function at $5$), we don’t know today whether its expansion contains only finitely many $3$s (as far as I know), so we don’t know whether $c$ is a number at all. If someone proves that $\zeta(5)$ really does have only finitely many $3$s in its expansion, there are two possibilities:
either the proof is not constructive, i.e. we cannot locate the position of the last $3$, in which case $c$ is a genuine integer, computable in principle, yet one we may never be able to exhibit; or the proof is constructive, we know the position of the last $3$, and we can actually compute the number $c$.
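
If one could scan the full expansion (which is exactly what the non-constructive case rules out), forming $c$ from a known prefix would be trivial; a sketch, using the digits of the $\gamma$ above:

    def c_from_digits(fractional_digits, chosen="3"):
        """Sum digit * 10**p over the positions p (1-indexed) where the chosen digit occurs."""
        positions = [p for p, ch in enumerate(fractional_digits, start=1) if ch == chosen]
        return sum(int(chosen) * 10**p for p in positions)

    print(c_from_digits("23873560031"))   # 3*10**2 + 3*10**5 + 3*10**10 = 30000300300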

How about Cantor’s proof that transcendental numbers exist? (Because the set of algebraic numbers is countable, while the set of real numbers is uncountable.) If I remember correctly, this proof appeared only after Liouville had already provided an explicit construction (namely $\sum 10^{-n!}$).

I know this example is hinted by other answers given, but I think it is worth stating explicitly.