# Formal notion of computational content

In constructive mathematics we often hear expressions such as “extracting computational content from proofs”, “the constructivity of mathematics lies in its computational content”, “realizability models allow one to study the computational content”, “intuitionistic double negation forgets the computational content of a proposition”, and so on.

I was wondering if there is a formal notion of computational content (of a proposition, proof, theory…), so that the meaning of expressions such as the ones above can be made precise. If so, I would be grateful to anyone who could give me some references. Thank you.

Addendum. Let me clarify that I do understand the intuitive meaning behind the words “computational content” in the expressions above. What I’m looking for is a formal definition.

It seems to me that in this area there are indeed many notions which are left unformalized, maybe because they are often used in a broad sense, or because a formal definition is felt to be unnecessary. Examples of these notions are realizability, proofs-as-programs paradigm, propositions-as-sets paradigm, intensionality/extensionality, proof irrelevance. Formal definitions have been proposed for some of these concepts. For instance, one could say that a theory satisfies the proofs-as-programs paradigm if it is consistent with the axiom of choice and Church’s thesis, following [1]; or that a type $B$ is proof irrelevant if $x \in B, y \in B \vdash x = y \in B$ is derivable.

I’m looking for the same kind of thing for “computational content”. For instance, a formal definition of computational content of propositions in a fixed theory could look like a function from the set of valid propositions to a partially ordered set, so that we would have that the computational content of $\neg \forall x \neg P(x)$ is always less than or equal to that of $\exists x P(x)$.

[1] M. Maietti & G. Sambin. Toward a minimalist foundation for constructive mathematics. In L. Crosilla and P. Schuster, editors, From Sets and Types to Topology and Analysis: Practicable Foundations for Constructive Mathematics, number 48 in Oxford Logic Guides, pages 91–114. Oxford University Press, 2005.

#### Answers

One way of formalizing the computational content of mathematical theorems is via Weihrauch reducibility, as described in the paper “Weihrauch Degrees, Omniscience Principles and Weak Computability” by Brattka and Gherardi. An important aspect is that it measures the computational content of theorems, not the computational content of proofs.

Weihrauch reducibility gives us a partially ordered set of mathematical theorems, in which the lower ones are “more computable” (or at least “less uncomputable”) than the higher ones. It is currently a very active area of research in computability theory.

Another method to formalize the computational content of theorems is Reverse Mathematics, which also gives a formal way to compare the computational strengths of different mathematical theorems. Intuitively, this gives a coarser order than Weihrauch reducibility, but it has been a very productive setting for analyzing theorems of ordinary mathematics.

The general problem that any definition of “computational content” will encounter is that the objects in mathematics are not typically constructed in a way that is suitable for computability. So it’s always necessary to choose coding systems to represent them. But every choice of coding, every choice of computability conventions, etc. leads to a different formal definition of “computational content”.

The following elaborates on MJD’s comment on type theory and the Curry-Howard correspondence.

Edit: I only noted afterwards that you seem to be familiar with type theory, so probably the things to follow are not at all new to you – I will keep them anyway as maybe someone else finds them helpful.

In constructive type theory, one deals with three syntactic entities: types, normal terms of a given type, and arbitrary well-typed terms of a given type. This triple abstracts (among other things) the classical triple of sets, canonical elements of sets, and constructive descriptions of elements, in that there is a syntactic operation of reduction/execution on terms which yields a normal term after finitely many reduction steps. (Strictly speaking, this is not always true; this property of a type system is called strong normalization and corresponds to consistency on the logical side.) In this sense, describing an element of a set within the framework of type theory already includes the implementation of a provably terminating algorithm constructing that element, and in particular has “constructive content”.

For example, one can introduce the type ${\mathbb N}$ of natural numbers by postulating terms $0 : {\mathbb N}$ (“$0$ is a term of type ${\mathbb N}$”) and $\text{S}: {\mathbb N}\to{\mathbb N}$ (“$\text{S}$ is a term of type ${\mathbb N}\to{\mathbb N}$, the type of functions from ${\mathbb N}$ to itself”), so that the normal terms of type ${\mathbb N}$ are precisely $0, \text{S} 0, \text{SS} 0, \dots$. One might then define addition and multiplication $+,\times: {\mathbb N}\times {\mathbb N}\to {\mathbb N}$ by induction and write down a term like $3 + 8\times 4$. This term is of type ${\mathbb N}$ as well, but it is non-canonical: only by repeated application of the defining equations of $+$ and $\times$ does it reduce to a canonical element of ${\mathbb N}$. In this sense it is an algorithm for computing an element of ${\mathbb N}$ rather than such an element itself. This is a trivial example, but in principle the situation is always like this.

Note also the similarity with functional programming: terms in type theory are like programs in functional languages in that their computational meaning lies in the sequence of successive syntactic manipulations they induce.
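As a concrete (and hedged) illustration of that similarity, here is the term language above transcribed into Haskell; the names `Nat`, `plus`, `times` and the conversion helpers are our own choices, not part of any particular type theory:

```haskell
-- The inductive type of naturals: Z and S play the roles of 0 and S above.
data Nat = Z | S Nat deriving (Show, Eq)

-- Addition and multiplication defined by induction on the second argument.
plus :: Nat -> Nat -> Nat
plus m Z     = m
plus m (S n) = S (plus m n)

times :: Nat -> Nat -> Nat
times _ Z     = Z
times m (S n) = plus (times m n) m

-- Helpers to move between Int and the unary representation.
fromInt :: Int -> Nat
fromInt 0 = Z
fromInt n = S (fromInt (n - 1))

toInt :: Nat -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n

-- The non-canonical term 3 + 8 * 4: evaluating it is exactly the
-- reduction to the normal form S (S (... (S Z))).
main :: IO ()
main = print (toInt (plus (fromInt 3) (times (fromInt 8) (fromInt 4))))
```

Running `main` performs the reduction sketched above and prints `35`: the program *is* the computation that turns the non-canonical term into a canonical numeral.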

This applies to propositions and proofs as well: in type theory, a proposition is identified with the type of its proofs, and proving the proposition means exhibiting a term of that type. Again, these terms may be executed/normalized, giving computational meaning to proofs. If, for example, the proposition claims the existence of an element of a certain set (like “the set of real roots of $5$ is not empty”), then a proof of that proposition amounts to writing down a term of the set/type under consideration, which by the above includes an algorithm for the computation of the element in question. Therefore, proofs have constructive content as well.
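A hedged Curry-Howard sketch of this in Haskell (Haskell is not dependently typed, so the “evidence” component of an existential can only be mimicked; all names here are illustrative, not from the answer above):

```haskell
-- Conjunction is the pair type and implication is the function type,
-- so a proof of (A and B) implies (B and A) is literally a program:
swapProof :: (a, b) -> (b, a)
swapProof (x, y) = (y, x)

-- An "existence proof" of a natural whose square exceeds 10: the witness
-- sits inside the proof term, and evaluation extracts it. In a genuinely
-- dependent type theory the second component would be a proof object
-- depending on the witness; here we can only record the witness itself.
newtype ExistsSquareGT10 = ExistsSquareGT10 { witness :: Integer }

proof :: ExistsSquareGT10
proof = ExistsSquareGT10 4   -- since 4 * 4 = 16 > 10

main :: IO ()
main = print (witness proof)
```

Normalizing `witness proof` is exactly the “extraction of computational content” from the existence proof: it returns the witness `4`.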

Edit: This is vague, but maybe one can view constructive type theories as the prime example of formal systems in which proofs and descriptions have computational content, and say that a “computational content” of an arbitrary formal system is given by an interpretation in type theory? In [1], which you cite above, the authors make a comment in this direction at the end of §1.4:

> It appears that in recent years the simple formulation of constructive set theory, in particular of Aczel’s CZF and its variants, has brought it to play a prominent role in the foundations for constructive mathematics. We remark, though, that constructive set theory fails to exhibit a genuinely constructive meaning on its own, for which it has to rely on its interpretation in type theory.

I found that very interesting, because although omitting the law of excluded middle is surely a necessary step towards constructively acceptable and computationally meaningful arguments, I still found it unclear whether a proof of an existence statement in a system of intuitionistic first-order logic would indeed give rise to an actual algorithm for the computation of the desired object. The interpretation in type theory seems to settle this.

Let me focus just on the last paragraph of the question:

> …a formal definition of computational content of propositions in a fixed theory could look like a function from the set of valid propositions to a partially ordered set, so that we would have that the computational content of $\neg \forall x \neg P(x)$ is always less than or equal to that of $\exists x P(x)$.

This sounds a lot like the following construction:

A $\mathcal{P}(\mathbb{N})$-valued predicate on $X$ is a function $f : X \rightarrow \mathcal{P}(\mathbb{N})$. We can define a preorder on the set $\mathcal{P}(\mathbb{N})^X$ of $\mathcal{P}(\mathbb{N})$-valued predicates as follows: say that $f \leq g$ if there is $e \in \mathbb{N}$ such that for every $x \in X$ and every $a \in f(x)$ we have $\{e\}(a) \downarrow$ and $\{e\}(a) \in g(x)$, where $\{e\}(a)$ denotes application of the $e$-th partial computable function to $a$. It is possible to view realizability in terms of $\mathcal{P}(\mathbb{N})$-valued predicates: one assigns to every formula $\phi$ an element $[\phi]$ of $\mathcal{P}(\mathbb{N})^X$.

This would actually give the opposite of the order you suggested. Lower down in the preorder are the $\mathcal{P}(\mathbb{N})$-valued predicates that are “difficult to compute” (and in some sense carry “more computational information”), and right at the bottom is the predicate constantly equal to the empty set (and therefore “impossible to compute”). We would have $[ \exists x \; P(x) ] \leq [ \neg \forall x \; \neg P(x) ]$, and in general this inequality can be strict.
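A hedged finite sketch of this preorder in Haskell, with Haskell functions standing in for Turing codes $\{e\}$ and finite lists standing in for the sets of realizers (all names are our own, and totality sidesteps the $\{e\}(a)\downarrow$ condition):

```haskell
-- A P(N)-valued predicate on x, with finite lists modeling sets of realizers.
type Pred x = x -> [Int]

-- f <= g, witnessed by a single transformer e sending each realizer
-- of f x to some realizer of g x, uniformly in x.
reducedBy :: (Int -> Int) -> Pred x -> Pred x -> [x] -> Bool
reducedBy e f g domain = and [ e a `elem` g x | x <- domain, a <- f x ]

-- Toy example: realizers of fEx encode a witness directly, while gNeg
-- (think of a negative formula) has only a trivial realizer carrying no
-- information. The constant map reduces fEx to gNeg, but nothing could
-- recover the witness in the other direction.
fEx, gNeg :: Pred Bool
fEx True  = [7]   -- 7 encodes a witness
fEx False = [3]
gNeg _    = [0]   -- trivial realizer

check :: Bool
check = reducedBy (const 0) fEx gNeg [True, False]

main :: IO ()
main = print check
```

Here `check` evaluates to `True`, illustrating how the predicate carrying more information sits below the one carrying less.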

This construction can be generalized by replacing $\mathbb{N}$ with any partial combinatory algebra (and in fact can be generalized even further than that).

For details see chapters 1 and 2 of Jaap van Oosten, *Realizability: An Introduction to its Categorical Side*.