What is a type?

In terms of mathematical logic, can someone give me a definition of the word ‘type’, along with a few examples of the word being used?

The book I am reading defines it as a “representative of inscriptions that look the same and utterances that sound the same”. I find this kind of vague, and I can’t understand the definition given by Wikipedia.

Answers

Looking at page 3 of the book linked, the author is using “type” in the philosophical sense, not the mathematical sense. This “type-token distinction” is a topic that gets a lot of attention in philosophy, but most books on mathematical logic don’t discuss it, because we can get by for a long time without worrying very much about it, and because discussing it properly would take time and energy away from the mathematics.

The general idea is that there is some collection of abstract “types” of things. For example, there is a type of “bicycles” and a type of “apples”. Then there are specific bicycles – each of which is a “token” of the type of bicycles, and there are specific apples, which are tokens of the type of “apples”. This should sound vaguely like something from Plato.

Similarly, there is a type of “the string 101”. Each time I write down “101”, I write down only a token of the string. So 101, 101, and 101 are three different tokens of the same string.

In philosophy, the question “what is a type, really?” has received a lot of discussion, as you might imagine. But that question is not particularly important to the direct study of mathematics: we can do a lot of math without ever worrying about it.

One place that the type/token distinction does arise in logic is with occurrences of variables. Consider the sentence of first-order logic
$$R(x) \land (\exists x)[P(x)]$$
Here there is only one variable, “x” – so here the variable is a type. There are two occurrences (tokens) of the variable, though (three if we count the one in “$(\exists x)$”). One of them is in the scope of a quantifier, and the other is not. This distinction between “variables” and “occurrences of variables” is quite important for understanding the concept of free and bound variables.

In the field of type theory (and computer programming) there is a somewhat related idea of the type of a variable. In these systems, every variable has a “type” associated with it, and the type of the variable dictates what can be done with the variable. For example, in one system I have a type of natural numbers and a type of sets of natural numbers. If I have two numbers, I can add them. If I have a number and a set, I can ask whether the number is an element of the set. But, in this typed system, I cannot add two sets or ask whether a number is an element of another number, because those operations don’t match those types. Most first-order logic is done in a non-typed way (so there is only one type of variable).
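The typed discipline described above can be sketched in any statically typed language. Here is a minimal illustration in TypeScript, with `number` standing in for the type of natural numbers and `Set<number>` for the type of sets of them (the function names are illustrative):

```typescript
// A sketch of a typed system: number stands in for the type of natural
// numbers, Set<number> for the type of sets of naturals.
const add = (m: number, n: number): number => m + n;
const isElem = (n: number, s: Set<number>): boolean => s.has(n);

const evens: Set<number> = new Set([0, 2, 4]);

console.log(add(2, 3));        // adding two numbers is well-typed
console.log(isElem(2, evens)); // asking membership is well-typed

// The type checker rejects exactly the ill-typed combinations from the text:
// add(evens, evens);   // error: Set<number> is not assignable to number
// isElem(2, 3);        // error: 3 is not assignable to Set<number>
```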

So do you need to worry about the type-token distinction for propositional logic as a student learning it for the first time? If you are interested in the philosophical aspects of the subject, then you certainly do. If you are only interested in the purely mathematical relationships between the formulas of the logic, and less about the applications of the logic to philosophy, then you may be able to get by, as many texts do, using some common sense but otherwise not spending too much time distinguishing types from tokens.

I think that the author may be referring to an ontological concept of type-token. Perhaps you can find more at this link.

In computer languages like C or Haskell, you’re only allowed to perform operations that make sense for the type. So you could append a letter to a string of letters – “hello” + ’s’ = “hellos” – but unless you defined what that means, you wouldn’t be able to append ’s’ to a number, and indeed you might not expect + to carry the semantics of appending something.
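A minimal sketch of that restriction, in TypeScript rather than C or Haskell (the function name `append` is illustrative, not from any library):

```typescript
// append is declared only for a string and a single character (here a
// string of length 1); the type checker enforces the declaration.
function append(s: string, c: string): string {
  return s + c;
}

console.log(append("hello", "s")); // "hellos"

// append(5, "s"); // rejected: 5 is not assignable to parameter type string
```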

The model theory version of a type is essentially a duck typing system, like you find in Ruby – if it looks like a string, and you can append strings to it like a string, then it’s a string. To do this you need a way to build a type’s set of behaviors, which is implemented as a set of first-order formulas.
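TypeScript’s structural typing gives a compile-time flavor of the same idea: a value counts as belonging to a type as soon as it exhibits the declared behaviors, much as a model-theoretic type is the set of formulas its realizations satisfy. A sketch (the `Appendable` interface is illustrative):

```typescript
// Anything with an append method of this shape "is" Appendable —
// no explicit declaration needed, only the behavior.
interface Appendable {
  append(suffix: string): string;
}

const s = {
  value: "hello",
  append(suffix: string): string {
    return "hello" + suffix;
  },
};

function use(a: Appendable): string {
  return a.append("s");
}

console.log(use(s)); // s was never declared Appendable, but it fits the shape
```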

EDIT: I’ve thought of a programming metaphor for the type/token distinction, as well.

Object-oriented programming is based around defining classes – types, in the previous senses of the word – to use as templates for creating tokens of that type. For example, in games where many of the same enemy can exist at once, you could define a class for that enemy and instantiate it many times, creating many enemies all of the same type. The difference between the type and the tokens shows up in practice: one can have the type “string” and an actual string like “hello world”, and it is the type of “hello world”, not the token itself, that you check when making sure you don’t set a character’s position in space to “hello world”. The same goes for removing all tokens of one type of enemy from the game – which doesn’t remove the type itself from the game.
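The class/instance metaphor can be made concrete with a short sketch (the `Enemy` class is hypothetical):

```typescript
// The class Enemy is the type; each instance is a token of it.
class Enemy {
  constructor(public x: number, public y: number) {}
}

let enemies: Enemy[] = [new Enemy(0, 0), new Enemy(5, 3), new Enemy(9, 9)];
console.log(enemies.length); // three tokens of one type

enemies = []; // remove all tokens...
console.log(enemies.length);
console.log(new Enemy(1, 1) instanceof Enemy); // ...but the type remains usable
```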

Actually, the central logical place where the notion of type is present is the realm of higher-order logics, since these logics are usually developed as extensions of type theories (which need not themselves be logics). In this higher-order context, types are intuitively best understood as sorts of interpretable expressions that determine the kind of semantic objects assigned to expressions of those sorts.

Precise definitions of types in a general format are difficult to give, so I will try to convey some precision by outlining (the beginnings of) Church-style (extensional) higher-order logic, which extends Church’s simple theory of types. Let the set of types, $T$, be the smallest set containing some set of simple types (usually represented as $e$ and $t$) which is closed under the construction of functional types $\sigma \rightarrow \tau$ (a functional type $\sigma \rightarrow \tau$ is interpreted as the functions from things of type $\sigma$ to things of type $\tau$). We assume that the vocabulary contains, besides the logical constants $=$ and $\lambda$, a denumerably infinite list of variables of type $\tau$ and a list of constants of type $\tau$, for each $\tau \in T$. The syntax is given in Backus–Naur form (indices indicate types):
$$\alpha_\sigma ::= x_\sigma ~|~ c_\sigma ~|~ \beta_\tau = \gamma_\tau, ~\text{where}~ \sigma = t ~|~ \beta_{\eta \rightarrow \sigma}(\gamma_\eta) ~|~ \lambda x_\eta.\, \beta_\rho, ~\text{where}~ \sigma = \eta \rightarrow \rho$$

For some $D \not = \emptyset$ let $I_{e, D} := D$; $I_{t, D} := 2 = \lbrace 0, 1 \rbrace$; $I_{\sigma \rightarrow \tau, D} := I_{\tau, D}^{I_{\sigma, D}}$. Let a model be a pair $M = \langle D, a \rangle$ such that $a: Con_\sigma \rightarrow I_{\sigma, D}$ (where $Con_\sigma := \lbrace c_\sigma : \sigma \in T \rbrace$), and an assignment (in $M$) a function $\theta: Var_\sigma \rightarrow I_{\sigma, D}$ (where $Var_\sigma := \lbrace x_\sigma : \sigma \in T \rbrace$). Finally, we define a denotation function $\Delta_{M, \theta}$ that assigns to each $\alpha_\sigma$ a member of $I_{\sigma, D}$:
$$\begin{align}
\Delta_{M, \theta}(c) &:= a(c) \\
\Delta_{M, \theta}(x) &:= \theta(x) \\
\Delta_{M, \theta}(\beta = \gamma) &:= 1 ~\text{iff}~ \Delta_{M, \theta}(\beta) = \Delta_{M, \theta}(\gamma) \\
\Delta_{M, \theta}(\beta(\gamma)) &:= \Delta_{M, \theta}(\beta)(\Delta_{M, \theta}(\gamma)) \\
\Delta_{M, \theta}(\lambda x_\sigma.\, \beta_\tau) &:= f: I_{\sigma, D} \rightarrow I_{\tau, D} ~\text{such that}~ f(d) = \Delta_{M, \theta[d/x]}(\beta)
\end{align}$$
The usual truth functions and higher-order quantifiers can be introduced via Henkin-style definitions.
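To make the interpretation $I_{\sigma, D}$ concrete, here is a small sketch (in TypeScript, as an illustration only) that computes the cardinality $|I_{\sigma, D}|$ for a finite domain, directly from the clauses $|I_{e,D}| = |D|$, $|I_{t,D}| = 2$, and $|I_{\sigma \rightarrow \tau, D}| = |I_{\tau, D}|^{|I_{\sigma, D}|}$:

```typescript
// Types: the simple types e and t, closed under the arrow constructor.
type Ty = { kind: "e" } | { kind: "t" } | { kind: "arrow"; from: Ty; to: Ty };

const e: Ty = { kind: "e" };
const t: Ty = { kind: "t" };
const arrow = (from: Ty, to: Ty): Ty => ({ kind: "arrow", from, to });

// |I_{sigma, D}| for a domain D of the given finite size.
function size(ty: Ty, domainSize: number): number {
  switch (ty.kind) {
    case "e":
      return domainSize;
    case "t":
      return 2;
    case "arrow":
      return size(ty.to, domainSize) ** size(ty.from, domainSize);
  }
}

console.log(size(arrow(e, t), 3)); // 2^3 = 8 characteristic functions, i.e. "sets"
console.log(size(arrow(arrow(e, t), t), 3)); // 2^8 = 256 properties of such sets
```

The rapid growth at higher types is exactly why the functional type hierarchy outruns first-order expressive power.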