Combinatorial interpretation of Binomial Inversion

It is known that if $f_n = \sum\limits_{i=0}^{n} g_i \binom{n}{i}$ for all $0 \le n \le m$, then $g_n = \sum_{i=0}^{n} (-1)^{i+n} f_i \binom{n}{i}$ for $0 \le n \le m$. This sort of inversion is called binomial inversion, for obvious reasons.
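As a quick numerical sanity check of the statement, one can apply both sums to an arbitrary test sequence and confirm the inversion round-trips (a sketch; the particular `g` is just made-up data):

```python
from math import comb

# Binomial inversion round-trip: build f from g, then recover g from f.
m = 10
g = [i**3 + 1 for i in range(m + 1)]                      # arbitrary test data
f = [sum(g[i] * comb(n, i) for i in range(n + 1)) for n in range(m + 1)]
g_back = [sum((-1)**(i + n) * f[i] * comb(n, i) for i in range(n + 1))
          for n in range(m + 1)]
print(g_back == g)  # True
```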

Many elegant proofs exist (my favorite uses the exponential generating functions of $f_n$ and $g_n$), as do many applications (such as proving that if a polynomial $f$ of degree $n$ takes integer values on $0,1,\cdots,n$, then $f(i) \in \mathbb{Z}$ for all integers $i$).

What I’m interested in is the following:

  1. A nice inclusion-exclusion proof – similar to interpreting Möbius inversion as inclusion-exclusion.
  2. If $f_0 = g_0 = 0$ and $i \mid f_i$ for $i>1$, then binomial inversion gives $i \mid g_i$ (reason: $i\binom{n}{i} = n\binom{n-1}{i-1}$). Is there a nice combinatorial interpretation of this phenomenon? Nice applications?
  3. Are there any more famous/cool inversions (I know of Möbius inversion, binomial inversion, and the discrete derivative inversion $a_i \to a_{i+1}-a_i$)?
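The divisibility claim in item 2 can be tested numerically: take any $f$ with $f_0 = 0$ and $i \mid f_i$, invert, and check that $i \mid g_i$ (a sketch with arbitrary test data, not a proof):

```python
from math import comb

# Item 2: if f_0 = 0 and i | f_i, the inverted sequence satisfies i | g_i.
m = 12
f = [0] + [i * (i**2 + 7) for i in range(1, m + 1)]       # ensures i | f_i
g = [sum((-1)**(i + n) * f[i] * comb(n, i) for i in range(n + 1))
     for n in range(m + 1)]
print(all(g[i] % i == 0 for i in range(1, m + 1)))  # True
```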

Answers

These kinds of inverse relations are equivalent to orthogonal relations between sets of numbers.

Suppose you have two triangular sets of numbers $a_{n,k}$ and $b_{n,k}$, each defined for $k = 0, 1, \ldots, n$, such that $$\sum_{k=m}^n b_{n,k} a_{k,m} = \delta_{nm}.$$ Then $a_{n,k}$ and $b_{n,k}$ are orthogonal, and they have the inverse property you are asking for in (3); i.e., if $f_n = \sum_{k=0}^n a_{n,k} g_k$ then $g_n = \sum_{k=0}^n b_{n,k} f_k$, and vice versa.

Proof: $$\sum_{k=0}^n b_{n,k} f_k = \sum_{k=0}^n b_{n,k} \sum_{m=0}^k a_{k,m} g_m = \sum_{m=0}^n \left(\sum_{k=m}^n b_{n,k} a_{k,m}\right) g_m = g_n.$$

Thus binomial inversion follows from the “beautiful identity” $$\sum_{k=m}^n (-1)^{k+m} \binom{n}{k} \binom{k}{m} = \delta_{nm}.$$
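This identity is easy to check by brute force over a range of $n$ and $m$ (a finite sketch, of course, not a proof):

```python
from math import comb

# Check sum_k (-1)^(k+m) C(n,k) C(k,m) = delta_{nm} for all 0 <= m <= n <= N.
N = 12
ok = all(
    sum((-1)**(k + m) * comb(n, k) * comb(k, m) for k in range(m, n + 1))
    == (1 if n == m else 0)
    for n in range(N + 1) for m in range(n + 1)
)
print(ok)  # True
```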

Since the orthogonal relation and the inverse relation are equivalent, perhaps the proof of this identity given by Aryabhata or the proof by Yuval Filmus can be considered a combinatorial proof of the inverse relation you describe for binomial coefficients.

Other examples

The Lah numbers $L(n,k)$ satisfy $$\sum_{k=m}^n (-1)^{k+m} L(n,k) L(k,m) = \delta_{nm},$$ and so, like the binomial coefficients, are (up to sign) self-orthogonal and have the inverse relation
$$f_n = \sum_{k=0}^n L(n,k) g_k \Leftrightarrow g_n = \sum_{k=0}^n (-1)^{k+n} L(n,k) f_k.$$
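The Lah-number orthogonality can be verified the same way, using the closed form $L(n,k) = \binom{n-1}{k-1} \frac{n!}{k!}$ (the helper `lah` below is my own; a sketch, not a proof):

```python
from math import comb, factorial

def lah(n, k):
    # Unsigned Lah numbers: L(n, k) = C(n-1, k-1) * n! / k!, with L(0, 0) = 1.
    if n == k == 0:
        return 1
    if k == 0 or k > n:
        return 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

# Check the signed self-orthogonality relation for all 0 <= m <= n <= N.
N = 10
ok = all(
    sum((-1)**(k + m) * lah(n, k) * lah(k, m) for k in range(m, n + 1))
    == (1 if n == m else 0)
    for n in range(N + 1) for m in range(n + 1)
)
print(ok)  # True
```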

The two kinds of Stirling numbers, $\left[ n \atop k \right]$ and $\left\{ n \atop k \right\}$, are orthogonal, satisfying
$$\sum_{k=m}^n (-1)^{k+m} \left[ n \atop k \right] \left\{ k \atop m \right\} = \delta_{nm}$$
$$\sum_{k=m}^n (-1)^{k+m} \left\{ n \atop k \right\} \left[ k \atop m \right] = \delta_{nm}.$$
Thus they satisfy the inverse relation
$$f_n = \sum_{k=0}^n \left[ n \atop k \right] g_k \Leftrightarrow g_n = \sum_{k=0}^n (-1)^{k+n} \left\{ n \atop k \right\} f_k.$$
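The Stirling orthogonality can likewise be checked by building both triangles from their standard recurrences $\left[ n \atop k \right] = \left[ n-1 \atop k-1 \right] + (n-1)\left[ n-1 \atop k \right]$ and $\left\{ n \atop k \right\} = \left\{ n-1 \atop k-1 \right\} + k\left\{ n-1 \atop k \right\}$ (a finite sketch):

```python
# Build Stirling cycle numbers [n k] and subset numbers {n k} by their
# standard recurrences, then check the orthogonality relation.
N = 10
c = [[0] * (N + 1) for _ in range(N + 1)]   # unsigned first kind [n k]
S = [[0] * (N + 1) for _ in range(N + 1)]   # second kind {n k}
c[0][0] = S[0][0] = 1
for n in range(1, N + 1):
    for k in range(1, n + 1):
        c[n][k] = c[n - 1][k - 1] + (n - 1) * c[n - 1][k]
        S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]

ok = all(
    sum((-1)**(k + m) * c[n][k] * S[k][m] for k in range(m, n + 1))
    == (1 if n == m else 0)
    for n in range(N + 1) for m in range(n + 1)
)
print(ok)  # True
```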

John Riordan wrote a paper “Inverse Relations and Combinatorial Identities” (American Mathematical Monthly 71 (5), May 1964, pp. 485–498) and devoted two of the six chapters of his text Combinatorial Identities to these kinds of inverse relations. For example, he shows how inverse relations can be derived from Chebyshev and Legendre polynomials (both orthogonal families). See the article or the book for many more examples.

Additional comments and consequences

The proof using the orthogonal relation can also be applied with respect to the upper index to obtain inverse relations based on the upper index rather than the lower. Thus, for example, we also have (provided, of course, that the sums converge) $$f_n = \sum_{k=n}^\infty \binom{k}{n} g_k \Leftrightarrow g_n = \sum_{k=n}^\infty (-1)^{k+n} \binom{k}{n} f_k,$$
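For a numerical check of the upper-index version, one can take $g$ finitely supported so that the infinite sums are actually finite (the cutoff `M` below is my own choice; a sketch, not a proof):

```python
from math import comb

# Upper-index inversion with a finitely supported g: g_k = 0 for k > M,
# which forces f_k = 0 for k > M too, so truncating the sums is exact.
M = 8
g = [k**2 + 1 for k in range(M + 1)]
f = [sum(comb(k, n) * g[k] for k in range(n, M + 1)) for n in range(M + 1)]
g_back = [sum((-1)**(k + n) * comb(k, n) * f[k] for k in range(n, M + 1))
          for n in range(M + 1)]
print(g_back == g)  # True
```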
as well as the same kind of thing for the Lah numbers, Stirling numbers, and the other examples.

In addition, these orthogonal relations mean that matrices consisting of orthogonal numbers are inverses. Thus, for example, if $A$ and $B$ are $n \times n$ matrices such that $A_{ij} = \binom{i}{j}$ and $B_{ij} = (-1)^{i+j} \binom{i}{j}$ then the orthogonal relationship implies that $AB = I$. This, of course, means that $BA = I$ as well, and so every orthogonal relationship goes both ways; i.e., $$\sum_{k=m}^n b_{n,k} a_{k,m} = \delta_{nm} \Leftrightarrow \sum_{k=m}^n a_{n,k} b_{k,m} = \delta_{nm}.$$
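The matrix statement is also easy to verify with exact integer arithmetic (indices here run $0, \ldots, n-1$; a sketch):

```python
from math import comb

# The Pascal matrix and its signed counterpart are two-sided inverses.
n = 8
A = [[comb(i, j) for j in range(n)] for i in range(n)]
B = [[(-1)**(i + j) * comb(i, j) for j in range(n)] for i in range(n)]
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
print(AB == I)  # True
```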
For more on inverse matrices consisting of combinatorial numbers, see my answer to the question “Stirling numbers and inverse matrices.”

This identity, Möbius inversion, and the “fundamental theorem of discrete calculus” are all special cases of Möbius inversion on a poset.

  • In this identity the relevant poset is the poset of finite subsets of $\mathbb{N}$.
  • In classical Möbius inversion it’s the poset of positive integers under divisibility.
  • In discrete calculus it’s the poset $\mathbb{Z}_{\ge 0}$ under the usual order.

This beautiful theory was first described by Gian-Carlo Rota in On the Foundations of Combinatorial Theory I, and elaborated on in many other papers. If you like exponential generating functions, you will love On the Foundations of Combinatorial Theory VI. These papers were quite a revelation to me a few years ago and I hope you will enjoy them as well!

A modern reference is Stanley’s Enumerative Combinatorics Vol. I, Chapter 3 (incidentally I believe Stanley was a student of Rota’s). Here is the appropriate specialization of the general result.

Proposition: Let $g : \mathcal{P}_{\text{fin}}(\mathbb{N}) \to R$ be a function from the poset of finite subsets of $\mathbb{N}$ to a commutative ring $R$. Let

$$f(T) = \sum_{S \subseteq T} g(S).$$

Then

$$g(T) = \sum_{S \subseteq T} (-1)^{|S| - |T|} f(S)$$

where $|S|$ denotes the size of $S$. (The special case you state is for when $g$ only depends on $|S|$.)

Proof. This is actually a slightly more general form of inclusion-exclusion. It suffices to exchange the order of summation in

$$\sum_{S \subseteq T} (-1)^{|S| - |T|} \sum_{R \subseteq S} g(R) = \sum_{R \subseteq T} g(R) \sum_{R \subseteq S \subseteq T} (-1)^{|S| - |T|}$$

and observe that the resulting coefficient is equal to $(-1)^{|R| - |T|}(1 - 1)^{|T| - |R|}$, which is equal to $0$ unless $R = T$, in which case it is equal to $1$. Of course I can mumble my way through the standard inclusion-exclusion proof as well: start with $f(T)$, which has too many terms, so subtract the terms $f(T \setminus \{ t \})$, but we’ve subtracted the other terms too many times, so add back the terms $f(T \setminus \{ t_1, t_2 \})$…
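The proposition can be checked exhaustively on a small ground set, with arbitrary test values for $g$ (a sketch; the ground set and values are my own choices):

```python
from itertools import combinations

# Möbius inversion on the poset of subsets of {0, 1, 2, 3}:
# f(T) = sum over S ⊆ T of g(S), then recover g by inclusion-exclusion.
ground = (0, 1, 2, 3)
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(ground, r)]
g = {S: 3 * sum(S) + len(S)**2 for S in subsets}          # arbitrary test values
f = {T: sum(g[S] for S in subsets if S <= T) for T in subsets}
g_back = {T: sum((-1)**(len(T) - len(S)) * f[S] for S in subsets if S <= T)
          for T in subsets}
print(g_back == g)  # True
```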

A fundamental observation about Möbius functions in general is that they are multiplicative under product of posets. The posets I’ve named above are all products of chains, so one can quickly compute their Möbius functions simply by computing the Möbius function of a chain.
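For instance, since the divisor poset of $n$ is a product of chains (one of length $e_p$ per prime power $p^{e_p}$), the classical Möbius function falls out of the chain Möbius function by multiplicativity. A small sketch (helper names are mine):

```python
def mu_chain(a, b):
    # Möbius function of a chain: 1 if a == b, -1 if b == a + 1, else 0.
    return 1 if a == b else (-1 if b == a + 1 else 0)

def factor(n):
    # Exponent vector of n, by trial division.
    exps, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)
    return exps

def mobius(n):
    # Classical mu(n) as a product of chain Möbius values mu_chain(0, e_p),
    # using multiplicativity over the product of chains.
    result = 1
    for e in factor(n):
        result *= mu_chain(0, e)
    return result

print([mobius(n) for n in range(1, 13)])
# [1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0]
```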