Convergence of Integral Implies Uniform Convergence of Equicontinuous Family

Let $\{f_n\}$ be an equicontinuous, pointwise bounded family of functions on $[0,1]$ such that $\int_a^b f_n(x)\,dx \rightarrow 0$ as $n\rightarrow \infty$, for every $0\leq a \leq b \leq 1$.

Show $f_n$ converges to $0$ uniformly.

For this question I know that there exists a uniformly convergent subsequence $f_{n_k}$ by the Arzelà–Ascoli theorem. For this uniformly convergent subsequence I know $$\lim_{k\rightarrow \infty} \int_a^b f_{n_k}(x)\,dx = \int_a^b \lim_{k\rightarrow \infty} f_{n_k}(x)\,dx$$
Since the left side is zero, suppose $\lim_{k\rightarrow \infty} f_{n_k}(x) \neq 0$ for some $x\in [0,1]$. Then continuity of the (uniform) limit implies it is nonzero on some interval, which makes the integral over that interval nonzero, a contradiction. Thus $f_{n_k}$ must converge uniformly to $0$. I don’t see how to get from this to $f_n$ converging uniformly to $0$, though.
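To spell out the step that a continuous function nonzero at a point has nonzero integral over a small interval, here is the standard estimate, sketched:

```latex
% Suppose the continuous uniform limit f satisfies f(x_0) = c > 0
% (the case c < 0 is symmetric). By continuity there is \delta > 0 with
% f(x) > c/2 for all x in (x_0 - \delta, x_0 + \delta) \cap [0,1]; hence
% for any [a,b] contained in that neighbourhood,
\[
  \int_a^b f(x)\,dx \;\geq\; \frac{c}{2}\,(b - a) \;>\; 0,
\]
% contradicting \int_a^b f = 0. So the limit vanishes identically on [0,1].
```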


You essentially proved: Every subsequence of $(f_n)$ has a (sub-)subsequence which converges uniformly to zero (why?).

The first bullet point in the link I provided in a comment asserts that then the sequence $(f_n)$ itself must converge to zero.

Here’s why: suppose $f_n$ does not converge uniformly to zero. Then there are an $\varepsilon \gt 0$ and a subsequence $f_{n_j}$ with sup-norm $\|f_{n_j}\|_{\infty} \geq \varepsilon$ for all $j$. This subsequence in turn has a uniformly convergent subsequence by Arzelà–Ascoli (it is still equicontinuous and pointwise bounded). Your argument shows that its limit must be zero, hence that subsubsequence eventually has sup-norm $\lt \varepsilon$, a contradiction.


Here’s the abstract thing:

Let $x_n$ be a sequence in a metric space $(X,d)$. Suppose that there is $x \in X$ such that every subsequence $(x_{n_j})$ has a subsubsequence $(x_{n_{j_k}})$ converging to $x$. Then the sequence itself converges to $x$.

Edit: As leo pointed out in a comment below the converse is also true: a convergent sequence obviously has the property that every subsequence has a convergent subsubsequence.

The proof is trivial but unavoidably uses ugly notation: Suppose $x_n$ does not converge to $x$. Then there is $\varepsilon \gt 0$ and a subsequence $(x_{n_j})$ such that $d(x,x_{n_j}) \geq \varepsilon$ for all $j$. By assumption there is a subsubsequence $x_{n_{j_k}}$ converging to $x$. But this means that $d(x,x_{n_{j_k}}) \lt \varepsilon$ for $k$ large enough. Impossible!
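A numerical illustration of the contrapositive, using the sequence $x_n = (-1)^n$ in $(\mathbb{R}, |\cdot|)$ (the variable names below are ad hoc, not from the original): the sequence diverges, and correspondingly no single candidate limit $x$ works for every subsequence, since the even- and odd-index subsequences converge to different values.

```python
# x_n = (-1)^n for n = 0, 1, 2, ...
x = [(-1) ** n for n in range(1000)]

even_sub = x[0::2]  # subsequence x_0, x_2, x_4, ... -- constantly 1
odd_sub = x[1::2]   # subsequence x_1, x_3, x_5, ... -- constantly -1

# Every subsubsequence of even_sub converges to 1, of odd_sub to -1,
# so there is no single x satisfying the lemma's hypothesis.
assert all(v == 1 for v in even_sub)
assert all(v == -1 for v in odd_sub)

# Consistent with the lemma, x_n diverges: consecutive terms stay
# at distance 2, so no eps < 2 tail estimate can ever hold.
assert all(abs(x[n] - x[n + 1]) == 2 for n in range(len(x) - 1))
```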


The way this is usually applied in “concrete situations” is to show

  1. If a subsequence converges then it must converge to a specific $x$. This involves an analysis of the specific situation—this is usually the harder part and that’s what you did.
  2. Appeal to compactness to find a convergent subsubsequence of every subsequence—that’s the trivial part I contributed.
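As a side check (not part of the proof) that equicontinuity cannot be dropped from the hypotheses, consider $f_n(x) = x^n$ on $[0,1]$: here $\int_a^b x^n\,dx = (b^{n+1}-a^{n+1})/(n+1) \to 0$ for all $0 \leq a \leq b \leq 1$, yet $\|f_n\|_\infty = f_n(1) = 1$ for every $n$, so $f_n \not\to 0$ uniformly. The family fails to be equicontinuous at $x = 1$. The helper names below are illustrative only:

```python
def integral_fn(n, a, b):
    """Exact integral of x^n over [a, b]: (b^(n+1) - a^(n+1)) / (n+1)."""
    return (b ** (n + 1) - a ** (n + 1)) / (n + 1)

def sup_norm_fn(n, grid_size=10_000):
    """Sup of |x^n| over a grid on [0, 1]; the max sits at x = 1."""
    return max(abs((k / grid_size) ** n) for k in range(grid_size + 1))

# The integral hypothesis holds: integrals over [0, 1] shrink like 1/(n+1) ...
assert integral_fn(99, 0.0, 1.0) == 1 / 100
# ... but the sup-norm stays equal to 1 for every n: no uniform convergence.
assert sup_norm_fn(50) == 1.0
```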