# Proof of a simple property of real, constant functions.

I recently came across the following theorem:

$$\text{If } f: \mathbb{R} \rightarrow \mathbb{R} \text{ satisfies } |f(x_1) - f(x_2)| \leq (x_1-x_2)^2 \text{ for all } x_1, x_2 \in \mathbb{R}, \text{ then } f \text{ is constant.}\ \mathbf{(1)}$$

I’ve been trying for some time, but the proof of $\mathbf{(1)}$ remains as elusive as ever. I’ve made two major attempts; I’ll outline the second here. I’d be glad to detail the first as well if requested, but I won’t now since I think it’s mostly wrong. For the second, this is what I have so far:

If $|f(x_1) - f(x_2)| \leq (x_1-x_2)^2$ for all $x_1, x_2$, then $f$ is continuous. This is so because $f$ is defined for all reals, $f$ has a finite limit at every $x \in \mathbb{R}$, and each of those limits equals $f(x)$. Assume $f$ were not constant; then there exist $x_1 \neq x_2$ with $|f(x_1) - f(x_2)| > 0$. Since $f$ is continuous, there are infinitely many such pairs $x_1$ and $x_2$. From all such $x_1$ and $x_2$ we may construct a set $S$ consisting of the values $f(x_1)$ and $f(x_2)$ (not as pairs); since $f$ is defined for all $x$, $S$ is “absolutely” bounded and as such has a least upper bound and a greatest lower bound, which we will denote as $\alpha_1 = f(a_1)$ and $\alpha_2 = f(a_2)$ respectively. To show $f$ is constant, it will suffice to show that $\alpha_1 = \alpha_2$.

Does anyone see how the proof could be completed? Or even, do you think there might be a better approach? Thank you all in advance.

#### Answers

If we divide through by $|x_1 - x_2|$ (assuming $x_1 \neq x_2$), we get

$$\left|\frac{f(x_1) - f(x_2)}{x_1 - x_2}\right| \leq |x_1-x_2|,$$

that is, the slope of the secant line between any two points is at most the distance between them. Fixing $x_1 = x$, taking $x_2 = x+h$ and letting $h$ approach zero shows that $f$ is differentiable at $x$ and $f'(x) = 0$. That is, $f' \equiv 0$, so by the Mean Value Theorem $f$ is constant.
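Spelling out the limit step: for $h \neq 0$, the inequality above with $x_1 = x$ and $x_2 = x + h$ reads

$$0 \leq \left|\frac{f(x+h) - f(x)}{h}\right| \leq |h|,$$

so by the squeeze theorem the difference quotient tends to $0$ as $h \to 0$, which is exactly the statement that $f'(x)$ exists and equals $0$.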

The proof goes through with the right hand side of your inequality replaced by $o(|x_1-x_2|)$, so in particular if there is $\alpha > 1$ and $C > 0$ such that for all $x_1,x_2 \in \mathbb{R}$, $|f(x_1) - f(x_2)| \leq C |x_1 - x_2|^{\alpha}$. If instead we take $\alpha = 1$ we get a Lipschitz continuous function. If we take $\alpha \in (0,1)$ we get a Hölder continuous function. Such functions need not be constant, but are still very nice.
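A standard example showing that $\alpha < 1$ really is different: for all $x_1, x_2 \in \mathbb{R}$,

$$\bigl|\sqrt{|x_1|} - \sqrt{|x_2|}\bigr| \leq \sqrt{|x_1 - x_2|},$$

so $f(x) = \sqrt{|x|}$ is Hölder continuous with exponent $\alpha = \tfrac{1}{2}$ (with $C = 1$), yet it is certainly not constant.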

And now, an anecdote: last summer my department held a “mock AMS conference” in which all summer-supported graduate students presented short talks, the more senior of them tending to talk about their thesis work in progress. One student gave an exceptionally clear and audience-friendly talk about her work on convex subsets satisfying certain smoothness conditions on the boundary. She mentioned the prospect of proving a result for Hölder continuous boundary for a certain class of exponents $\alpha \leq 1$. Casting about for a question, I decided to ask about the case of $\alpha > 1$…at which point her thesis adviser, who was sitting next to me in the audience, very politely explained the facts of life about Hölder continuous functions with exponent $\alpha > 1$. Oops!

I’ll show that $f(0)=f(1)$ and you’ll see the trick. For all $n\in \mathbb{N}$, $n > 0$:

\begin{align}
|f(1)-f(0)| &= \left|f(1)-f\left(\tfrac{n-1}{n}\right)+f\left(\tfrac{n-1}{n}\right)-f\left(\tfrac{n-2}{n}\right) + \dotsb + f\left(\tfrac{1}{n}\right) - f(0)\right| \newline
&\leq \frac{1}{n^2} + \frac{1}{n^2} + \dotsc + \frac{1}{n^2} = \frac{1}{n}
\end{align}

(There are $n$ terms in that sum.) Since this holds for all positive $n$ it follows that $f(1) = f(0)$.
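The same trick works for an arbitrary pair of points, which is presumably the generalization the answer alludes to: subdividing $[a,b]$ into $n$ equal pieces and telescoping gives

$$|f(b) - f(a)| \leq \sum_{k=1}^{n} \left(\frac{b-a}{n}\right)^2 = \frac{(b-a)^2}{n},$$

and letting $n \to \infty$ yields $f(a) = f(b)$, so $f$ is constant.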

For me, the intuition behind this is: suppose you could find a single pair of points $a \neq b$ with $f(a) \neq f(b)$. You have
\begin{align}
\left|f(a) - f(b)\right| \le (a-b)^2.
\end{align}

Given the midpoint $m$ between $a$ and $b$, you also have
\begin{align}
\max\lbrace\left|f(a) - f(m)\right|, \left|f(m) - f(b)\right|\rbrace \le (a-m)^2.
\end{align}

By the triangle inequality, the left side is at least half the left side of the original inequality, while the right side is exactly a quarter of the original right side. Repeatedly subdividing, the inequality must eventually fail.
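Making the subdivision quantitative (still assuming $f(a) \neq f(b)$): after $k$ halvings there is a subinterval $[a_k, b_k]$ of length $(b-a)/2^k$ satisfying, by repeated use of the triangle inequality on the left and the hypothesis on the right,

$$\frac{|f(a) - f(b)|}{2^k} \leq |f(a_k) - f(b_k)| \leq (b_k - a_k)^2 = \frac{(b-a)^2}{4^k},$$

so $|f(a) - f(b)| \leq (b-a)^2/2^k$ for every $k$. Letting $k \to \infty$ forces $f(a) = f(b)$, a contradiction.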