If $f'(c) \ge 0$ for all $c \in (a,b)$, then $f$ is increasing on $[a,b]$: proof of this without the Mean Value Theorem

Let $f: [a,b] \to \mathbb R$ be differentiable on $(a,b)$. It is known that if $f'(c) \ge 0$ for all $c \in (a,b)$, then $f$ is increasing on $[a,b]$, and this can be proved using Lagrange's Mean Value Theorem. Is there any other proof of this?


Assume that there are points $c$, $d$ with $a<c<d<b$ such that
$${f(d)-f(c)\over d-c}=-p<0\ .$$
I claim that there is a point $\xi\in[c,d]$ with $f'(\xi)\leq-p$.

Proof. Using binary division we can find an increasing sequence $(c_n)_{n\geq0}$ and a decreasing sequence $(d_n)_{n\geq0}$ with $$c\leq c_n<d_n\leq d,\qquad d_n-c_n={d-c\over2^n}\qquad(n\geq0)\ ,$$
such that
$${f(d_n)-f(c_n)\over d_n-c_n}\leq -p\qquad(n\geq0)\ .\tag{1}$$
The $c_n$ and the $d_n$ have a common limit point $\xi\in[c,d]$. If
$$\xi=c_n\qquad(n\geq n_0)$$ one immediately concludes from $(1)$ that $f'(\xi)\leq-p$. Otherwise we may assume $c_n<\xi<d_n$ for all $n\geq0$. Rewriting the left-hand side of $(1)$ we can then say that
$${\xi-c_n\over d_n-c_n}{f(\xi)-f(c_n)\over \xi-c_n}+{d_n-\xi\over d_n-c_n}{f(d_n)-f(\xi)\over d_n-\xi}\leq -p\qquad(n\geq0)\ .\tag{2}$$
Now let $\epsilon>0$ be given. For sufficiently large $n$ both difference quotients of $f$ in $(2)$ are $\geq f'(\xi)-\epsilon$; since the two weights in $(2)$ are nonnegative and sum to $1$, this implies
$$f'(\xi)-\epsilon\leq -p\ .$$
Since this is true for all $\epsilon>0$ the claim follows.
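
For intuition only, here is a small numerical sketch of the binary-division construction above (the example function, the interval, and the helper name `bisect_bad_slope` are made up for illustration; the proof itself involves no computation): starting from an interval whose difference quotient is $-p<0$, each halving keeps a half whose difference quotient is still $\leq -p$, and the nested intervals squeeze down on a point $\xi$ with $f'(\xi)\leq-p$.

```python
# Hypothetical illustration of the binary-division argument; not part of the proof.

def bisect_bad_slope(f, c, d, steps=40):
    """Repeatedly halve [c, d], keeping a half whose difference quotient stays <= -p;
    the shrinking intervals close in on a point xi with f'(xi) <= -p."""
    p = -(f(d) - f(c)) / (d - c)          # p > 0 since the quotient is negative
    for _ in range(steps):
        m = (c + d) / 2
        # the quotient over [c, d] is the average of the quotients over the two halves,
        # so if the left half fails the bound, the right half must satisfy it
        if (f(m) - f(c)) / (m - c) <= -p:
            d = m
        else:
            c = m
    return c                               # approximation of the limit point xi

# Made-up example: f(x) = x**3 - x on [0, 0.9] has difference quotient -0.19 (= -p).
f = lambda x: x**3 - x
xi = bisect_bad_slope(f, 0.0, 0.9)
print(xi, 3 * xi**2 - 1)                   # f'(xi) is indeed <= -p
```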

To anyone reading this answer: it is somewhat flawed. The claim marked with $*$ below, that $f'(x) > 0$ alone makes $f$ increasing on the whole interval $I_x$, does not follow from the definition of the derivative (it only gives $f(y) < f(x)$ for $y < x$ and $f(y) > f(x)$ for $y > x$ near $x$).

If I may assume $f'(c)>0$:

For every $x \in [a,b]$ we have an interval $I_x$ on which, by the definition of the limit (the derivative), $f$ is increasing*. Then $\bigcup_{x \in [a,b]} I_x=[a,b]$, and since $[a,b]$ is compact we can find a finite subcover $\bigcup_{i=1,\dots,n} I_{x_i}=[a,b]$. Now $f$ is increasing on each interval $I_{x_i}$, and for any two points $p<q$ with $p,q \in [a,b]$ we can find intermediate points $r_j$ ($r_1=p,\dots,r_m=q$) such that for every $j$ both $r_j$ and $r_{j+1}$ lie in some single $I_{x_i}$ (a small sketch of this chaining step follows the footnote below). Then $f(r_j)<f(r_{j+1})$ for every $j$, and hence $f(p)<f(q)$.

*Suppose $f'(x)=c>0$. Then $\exists \delta$ such that $\frac{f(y)-f(x)}{y-x}>\frac{c}{2}>0$ for $|y-x|<\delta$. Define $I_x$ to be the interval $(x-\frac{\delta}{2},x+\frac{\delta}{2})$. Then $f$ is strictly increasing on $I_x$.
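
Here is a small sketch of the chaining of intermediate points $r_j$ mentioned above (the greedy strategy, the function name `chain_points`, and the interval data are all made up for illustration; this step is independent of the flagged issue with the monotonicity claim):

```python
# Hypothetical sketch of the chaining step; the cover data below is invented.

def chain_points(intervals, p, q):
    """Walk from p to q through a finite family of open intervals covering [p, q],
    producing points p = r_1 < ... < r_m = q with each consecutive pair contained
    in a single interval of the family (greedy: always stay inside the interval
    reaching farthest to the right)."""
    points, r = [p], p
    while r < q:
        containing = [(a, b) for (a, b) in intervals if a < r < b]
        if not containing:
            raise ValueError("the intervals do not cover [p, q]")
        a, b = max(containing, key=lambda ab: ab[1])
        r = q if q < b else (r + b) / 2    # jump to q if possible, else creep toward b
        points.append(r)
    return points

# Example cover of [0, 1] by overlapping open intervals (made-up data).
print(chain_points([(-0.1, 0.4), (0.3, 0.7), (0.6, 1.1)], 0.0, 1.0))
```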

Method 2, with $f'(c) \geq 0$. Every assumption used here can be proven without use of the MVT.

Simply note that $f(x)=f(a)+\int_a^x f'(t)\,dt$ (assuming $f'$ is Riemann integrable, e.g. continuous). For $y>x$, $f(y)-f(x)=\int_x^y f'(t)\,dt$. Since $f'(t) \geq 0$ for $t \in [x,y]$, we have $f(y)-f(x) \geq 0$.
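
As a quick sanity check (the example function and the crude Riemann sum below are made up; any integration routine would do), one can verify numerically that $\int_x^y f'(t)\,dt$ reproduces $f(y)-f(x)$ and is nonnegative when $f' \geq 0$:

```python
# Made-up example: f' >= 0 everywhere, so f(y) - f(x) = integral of f' over [x, y] >= 0.

def riemann_sum(g, x, y, n=100_000):
    """Left-endpoint Riemann sum of g over [x, y]."""
    h = (y - x) / n
    return sum(g(x + i * h) for i in range(n)) * h

f  = lambda t: t + (t - 1) ** 3 / 3     # f'(t) = 1 + (t - 1)**2 >= 1 > 0
fp = lambda t: 1 + (t - 1) ** 2

x, y = 0.25, 2.0
print(f(y) - f(x))                      # ~2.224
print(riemann_sum(fp, x, y))            # ~2.224 as well, and nonnegative as expected
```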

Well first of all this is a very good question (+1 to OP for the same). Without using MVT the proof is really tricky.

First I prove the result under the stronger condition $f'(x) > 0$. Let's assume this to be the case and prove that $f(x)$ is strictly increasing.

There are two proofs possible as far as I know:

1) Proof using Rolle’s Theorem: While MVT is not allowed here I would like to use Rolle’s in a very different way (so as not to repeat a demonstration of something equivalent to MVT). Let $c < d$ be two points in $[a, b]$. We will prove that $f(c) < f(d)$. Clearly we can’t have $f(c) = f(d)$ as that would lead to vanishing of derivative $f'(x)$ somewhere in $(c, d)$ (Rolle’s theorem).

I will now show that we can't have $f(c) > f(d)$. Suppose on the contrary that $f(c) > f(d)$. Then we can see that $f'(d) > 0$ implies that there is a point $c < e < d$ such that $f(e) < f(d)$, so that $f(e) < f(d) < f(c)$, and hence by the intermediate value property there is a point $x_{0}$ between $c$ and $e$ such that $f(x_{0}) = f(d)$. This would again lead to the vanishing of the derivative $f'(x)$ somewhere in $(x_{0}, d)$ (Rolle's theorem). Hence it follows that we can't have $f(c) > f(d)$ and therefore we have $f(c) < f(d)$.

2) Proof Using Dedekind's Theorem: This is a bit tricky. If $f'(x) > 0$ it means that there is a neighborhood $I_{x}$ of $x$ such that if $y \in I_{x}, y < x$ then $f(y) < f(x)$, and if $y \in I_{x}, y > x$ then $f(y) > f(x)$. Let's describe this behavior by saying that $f$ is strictly increasing at $x$.

Now $f'(x) > 0$ for all $x \in [a, b]$ means that $f$ is strictly increasing at all points of $[a, b]$. Let $c < d$ be two points in $[a, b]$. We show that $f(c) < f(d)$. We divide all points $x$ of $[c, b]$ into sets $L$ and $R$ in the following manner. A point $x$ lies in $L$ if for all points $y$ with $c < y \leq x$ we have $f(y) > f(c)$. Otherwise $x$ lies in $R$. Clearly $L$ is non-empty, because $f'(c) > 0$ implies the existence of points to the right of $c$ where $f$ takes values greater than $f(c)$. If $R$ is also non-empty then by Dedekind's theorem there is a number $\alpha$ such that all numbers less than $\alpha$ lie in $L$ and all those greater than $\alpha$ lie in $R$. Note that $\alpha > c$; we will show that $\alpha = b$ and that $b \in L$, so that $R$ is in fact empty.

Suppose that $\alpha < b$. If $f(\alpha) > f(c)$ then $\alpha \in L$, and since $f$ is strictly increasing at $\alpha$ we can find points to the right of $\alpha$ at which $f(x) > f(\alpha) > f(c)$, so that points just to the right of $\alpha$ also belong to $L$. This contradicts the conclusion of Dedekind's theorem mentioned in the previous paragraph. On the other hand, if $f(\alpha) \leq f(c)$ then, again because $f$ is strictly increasing at $\alpha$, there are points to the left of $\alpha$ at which $f(x) < f(\alpha) \leq f(c)$, and this would again violate the conclusion that numbers less than $\alpha$ lie in $L$. Hence it follows that we must have $\alpha = b$. Note further that $b$ must lie in $L$. For if $b \in R$ then every number less than $b$ lies in $L$, which means that $f(x) > f(c)$ at all these points; so the only way $b$ can lie in $R$ is that $f(b) \leq f(c)$. But then, since $f$ is strictly increasing at $b$, there are points just to the left of $b$ at which $f(x) < f(b) \leq f(c)$, a contradiction.

We have thus shown that all points of $[c, b]$ lie in $L$, and by the definition of $L$ we must have $f(c) < f(x)$ for $x \in (c, b]$. Since $d \in (c, b]$, it follows that $f(c) < f(d)$.

The proof for the case when $f'(x) \geq 0$ is completed by the technique of dealing with $f(x) + \epsilon\cdot x$ and letting $\epsilon \to 0$ (mentioned by orangeskid).

Note: Both the proofs are taken from my beloved book “A Course of Pure Mathematics” by G. H. Hardy. Also I had assumed differentiability at the end points of the interval. This is not really necessary and can be taken care of by using continuity of $f$ at the end points. We can assume that the points $c, d$ in the proofs above belong to $(a, b)$ instead of $[a, b]$. Suppose that $a < x < b$ and take $y, z$ such that $a < y < z < x < b$. Then $f(y) < f(z) < f(x)$. Taking the limit as $y \to a^{+}$ we get $f(a) \leq f(z) < f(x)$, so that $f(a) < f(x)$, and similarly we can show that $f(x) < f(b)$.

Following @Shakespeare: Method $1$.

For any $\epsilon >0$ the function
$$f(x) + \epsilon\cdot x$$
is strictly increasing. Take $\epsilon \to 0$.
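
Spelling out that limit step (writing $g_\epsilon(x) = f(x) + \epsilon x$ for the perturbed function): since $g_\epsilon'(x) = f'(x) + \epsilon \geq \epsilon > 0$, the strict case applies to $g_\epsilon$, so for $x < y$ in $[a,b]$
$$f(y) + \epsilon y = g_\epsilon(y) > g_\epsilon(x) = f(x) + \epsilon x,$$
that is, $f(y) - f(x) > -\epsilon\,(y - x)$. Letting $\epsilon \to 0$ gives $f(y) \geq f(x)$, i.e. $f$ is increasing (not necessarily strictly).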

Let’s prove that $f$ is strictly increasing under the stronger hypothesis $f’>0$.

Consider the set
$$\{x \ | \ f \ \text{strictly increasing on}\ [a,x]\}$$

It contains $a$ so it’s nonvoid. Let $x^*$ be its supremum. Then $f$ is strictly increasing on $[a,x^*)$. Now since $f'(x^*)>0$ there exists $\delta > 0$ so that
$$x^* - \delta < x_1 \le x^* \le x_2 <x^* + \delta \ \text{and}\ x_1 < x_2 \Rightarrow f(x_1) < f(x_2)$$

We conclude that $f$ is strictly increasing on $[a,x^*]$ and, if $x^* < b$, that $f$ is strictly increasing on $[a,x^*+ \delta)$, which would contradict the definition of $x^*$ as the supremum. Therefore $x^*=b$ and $f$ is strictly increasing on $[a,b]$.