Update: I ‘repaired’ this method, but it changed a lot and I have some different questions, so I posted it separately here.
As training for the olympiad, I have to solve a lot of inequalities. Recently, I found a very general method to solve inequalities. I think it’s too good to be true, but if it is true, I think an implementation of the algorithm in a computer program could be used to very quickly prove all kinds of inequalities.
First, I’ll explain the method. Then I’ll ask my questions. At the very bottom, I proved a simple inequality using my method as sort of a demonstration.
Say we have to prove a certain inequality that looks like:
$$f(a,b,c\ldots)\ge g(a,b,c\ldots)$$
for all $a,b,c\ldots\in \Bbb{R}$. Here $a,b,c\ldots$ are simply the independent variables. There might be just one; there might be $21$. It doesn’t matter.
First, we rearrange:
$$f(a,b,c\ldots)-g(a,b,c\ldots)\ge 0$$
Now, in order to prove our inequality, we have to prove that the LHS has a minimum value and that that minimum value is greater than or equal to $0$. Say that the inequality is true. Let $a_0,b_0,c_0\ldots$ be the values of $a,b,c\ldots$ for which the LHS is minimal. Now, we must have:
$$\frac{\partial(f-g)}{\partial a}(a_0,b_0,c_0\ldots)=0$$
because otherwise, we could increase or decrease $a$ and get a lower value for the LHS, contradicting our definition of $a_0$. In fact, we have:
\begin{align*}
\frac{\partial(f-g)}{\partial a}(a_0,b_0,c_0\ldots) &= 0\\
\frac{\partial(f-g)}{\partial b}(a_0,b_0,c_0\ldots) &= 0\\
\frac{\partial(f-g)}{\partial c}(a_0,b_0,c_0\ldots) &= 0\\
&\vdots
\end{align*}
So we get as many equations as variables. Now we have a nice system of equations to work with. There are three options.
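The procedure above could be automated roughly as follows. This is only a sketch, assuming SymPy is available; the function `F` and its variables are illustrative placeholders, not part of the original question.

```python
# Sketch of the method: form F = f - g, set every partial derivative to
# zero, and solve the resulting system. SymPy is an assumed dependency.
import sympy as sp

def critical_points(F, variables):
    """Solve the system in which each partial derivative of F vanishes."""
    gradient = [sp.diff(F, v) for v in variables]
    return sp.solve(gradient, variables, dict=True)

a, b, c = sp.symbols('a b c', real=True)
# A toy F = f - g with an obvious minimum, purely for illustration:
F = (a - 1)**2 + (b - 2)**2 + (c - 3)**2
points = critical_points(F, [a, b, c])
print(points)  # [{a: 1, b: 2, c: 3}]
```

The candidate points returned still have to be classified and compared with the behaviour at the boundary or at infinity, as the answers below explain.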
I have two questions.
Prove that for all $a,b,c\in\mathbb{R}$, we have:
$$ab+bc+ca+a-c\le 1+\frac13(a+b+c)^2$$
Proof: First, rearrange:
$$1+\frac13(a+b+c)^2-ab-bc-ca-a+c\ge 0$$
Now, take the derivatives with respect to $a$, $b$ and $c$ and set them equal to $0$:
\begin{align*}
\frac23(a+b+c)-b-c-1&=0\\
\frac23(a+b+c)-a-c&=0\\
\frac23(a+b+c)-a-b+1&=0
\end{align*}
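If SymPy is available (an assumption, not part of the proof), these partial derivatives can be generated and the system solved mechanically; $a$ remains a free variable:

```python
# Check the partial-derivative system for this specific inequality.
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
F = 1 + sp.Rational(1, 3)*(a + b + c)**2 - a*b - b*c - c*a - a + c
# The three equations, generated rather than typed by hand:
equations = [sp.diff(F, v) for v in (a, b, c)]
# The system is underdetermined (a stays free), so solve for b and c:
solution = sp.solve(equations, [b, c], dict=True)
print(solution)  # [{b: a - 1, c: a - 2}]
```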
Some algebra gives us a nicer system:
\begin{align*}
b+c+3 &= 2a\\
a+c &= 2b\\
a+b-3 &= 2c
\end{align*}
From the second equation, $c = 2b-a$. Plugging it into the first equation gives:
\begin{align*}
b + 2b - a + 3 &= 2a\\
3b + 3 &= 3a\\
b &= a-1
\end{align*}
Plugging that into the third equation gives:
\begin{align*}
a + (a-1) - 3 &= 2c\\
2a - 4 &= 2c\\
a - 2 &= c
\end{align*}
And that’s the solution to our system: $b=a-1$, $c=a-2$ (with $a$ free). Plugging this into the original inequality gives:
\begin{align*}
ab+bc+ca+a-c &\le 1+\frac13(a+b+c)^2\\
a(a-1) + (a-1)(a-2) + (a-2)a + a - (a-2) &\le 1+ \frac13(3a-3)^2\\
a^2 - a + a^2 - 3a + 2 + a^2 - 2a + 2 &\le 1+\frac13(9a^2-18a+9)\\
3a^2 - 6a + 4 &\le 1 + 3a^2 - 6a + 3\\
3a^2 - 6a + 4 &\le 3a^2 - 6a + 4
\end{align*}
This always holds (indeed with equality), and therefore the original inequality always holds. Q.E.D.
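The final substitution can likewise be checked symbolically. This is a sketch assuming SymPy; it substitutes $b=a-1$, $c=a-2$ and confirms that the two sides coincide along that line:

```python
# Verify that the two sides of the inequality agree at the critical points.
import sympy as sp

a = sp.symbols('a', real=True)
b, c = a - 1, a - 2            # the solution of the system above
lhs = a*b + b*c + c*a + a - c
rhs = 1 + sp.Rational(1, 3)*(a + b + c)**2
print(sp.expand(lhs))          # 3*a**2 - 6*a + 4
print(sp.expand(rhs))          # 3*a**2 - 6*a + 4
print(sp.simplify(rhs - lhs))  # 0: equality holds along the whole line
```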
Actually, this is one approach to proving inequalities, and it is sometimes used. But there are some caveats that need attention:

1. It is not always the case that these multivariable functions are differentiable at all the points you are interested in. (And in some cases the function is differentiable a.e., but is so bizarre that its derivative does not carry much information about the function’s variation; this is rare, though, and usually arises in theory rather than in practical use cases.)
2. Even if 1 holds, it is sometimes not easy to compute the derivative: it can be tricky to evaluate at certain points, the derivative can be very involved when the function is a nested combination of elementary operations/functions, or it may have no closed form if the function is not elementary. (And if the derivative has a complicated form, it is also difficult to solve the equations that set the derivatives to zero.)
3. Even if 2 holds, there is no guarantee that a point where all first-order partial derivatives vanish is a local minimum/maximum; you might just get a saddle point. So second-order derivatives need to be checked as well (and notice there is no guarantee that second-order derivatives exist).
4. Even if 3 holds, you still cannot be sure that the local minimum/maximum is a global one: for example, points on the boundary could exceed the local maximum you found. So, when finding a global maximum, compare the interior critical values against the boundary values before drawing the final conclusion.
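The saddle-point caveat can be seen concretely. The following sketch (SymPy assumed, example function mine) finds a point where both partial derivatives vanish, yet the Hessian has eigenvalues of mixed sign, so it is neither a minimum nor a maximum:

```python
# A critical point that is a saddle, not an extremum.
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = x**2 - y**2                            # classic saddle surface
grad = [sp.diff(F, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)   # [{x: 0, y: 0}]
H = sp.hessian(F, (x, y)).subs(crit[0])
print(H.eigenvals())                       # {2: 1, -2: 1}: mixed signs, a saddle
```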
The answer by @yujie-zha misses a few important points, so let me elaborate on this.
Congratulations, you rediscovered a well-known theorem which is sometimes called in France the “principe de Fermat,” sharing its name with a reflection law in optics and with the remark that there is no strictly decreasing sequence of positive integers. Pierre de Fermat was a French mathematician living in the 17th century.
Your discussion covers a lot of useful cases but it has two major flaws:
A point where the quantity $F = f - g$ reaches its minimum does not need to exist. A simple example is provided by $1/x$ for $x\gt0$: it is always positive, but its infimum $0$ is never attained.
If such a point exists, the geometry of the set where the minimum of $F$ is realised can be arbitrarily complicated, so even if it has infinitely many points, it does not need to be so nice that you can actually eliminate a variable. Functions $F$ which allow one to eliminate variables are called submersions. Furthermore, finding that point can be impracticable.
Regarding 1, a common hypothesis guaranteeing that the minimum is actually realised is that the function $F$ we study is continuous and defined over a compact domain. This might not tell you much for now, but you can see what happens if you try to build a function that is always positive on the real line but whose minimum “flees to the infinity” – a colloquial way to say that it’s asymptotically zero, if you are already familiar with these terms.
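Here is one such function, checked symbolically in a short sketch (SymPy assumed, the example function my own): it is positive everywhere, yet its infimum is $0$ and is never attained, so no minimum exists.

```python
# A positive function whose minimum "flees to the infinity".
import sympy as sp

x = sp.symbols('x', real=True)
F = sp.exp(-x**2)             # strictly positive for every real x
print(sp.limit(F, x, sp.oo))  # 0: the infimum, approached but never reached
print(F.subs(x, 0))           # 1: the only critical point is a maximum
```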
Regarding 2, a theorem by Borel states that any closed subset of a Euclidean space is the zero locus of a smooth function. This will also probably not tell you much, but you can try to find functions $F$ that are positive, everywhere differentiable over the real line, and whose zero locus is as complicated a closed set as you can manage.
You can also look for similar examples in higher dimensions.
This aside, the method you suggest actually works, but it happens to be not as easy to use as you seem to think. Other common sources for inequalities are convex functions, esp. Jensen’s inequality, the Cauchy-Schwarz or Cauchy-Bouniakovski inequality, and the modulus of a holomorphic function.
There are a few issues with this method.
Of course it could be argued that part of solving an inequality always involves a proof, but still it is important to know whether you are trying to show that a given inequality holds for all values of the variable(s), or whether it holds for some values of the variable(s).
“Now, in order to prove our inequality, we have to prove that the LHS has a minimum value and that that minimum value is greater than or equal to $0$.”
This is not true. For a simple counterexample, consider the inequality $e^{-x}>-2$, which is definitely true for all (real) $x$. But the function $f(x)=e^{-x}+2$ has no minimum value. It has a lower bound, even a greatest lower bound (namely $2$, since $f(x) > 2$ for all $x$), but it never actually attains that greatest lower bound.
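This counterexample can be inspected symbolically. A sketch assuming SymPy: the derivative never vanishes, and the greatest lower bound $2$ is only approached in the limit.

```python
# e^{-x} + 2: bounded below, but no minimum and no critical point.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x) + 2
print(sp.solve(sp.diff(f, x), x))  # []: -e^{-x} is never zero
print(sp.limit(f, x, sp.oo))       # 2: the greatest lower bound, never attained
```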
Even if a function does attain a minimum value, there is no guarantee that the derivative there is $0$. For example, the function $f(x)=x^{2/3}$ is defined on all of $\mathbb{R}$ and attains its minimum value at $x=0$, but the derivative there is not $0$ because the function isn’t differentiable at $x=0$. In general, remember that critical values of a function occur where the derivative is zero or undefined.
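The non-differentiability at $x=0$ can also be exhibited with a short sketch (SymPy assumed). Since SymPy’s power uses the principal branch, we look at the behaviour from the right: the slope of $x^{2/3}$ blows up as $x\to 0^+$.

```python
# x^{2/3}: the minimum sits where the derivative is undefined, not zero.
import sympy as sp

x = sp.symbols('x', positive=True)     # inspect the right-hand side of 0
f = x**sp.Rational(2, 3)
fprime = sp.diff(f, x)                 # 2/(3*x**(1/3))
print(sp.limit(fprime, x, 0, dir='+')) # oo: the slope blows up at 0
```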
If you are dealing with a function that is defined on some restricted domain (rather than on the whole real line), a minimum value might be attained at an endpoint, rather than at a critical value. For example if we consider the function $f(x)=\tan(x)$ defined on $[0, \pi/2)$ (which would be the reasonable domain if $x$ represents an angle in a right triangle) then the minimum occurs at $x=0$.
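The endpoint case is easy to check as well. In this sketch (SymPy assumed), $\tan$ on $[0,\pi/2)$ attains its minimum at the endpoint $x=0$ even though the derivative there is $1$, not $0$:

```python
# tan on [0, pi/2): the minimum is at the endpoint, not at a critical point.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.tan(x)
print(sp.diff(f, x).subs(x, 0))  # 1: nonzero derivative at the endpoint x = 0
print(f.subs(x, 0))              # 0: yet the minimum over [0, pi/2) is here
```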
Having said all of this, it is true that if you can find the global minimum of a function, and show that minimum value is positive, that would establish the inequality you are trying to prove. But that’s a pretty big “if”.