Why is integration so much harder than differentiation?

If a function is built by combining other functions whose derivatives are known, via composition, addition, etc., its derivative can be calculated using the chain rule and the like. But the integral of a product can't in general be expressed in terms of the integrals of its factors, and forget about composition! Why is this?


Here is an extremely generic answer. Differentiation is a “local” operation: to compute the derivative of a function at a point you only have to know how it behaves in a neighborhood of that point. But integration is a “global” operation: to compute the definite integral of a function in an interval you have to know how it behaves on the entire interval (and to compute the indefinite integral you have to know how it behaves on all intervals). That is a lot of information to summarize. Generally, local things are much easier than global things.

On the other hand, if you can do the global things, they tend to be useful because of how much information goes into them. That’s why theorems like the fundamental theorem of calculus, the full form of Stokes’ theorem, and the main theorems of complex analysis are so powerful: they let us calculate global things in terms of slightly less global things.

The family of functions you generally consider (e.g., elementary functions) is closed under differentiation; that is, the derivative of such a function is still in the family. However, the family is not in general closed under integration. For instance, even the family of rational functions is not closed under integration, because $\int \frac{1}{x}\,dx = \log x$.
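To see this closure failure concretely, here is a small check with SymPy (my choice of tool, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**2 + 1)  # a rational function

print(sp.diff(f, x))         # -2*x/(x**2 + 1)**2  -- still rational
print(sp.integrate(f, x))    # atan(x)             -- leaves the family
print(sp.integrate(1/x, x))  # log(x)              -- leaves the family
```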

Answering an old question just because I saw it on the main page. From Roger Penrose (Road To Reality):

… there is a striking contrast between the operations of differentiation and integration, in this calculus, with regard to which is the ‘easy’ one and which is the ‘difficult’ one. When it is a matter of applying the operations to explicit formulae involving known functions, it is differentiation which is ‘easy’ and integration ‘difficult’, and in many cases the latter may not be possible to carry out at all in an explicit way. On the other hand, when functions are not given in terms of formulae, but are provided in the form of tabulated lists of numerical data, then it is integration which is ‘easy’ and differentiation ‘difficult’, and the latter may not, strictly speaking, be possible at all in the ordinary way. Numerical techniques are generally concerned with approximations, but there is also a close analogue of this aspect of things in the exact theory, and again it is integration which can be performed in circumstances where differentiation cannot.
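A quick numerical sketch of this contrast (my own illustration, assuming NumPy; not from Penrose): given noisy tabulated samples, summation averages the noise away while differencing amplifies it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 1000)
y = np.sin(x) + rng.normal(scale=1e-3, size=x.size)  # noisy table of sin(x)

# Trapezoid rule: stays close to the true integral, which is 2.
integral = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))
print(abs(integral - 2))                       # small (roughly 1e-4 here)

# Finite differences: the noise, divided by a tiny step, swamps cos(x).
derivative = np.gradient(y, x)
print(np.max(np.abs(derivative - np.cos(x))))  # orders of magnitude larger
```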

$\renewcommand{\d}{\mathrm{d}}$

I guess the OP is asking about symbolic integration. Other answers have already dealt with the numeric case, where integration is easy and differentiation is hard.

If you recall the definition of the derivative, $$\frac{\d f(x)}{\d x} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$ you can see it's just a subtraction and a division by a small increment. Even if you can't make any algebraic simplifications, it won't get more complex than that; and usually you can simplify a lot, because in the limit many terms drop out as being too small. From this definition it can be shown that if you know the derivatives of $f(x)$ and $g(x)$, then you can use them to express the derivatives of $f(x) \pm g(x)$, $f(x)g(x)$ and $f(g(x))$.
This makes symbolic differentiation easy: you just need to apply the rules recursively.
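To see just how mechanical that recursion is, here is a toy differentiator for sums and products (the encoding and names are my own illustration, not from the answer):

```python
def d(e, x):
    """Derivative of expression e with respect to the symbol x.
    An expression is a number, a symbol (a string), or a tuple
    ('+'|'*', a, b) for sums and products."""
    if isinstance(e, (int, float)):
        return 0                      # constants differentiate to 0
    if isinstance(e, str):
        return 1 if e == x else 0     # d(x)/dx = 1; other symbols are constants
    op, a, b = e
    if op == '+':                     # sum rule: (a + b)' = a' + b'
        return ('+', d(a, x), d(b, x))
    if op == '*':                     # product rule: (ab)' = a'b + ab'
        return ('+', ('*', d(a, x), b), ('*', a, d(b, x)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx of x*(x+3), returned unsimplified:
# ('+', ('*', 1, ('+', 'x', 3)), ('*', 'x', ('+', 1, 0)))
print(d(('*', 'x', ('+', 'x', 3)), 'x'))
```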

Now about integration. Integration is basically an infinite sum of small quantities. So when you see $\int f(x) \,\d x$, you can imagine it as an infinite sum $(f_1 + f_2 + \dots)\,\d x$, where the $f_i$ are consecutive values of the function.

This means that if you need to calculate $\int (a f(x) + b g(x))\,\d x$, you can imagine the sum $((af_1 + bg_1) + (af_2 + bg_2) + \dots)\,\d x$. Using associativity and distributivity, you can transform this into $a(f_1 + f_2 + \dots)\,\d x + b(g_1 + g_2 + \dots)\,\d x$.
So this means $\int (a f(x) + b g(x))\,\d x = a \int f(x)\,\d x + b \int g(x)\,\d x$.
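For instance, linearity alone is enough for an integrand like $3x + 2e^x$:

$$\int (3x + 2e^x)\,\d x = 3\int x\,\d x + 2\int e^x\,\d x = \frac{3x^2}{2} + 2e^x + C.$$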

But if you have $\int f(x) g(x)\,\d x$, you have the sum $(f_1 g_1 + f_2 g_2 + \dots)\,\d x$, from which you cannot factor out the sums of the $f$s and $g$s. This means there is no recursive rule for multiplication.

The same goes for $\int f(g(x))\,\d x$: you cannot extract anything from the sum $(f(g_1) + f(g_2) + \dots)\,\d x$ in general.
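A computer algebra system makes the asymmetry visible (SymPy again, my own illustration): each of the following has to be found by search or by leaving the elementary functions, not by a local recursive rule.

```python
import sympy as sp

x = sp.symbols('x')

# A product: no general rule; SymPy finds this one via integration by parts.
print(sp.integrate(x * sp.exp(x), x))  # equivalent to (x - 1)*exp(x)

# A composition: the antiderivative is not even an elementary function.
print(sp.integrate(sp.exp(x**2), x))   # sqrt(pi)*erfi(x)/2
```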

So far, linearity is the only property that is really useful. Let's see what we get by reversing the differentiation rules. We have the product rule:
$$\frac{\d (f(x)g(x))}{\d x} = f(x) \frac{\d g(x)}{\d x} + g(x) \frac{\d f(x)}{\d x}.$$ Integrating both sides and rearranging the terms, we get the well-known integration by parts formula:

$$\int f(x) \frac{\d g(x)}{\d x}\,\d x = f(x)g(x) - \int g(x) \frac{\d f(x)}{\d x}\,\d x.$$

But this formula is only useful if $\frac{\d f(x)}{\d x} \int g(x)\,\d x$ or
$\frac{\d g(x)}{\d x} \int f(x)\,\d x$ is easier to integrate than $f(x)g(x)$.

And it's often hard to see when this rule is useful. For example, when you try to integrate $\ln(x)$, it's not at all obvious to view it as $1 \cdot \ln(x)$, where the integral of $1$ is $x$ and the derivative of $\ln(x)$ is $\frac{1}{x}$, leading to the very simple integrand $x \cdot \frac{1}{x} = 1$, whose integral is again $x$.
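Written out, taking $f(x) = \ln(x)$ and $g(x) = x$ in the formula above:

$$\int \ln(x)\,\d x = \int \ln(x)\cdot 1\,\d x = x\ln(x) - \int x\cdot\frac{1}{x}\,\d x = x\ln(x) - x + C.$$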

Another well-known differentiation rule is the chain rule: $$\frac{\d f(g(x))}{\d x} = \frac{\d f(g(x))}{\d g(x)} \frac{\d g(x)}{\d x}.$$

Integrating both sides, you get the reverse chain rule:

$$f(g(x)) = \int \frac{\d f(g(x))}{\d g(x)} \frac{\d g(x)}{\d x}\,\d x.$$

But again, it's hard to see when it is useful. For example, what about integrating $\frac{x}{\sqrt{x^2 + c}}$? Is it obvious to you that $\frac{x}{\sqrt{x^2 + c}} = 2x \cdot \frac{1}{2\sqrt{x^2 + c}}$, and that this is the derivative of $\sqrt{x^2 + c}$? I guess not, unless someone has shown you the trick.
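Written out, with $g(x) = x^2 + c$, so that $\frac{\d g(x)}{\d x} = 2x$:

$$\int \frac{x}{\sqrt{x^2+c}}\,\d x = \int \frac{1}{2\sqrt{x^2+c}}\cdot 2x\,\d x = \sqrt{x^2+c} + C.$$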

While in differentiation you can mechanically apply the rules, in integration you need to recognize patterns and even introduce cancellations to bring the expression into the desired form, and this requires a lot of practice and intuition.

For example, how would you integrate $\sqrt{x^2 + 1}$?

First you turn it into a fraction:

$$\frac{x^2 + 1}{\sqrt{x^2+1}}$$

Then multiply and divide by 2:

$$\frac{2x^2 + 2}{2\sqrt{x^2+1}}$$

Separate the terms like this:

$$\frac{1}{2}\left(\frac{1}{\sqrt{x^2+1}}+\frac{x^2+1}{\sqrt{x^2+1}}+\frac{x^2}{\sqrt{x^2+1}} \right)$$

Play with the 2nd and 3rd terms:

$$\frac{1}{2}
\left(
\frac{1}{\sqrt{x^2+1}}+
1\cdot\sqrt{x^2+1}+
x\cdot 2x\cdot\frac{1}{2\sqrt{x^2+1}}
\right)$$

Now you can see that the first term in the brackets is the derivative of $\mathrm{arsinh}(x)$. The second and third terms together are, by the product rule, the derivative of $x\sqrt{x^2+1}$. Thus the integral is:

$$\frac{\mathrm{arsinh}(x)}{2} + \frac{x\sqrt{x^2+1}}{2} + C$$

Were these transformations obvious to you? Probably not… That's why differentiation is just mechanics while integration is an art…
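For what it's worth, a computer algebra system reproduces the result (SymPy, a verification of my own):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sqrt(x**2 + 1), x))  # x*sqrt(x**2 + 1)/2 + asinh(x)/2
```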

In the MIT course 6.001, “Structure and Interpretation of Computer Programs”, by Sussman and Abelson, this contrast is briefly discussed in terms of pattern matching. See the lecture video (at 3:56) or alternatively the transcript (p. 2, or see the quote below). The book used in the course does not provide further details.

Edit: Apparently, they discuss the Risch algorithm. It might be worthwhile to have a look at the same question on mathoverflow.SE: Why is differentiating mechanics and integration art?

And you know from calculus that it’s easy to produce derivatives of
arbitrary expressions. You also know from your elementary calculus
that it’s hard to produce integrals. Yet integrals and derivatives are
opposites of each other. They’re inverse operations. And they have the
same rules. What is special about these rules that makes it possible for one to produce derivatives easily, while integrals are so hard? Let's think about that very simply.

Look at these rules. Every one of these rules, when used in the direction for taking derivatives, which is in the direction of this arrow, the left side is matched against your expression, and the right side is the thing which is the derivative of that expression. The arrow is going that way. In each of these rules, the expressions on the right-hand side of the rule that are contained within derivatives are proper subexpressions of the expression on the left-hand side.

So here we see that the derivative of the sum, which is the expression on the left-hand side, is the sum of the derivatives of the pieces. So the rules moving to the right are reduction rules. The problem becomes easier. I turn a big complicated problem into lots of smaller problems and then combine the results, a perfect place for recursion to work.

If I’m going in the other direction like this, if I’m trying to
produce integrals, well there are several problems you see here. First
of all, if I try to integrate an expression like a sum, more than one
rule matches.
Here’s one that matches. Here’s one that matches. I
don’t know which one to take. And they may be different. I may get to
explore different things. Also, the expressions become larger in that
direction. And when the expressions become larger, then there’s no
guarantee that any particular path I choose will terminate, because we
will only terminate by accidental cancellation.
So that’s why
integrals are complicated searches and hard to do.

I will try to bring this to you in another way. Let us start by thinking in terms of something as simple as a straight line.

If I give you the equation of a line $y = mx + c$, its slope can be easily determined: in this case it is nothing but $m$. Now let me make the question a bit trickier. Say that the line given above intersects the x and y axes at some points, and I ask you to give me the area between the line, the abscissa and the ordinate.

This is obviously not as easy as finding the slope. You will have to find the intersections of the line with the axes, get the two points of intersection, and then, taking the origin as a third point, find the area of the resulting triangle.
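For concreteness (with sign assumptions of my own: $m < 0$ and $c > 0$, so that the triangle lies in the first quadrant), the intercepts are $(0, c)$ and $(-c/m, 0)$, and the area of the triangle is

$$\frac{1}{2}\cdot c \cdot \left(-\frac{c}{m}\right) = \frac{c^2}{2|m|}.$$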
This is not the only method of finding the area; as we know, there are loads of formulas for the area of a triangle.
Let us now view this in terms of curves. If the simple process of finding the slope of a line is translated to curves, we get differential calculus, which is a bit more complicated than finding slopes of straight lines.

Add finding the area under the curve to that, and you get integral calculus, which, by our experience with straight lines, we know should be much harder than finding the slope, i.e., differentiation. Also, there is no one fixed method for finding the area of a figure; hence the many methods of integration.