The formalism behind integration by substitution

When you are doing integration by substitution you do the following working.
$$\begin{align*}
u&=f(x)\\
\Rightarrow\frac{du}{dx}&=f^{\prime}(x)\\
\Rightarrow du&=f^{\prime}(x)dx&(1)\\
\Rightarrow dx&=\frac{du}{f^{\prime}(x)}\\
\end{align*}$$
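For a concrete instance of the same working (an illustrative choice, not essential to the question), take $f(x)=x^{2}$:

$$\begin{align*}
u&=x^{2}\\
\Rightarrow\frac{du}{dx}&=2x\\
\Rightarrow du&=2x\,dx\\
\Rightarrow dx&=\frac{du}{2x}
\end{align*}$$

so that, for example, $\int 2x\cos(x^{2})\,dx$ “becomes” $\int\cos(u)\,du=\sin(u)+C=\sin(x^{2})+C$; here the step at line $(1)$ is the claim “$du=2x\,dx$”.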

My question is: what on earth is going on at line $(1)$?!?

This has been bugging me for, like, forever! You see, when I was taught this in my undergrad I was told something along the lines of the following:

You just treat $\frac{du}{dx}$ like a fraction. Similarly, when you are doing the chain rule $\frac{dy}{dx}=\frac{dy}{dv}\times\frac{dv}{dx}$ you “cancel” the $dv$ terms. They are just like fractions. However, never, ever say this to a pure mathematician.

Now, I am a pure mathematician. And quite frankly I don’t care if people think of these as fractions or not. I know that they are not fractions (but rather the limit of a difference quotient as the difference tends to zero). But I figure I should start caring now… So, more precisely,

$\frac{du}{dx}$ has a meaning, but so far as I know $du$ and $dx$ do not have a meaning. Therefore, why can we treat $\frac{du}{dx}$ as a fraction when we are doing integration by substitution? What is actually going on at line $(1)$?

Answers

Recall that $u$-substitution is really the inverse rule of the chain rule, just like integration by parts is the inverse rule of the product rule. The essence of the chain rule is that

$$ \frac{\mathrm{d}y}{\mathrm{d}x} = \frac{\mathrm{d}y}{\mathrm{d}u}\frac{\mathrm{d}u}{\mathrm{d}x},$$

which is why we like to write derivatives as ratios – often, when they look like they cancel, they really “do cancel,” so to speak.

A better way of writing $u$-substitution is to say that $\dfrac{\mathrm{d}u}{\mathrm{d}x} = f'(x)$, though we might as well notate this as $u'(x)$, since that’s what we’re really doing. Then

$$ \int g(u(x))u'(x) \mathrm{d}x = \color{#F01C2C}{\int g(u(x)) \frac{\mathrm{d}u}{\mathrm{d}x}\mathrm{d}x = \int g(u) \mathrm{d}u},$$

where I’ve notated the important equality in red. The step in red is visibly related to the chain rule: the part that looks like it cancels really does cancel. $\diamondsuit$

The theme here is that this is valid because of the chain rule, and the notation is chosen to support the cancellation effects. The fact that people go around separating this very convenient notation is largely for other reasons, and/or because they are implicitly invoking a good amount of knowledge of “differentials.”
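As a quick sanity check of the equality in red, one can have a computer algebra system work out both sides; the particular $g$ and $u$ below are illustrative choices of mine, not part of the original answer, and the two antiderivatives agree up to a constant:

```python
# A small check of the equality in red:  ∫ g(u(x)) (du/dx) dx  agrees with  ∫ g(u) du
# evaluated at u = u(x).  The choices g(u) = exp(u), u(x) = x**2 are purely illustrative.
import sympy as sp

x, u = sp.symbols('x u')
u_of_x = x**2                                            # u(x), an illustrative choice
g = sp.exp                                               # g, an illustrative choice

lhs = sp.integrate(g(u_of_x) * sp.diff(u_of_x, x), x)    # ∫ g(u(x)) u'(x) dx
rhs = sp.integrate(g(u), u).subs(u, u_of_x)              # ∫ g(u) du, then put u = u(x) back
print(sp.simplify(lhs - rhs))                            # 0 (they agree up to a constant)
```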

We can even more directly relate this to the chain rule by giving a proof. Consider the function

$$ F(x) = \int_{0}^x g(t)\mathrm{d}t.$$

Consider the function $F(u(x))$ and differentiate it:

$$ \begin{align}
\bigl(F(u(x))\bigr)' &= F'(u(x)) u'(x) = \frac{\mathrm{d}F}{\mathrm{d}u}\frac{\mathrm{d}u}{\mathrm{d}x}\\
&=\left.\frac{\mathrm{d}}{\mathrm{d}u}\int_{0}^{u} g(t)\,\mathrm{d}t\,\right|_{u=u(x)} \cdot u'(x)\\
&= g(u(x))u'(x).
\end{align}$$
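Here is a small symbolic check of that derivative computation, with $F$ defined exactly as above and $g$, $u$ chosen only for illustration:

```python
# Symbolic check that (F(u(x)))' = g(u(x)) u'(x), with F(x) = ∫_0^x g(t) dt as in the proof;
# the particular g(t) = cos(t) and u(x) = x**2 + 1 are illustrative picks of mine.
import sympy as sp

x, t = sp.symbols('x t')
g = sp.cos
u_of_x = x**2 + 1

F_of_u = sp.integrate(g(t), (t, 0, u_of_x))      # F(u(x)) = ∫_0^{u(x)} g(t) dt
lhs = sp.diff(F_of_u, x)                         # (F(u(x)))'
rhs = g(u_of_x) * sp.diff(u_of_x, x)             # g(u(x)) u'(x)
print(sp.simplify(lhs - rhs))                    # 0
```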

The second fundamental theorem of calculus says that

$$\begin{align}
\int_a^b g(u(x))u'(x)\mathrm{d}x &= F(u(b)) - F(u(a)) \\
&= \int_{a}^{b} g(u(t))u'(t)\mathrm{d}t \\
&=\int_{a}^{b}g(u(t))\frac{\mathrm{d}u}{\mathrm{d}t}\mathrm{d}t.
\end{align}$$

Of course, we also know that $\displaystyle F(u(b)) - F(u(a)) = \int_{u(a)}^{u(b)} g(t) \mathrm{d}t = \int_{u(a)}^{u(b)} g(u) \mathrm{d}u$.
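And a quick numerical confirmation of the resulting change-of-limits formula, again with illustrative choices of $g$, $u$, $a$, $b$ of my own (using `scipy` for the quadrature):

```python
# Numerical confirmation that ∫_a^b g(u(x)) u'(x) dx = ∫_{u(a)}^{u(b)} g(u) du.
# g(t) = sin(t), u(x) = x**3 + x**2, a = 0, b = 1 are illustrative choices only.
import math
from scipy.integrate import quad

g = math.sin
u = lambda x: x**3 + x**2
du = lambda x: 3*x**2 + 2*x       # u'(x)
a, b = 0.0, 1.0

left, _ = quad(lambda x: g(u(x)) * du(x), a, b)   # ∫_a^b g(u(x)) u'(x) dx
right, _ = quad(g, u(a), u(b))                    # ∫_{u(a)}^{u(b)} g(u) du
print(left, right)                                # both ≈ 1 - cos(2) ≈ 1.416
```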

Consider evaluating $\int (3x^2 + 2x) e^{x^3 + x^2} \, dx$ (as in this Khan Academy video).

Often teachers will say, let $u = x^3 + x^2$, and note that “$du = (3x^2 + 2x) dx$”.
Therefore, they say,
\begin{align}
\int (3x^2 + 2x) e^{x^3 + x^2} \, dx &= \int e^u du \\
&= e^u + C \\
&= e^{x^3 + x^2} + C.
\end{align}

However, this explanation is confusing because there’s no such thing as $du$ or $dx$.

A more clear (in my opinion) and perfectly rigorous explanation is just to notice that our integral has the form $\int f(g(x)) g'(x) dx$, and use the rule
\begin{equation}
\int f(g(x)) g'(x) dx = F(g(x)) + C
\end{equation}
where $F$ is an antiderivative of $f$. This rule is clearly true, because it’s nothing more than the chain rule in reverse. There’s no need to use any “infinitesimals” or anything.
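To see the rule in action without any talk of differentials, one can verify the worked example above by differentiation alone; the sketch below uses `sympy`, my own choice of tool rather than the answer’s:

```python
# Verifying the worked example purely by differentiation: if the rule is right, then
# e^(x^3 + x^2) should differentiate back to (3x^2 + 2x) e^(x^3 + x^2).
import sympy as sp

x = sp.symbols('x')
candidate = sp.exp(x**3 + x**2)                          # F(g(x)) with F(t) = e^t, g(x) = x^3 + x^2
integrand = (3*x**2 + 2*x) * sp.exp(x**3 + x**2)         # the original integrand
print(sp.simplify(sp.diff(candidate, x) - integrand))    # 0, so the antiderivative is correct
```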

One way to interpret $df$ (for $f \,:\, \mathbb{R} \to \mathbb{R}$ for simplicity) is to view it as a map $$
df \,:\, \mathbb{R}\to \left(\mathbb{R} \to \mathbb{R}\right) \,:\, c \mapsto \left(x \mapsto xF_c\right) \text{.}
$$
In plain English, $df$ is a map which maps each point in $\mathbb{R}$ to a linear function $\mathbb{R} \to \mathbb{R}$. For each $c$, the linear map $(df)(c) = x \mapsto xF_c$ is the best linear approximation of $f$ at the point $c$. We know, of course, that this means nothing other than that $F_c = f'(c)$ – after all, that’s one way to define the derivative: as the slope of the best linear approximation at the point $c$.

So what is $\frac{du}{dv}$, then? It’s a quotient of maps, and if you interpret it simply point-wise, you get $$
\frac{du}{dv} = \frac{(c,x) \mapsto xU_c}{(c,x) \mapsto xV_c}
= (c,x) \mapsto \frac{xU_c}{xV_c} = (c,x) \mapsto \frac{U_c}{V_c} \text{.}
$$
This doesn’t depend on $x$ anymore, so we may re-interpret it as a function $\mathbb{R} \to \mathbb{R}$, and if $u=u(v)$ and $v$ is an independent variable, then $U_c = u'(c)$ and $V_c = 1$, so we get $\frac{du}{dv} \,:\, \mathbb{R} \to \mathbb{R} \,:\, c \mapsto u'(c)$, i.e. $\frac{du}{dv} = u'$.
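A literal, if numerical, rendering of this viewpoint might look like the following sketch, where `u` and `v` are my own illustrative choices and derivatives are approximated by central differences; the quotient of the two linear maps is independent of $x$ and returns $u'(c)$, as claimed:

```python
# A numerical sketch of the viewpoint above: d(f) maps a point c to the linear map x ↦ x·f'(c),
# with f'(c) approximated by a central difference.
def d(f, h=1e-6):
    """Return df: c ↦ (x ↦ x * f'(c))."""
    return lambda c: (lambda x: x * (f(c + h) - f(c - h)) / (2 * h))

u = lambda v: v**3        # u as a function of the independent variable v (illustrative)
v = lambda s: s           # the independent variable itself, so V_c = 1

du, dv = d(u), d(v)
c, x = 2.0, 5.0
print(du(c)(x) / dv(c)(x))   # ≈ 12.0 = u'(2); the x's cancel, leaving U_c / V_c
```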

When applying notions like the chain rule and substitution we treat derivatives just like fractions, but the rules are slightly bent. For the multivariable chain rule:

we have $\frac{\partial f(g(t),h(t))}{\partial t}= \frac{\partial f}{\partial g}\frac{\partial g}{\partial t}+\frac{\partial f}{\partial h}\frac{\partial h}{\partial t}$, but if we “cancel” these down we get $\frac{\partial f(g(t),h(t))}{\partial t}=2\frac{\partial f(g(t),h(t))}{\partial t}$, which is nonsense.
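A small symbolic check of the multivariable chain rule, for one illustrative choice of $f$, $g$, $h$ of my own, confirms that the two-term sum (and not a naive cancellation) is what holds:

```python
# Symbolic check of the multivariable chain rule for f(a, b) = a*b, g(t) = t**2, h(t) = sin(t)
# (illustrative picks): the correct derivative is the two-term sum.
import sympy as sp

t, a, b = sp.symbols('t a b')
f = a * b
g, h = t**2, sp.sin(t)

direct = sp.diff(f.subs({a: g, b: h}), t)                        # d/dt of f(g(t), h(t))
chain = (sp.diff(f, a).subs({a: g, b: h}) * sp.diff(g, t)
         + sp.diff(f, b).subs({a: g, b: h}) * sp.diff(h, t))     # ∂f/∂g·g' + ∂f/∂h·h'
print(sp.simplify(direct - chain))                               # 0
```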

But in one variable, just as above, everything runs smoothly, and it is good to note that things like “$dx$” are infinitesimally small changes in $x$; so when we consider $du/dx$, we consider both “$du$” and “$dx$” as they become infinitesimally small, which is why we can manipulate them like fractions.

Consider the geometrical interpretation: you have a right triangle with legs $\Delta x$ and $\Delta u$, and $f'$ is the slope, $f'=k=\tan(\alpha)=\frac{\Delta u}{\Delta x}$. Now let $\Delta x \rightarrow 0$ and you get the definition of the derivative… So $du$ and $dx$ do have a meaning, and writing something like $dx=\frac{du}{f'(x)}$ does make sense.
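A quick numerical illustration of this limit, for one choice of $f$ and base point of my own:

```python
# Watching Δu/Δx approach f'(x) as Δx → 0, for the illustrative choice u = f(x) = x**3
# at x = 1, where f'(1) = 3 exactly.
f = lambda x: x**3
x0 = 1.0
for dx in (1.0, 0.1, 0.01, 0.001, 0.0001):
    du = f(x0 + dx) - f(x0)      # Δu corresponding to this Δx
    print(dx, du / dx)           # the ratio tends to 3 = f'(1)
```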

I know that they are not fractions […]

Well, by non-standard analysis (following a book I referred to in a comment on a similar answer), that’s where you’re wrong. And that premise supports the whole question; if you believed the opposite, you wouldn’t have asked it.

My question is: what on earth is going on at line (1)?!?

First, $u$-substitution, while used in integration, is on its own an operation of differentiation. Since differentiation is a function (on functions), and both sides are equal, the differentials must be equal: by the definition of a function, $a=b\Rightarrow f(a)=f(b)$.

So, what is differentiation? It is the infinitesimal variation along the tangent to a function at a point. What you might be thinking is: why distinguish between the variation along the tangent and along the function itself, if they coincide when you zoom in enough? The answer is that this allows us both to define the derivative as a genuine (hyper)real fraction (pun intended) and to avoid simply, informally discarding the smaller infinitesimals.

An accurate image to illustrate this, taken from the book, is the following:

http://i.stack.imgur.com/OqYKE.png

As the differential lives on the tangent, and knowing the tangent requires knowing the derivative, the differential is defined by $dy = f'(x)\ dx$. Note that everything here is a number, and by the transfer principle the usual rules of algebra apply.
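A small numerical illustration of the distinction between the differential (the increment along the tangent) and the actual increment of the function, for an $f$ and an $x$ chosen purely for illustration:

```python
# Comparing the differential dy = f'(x) dx (increment along the tangent) with the actual
# increment Δy = f(x + dx) - f(x), for the illustrative choice f(x) = x**2 at x = 1.
f = lambda x: x**2
fprime = lambda x: 2 * x
x0 = 1.0
for dx in (0.1, 0.01, 0.001):
    dy = fprime(x0) * dx                  # dy as defined above: dy = f'(x) dx
    delta_y = f(x0 + dx) - f(x0)          # the true change in f
    print(dx, dy, delta_y, delta_y - dy)  # the gap shrinks like dx**2
```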