How does partial fraction decomposition avoid division by zero?

This may be an incredibly stupid question, but why does partial fraction decomposition avoid division by zero? Let me give an example:

$$\frac{3x+2}{x(x+1)}=\frac{A}{x}+\frac{B}{x+1}$$

Multiplying both sides by $x(x+1)$ we have:

$$3x+2=A(x+1)+Bx$$

when $x \neq -1$ and $x \neq 0$.

What is traditionally done here is to set $x$ equal to $-1$ and $0$ in turn, revealing:
$$-3+2=-B \implies 1=B$$
and
$$2=A$$

so we find that

$$\frac{3x+2}{x(x+1)}=\frac{2}{x}+\frac{1}{x+1}$$

Why can $x$ be set equal to the roots of the denominator (in this case, $0$ and $-1$) without creating a division by zero problem?


Hint $\ $ If $\rm\:f(x),\,g(x)\,$ and $\rm\:h(x)\!\ne\! 0\:$ are polynomial functions over $\rm\:\mathbb R\:$ (or any infinite field) then

$$\rm\begin{eqnarray}\rm \dfrac{f(x)}{h(x)} = \dfrac{g(x)}{h(x)} &\Rightarrow&\rm\ f(x) = g(x)\ \ for\ all\,\ x\in\mathbb R\, \ such\ that\,\ h(x)\ne 0\\
&\Rightarrow&\rm\ f(x) = g(x)\ \ for\ all\ \,x\in \mathbb R
\end{eqnarray}$$

since $\rm\:p(x) = f(x)\!-\!g(x) = 0\:$ has infinitely many roots [all $\rm\,x\in \mathbb R\,$ except the finitely many roots of $\rm\:h(x)$], $ $ so $\rm\:p\:$ is the zero polynomial, since a nonzero polynomial over a field has only finitely many roots (no more than its degree). Hence $\rm\: 0 = p = f -g\:\Rightarrow\: f = g.$

Thus to solve for the variables that occur in $\rm\:g\:$ it is valid to evaluate $\rm\:f(x) = g(x)\:$ at any $\rm\:x\in \mathbb R,\:$ since it holds true for all $\rm\:x\in \mathbb R\:$ (which includes all real roots of $\rm h).$

Remark $ $ The method you describe is known as the Heaviside cover-up method. It can be generalized to higher-degree denominators as I explain here.
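As an illustrative sketch (not part of the original answer), the cover-up recipe for this very example fits in a few lines of plain Python; the helper name `cover_up` is made up for the illustration:

```python
# Heaviside cover-up for (3x + 2) / (x (x + 1)):
# to get the coefficient over a factor (x - r), delete ("cover up") that
# factor in the denominator and evaluate what remains at x = r.

def cover_up(numerator, other_factors, root):
    """Evaluate numerator(root) over the product of the remaining factors at root."""
    denom = 1
    for f in other_factors:
        denom *= f(root)
    return numerator(root) / denom

p = lambda x: 3 * x + 2

A = cover_up(p, [lambda x: x + 1], 0)   # cover up the factor x,     evaluate at x = 0
B = cover_up(p, [lambda x: x], -1)      # cover up the factor x + 1, evaluate at x = -1

print(A, B)  # 2.0 1.0
```

The validity of evaluating at the roots is exactly what the hint above justifies: the polynomial identity holds for all real $x$, roots included.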

Good question! This is my crude interpretation (see Bill’s answer for a shot of rigor).

What is actually being equated is the numerator, not the denominator. So in your example, you have that

$$\frac{{3x + 2}}{{x\left( {x + 1} \right)}} = \frac{A}{x} + \frac{B}{{x + 1}}$$

if

$$\frac{{3x + 2}}{{x\left( {x + 1} \right)}} = \frac{{A\left( {x + 1} \right) + Bx}}{{x\left( {x + 1} \right)}}$$

if $${3x + 2 = A\left( {x + 1} \right) + Bx}$$

$$3x + 2 = \left( {A + B} \right)x + A$$

which implies

$${A + B}=3$$

$$A=2$$

which in turn gives what you have.
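The coefficient-matching route just described can be checked numerically; this is only an illustrative sketch, not part of the answer:

```python
# Coefficient matching: 3x + 2 = (A + B) x + A gives the 2x2 linear system
#   A + B = 3   (coefficient of x)
#   A     = 2   (constant term)
# which back-substitution solves immediately.

A = 2          # from the constant term
B = 3 - A      # from the x coefficient

# Sanity check: the original rational identity holds away from the roots.
for x in [1, 2, 5, -3]:
    lhs = (3 * x + 2) / (x * (x + 1))
    rhs = A / x + B / (x + 1)
    assert abs(lhs - rhs) < 1e-12

print(A, B)  # 2 1
```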

When we equate numerators we “forget” about the denominators. We’re focused on the polynomial equality

$$3x + 2 = \left( {A + B} \right)x + A$$

only. Though it might be unsettling to substitute the roots of the denominators, we’re not operating on the denominators themselves, so we’re safe.

Paying careful attention to the logic of the first step, we are saying that (for a given $A$ and $B$), the equation

$$ \frac{3x+2}{x(x+1)}=\frac{A}{x}+\frac{B}{x+1} $$

holds for all $x \neq 0,-1$ if and only if the equation

$$ 3x+2=A(x+1)+Bx $$

holds for all $x \neq 0,-1$.

Now, if we can find an $A$ and a $B$ so that $3y+2=A(y+1)+By$ holds for all values of $y$, then clearly $3x+2=A(x+1)+Bx $ holds for all $x \neq 0,-1$. So if substituting $y=0$ and $y=-1$ allows us to find $A$ and $B$, then we get a good answer.

Incidentally, a stronger statement is true: the equation

$$ 3x+2=A(x+1)+Bx $$

holds for all $x \neq 0,-1$ if and only if the equation

$$ 3y+2=A(y+1)+By $$

holds for all $y$. So this guarantees that we don’t lose any solutions to the former problem when we solve it by instead considering the latter problem.
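Substituting the two roots as described can be sketched directly; as an illustrative check (names and sample points chosen here, not in the answer), the recovered $A$ and $B$ make the polynomial identity hold everywhere, roots included:

```python
# Substituting y = 0 and y = -1 into 3y + 2 = A(y + 1) + B y
# isolates A and B in turn.

A = 3 * 0 + 2           # y = 0 kills the B*y term:        2 = A
B = -(3 * (-1) + 2)     # y = -1 kills the A*(y+1) term:  -1 = -B

# The resulting polynomial identity holds for every y, including the roots.
for y in [-2, -1, 0, 1, 2, 10]:
    assert 3 * y + 2 == A * (y + 1) + B * y

print(A, B)  # 2 1
```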

Aside: if one pays attention to what they mean, one doesn’t really need to introduce a new dummy variable $y$. However, I hoped it might add a bit more clarity if the variable $x$ is always restricted to be $\neq 0,-1$.

It may be useful to note that you use a similar sort of reasoning for limits. For example, to find the value of

$$ \lim_{x \to 0} \frac{x^2}{x} $$

you observe that $x^2/x = x$ for all $x \neq 0$ so that

$$ \lim_{x \to 0} \frac{x^2}{x} = \lim_{x \to 0} x$$

and then you apply the fact that $x$ is continuous at $0$ to obtain

$$\lim_{x \to 0} x = 0$$
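Numerically, the same point can be illustrated in a few lines of Python (a sketch, with sample points chosen here for illustration):

```python
# The quotient x**2 / x is undefined at x = 0, but equals x at every other
# point, so the limit as x -> 0 is governed by the simplified expression x.
samples = [0.1, 0.01, 0.001, 1e-6]
for x in samples:
    assert abs(x * x / x - x) < 1e-15   # the simplification holds off 0

# As x shrinks, the quotient shrinks with it, approaching the limit 0.
print(samples[-1] ** 2 / samples[-1])
```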

It’s a legit question. The key point is the following lemma:

Let $f(x)=p_1(x)/q_1(x)$, $g(x)=p_2(x)/q_2(x)$ be two rational functions. If $f(x)=g(x)$ for all $x$ such that $q_1(x)\neq 0$ and $q_2(x)\neq 0$, then $p_1(x)q_2(x)=p_2(x)q_1(x)$ everywhere; in particular, if the two sides are written over the same denominator, the numerators are equal everywhere. This implies that you can get rid of the denominators (for instance by multiplying both sides by the least common denominator) and enforce equality only between the numerators.

Well, generalizing the question to the complex field, and thus to meromorphic functions, we can say that the division by zero is actually sought rather than avoided.

In fact, we are developing $f(x)$ (that is, $f(z)$) precisely around its poles and then taking advantage of the Mittag-Leffler theorem.

Then, in the case of a rational function, if the poles are simple and the degree of the numerator polynomial is less than that of the denominator, we are assured that we can write
$$
f(z) = \frac{p(z)}{\left( z - z_1 \right) \cdots \left( z - z_n \right) \cdots} = \frac{a_1}{z - z_1} + \cdots + \frac{a_n}{z - z_n} + \cdots
$$
and that
$$
\lim_{z \to z_k} \left( z - z_k \right) f(z) = a_k = \frac{p(z_k)}{\prod_{n \ne k} \left( z_k - z_n \right)}
$$
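For the rational function from the question, these residues can be approximated by evaluating close to each pole; this is an illustrative numeric sketch (step size `eps` is a choice made here, not in the answer), and it recovers the same $A$ and $B$ as the cover-up method:

```python
# For f(z) = (3z + 2) / (z (z + 1)) the simple poles sit at z = 0 and z = -1,
# and the partial-fraction coefficients are the residues
#   a_k = lim_{z -> z_k} (z - z_k) f(z),
# approximated here by evaluating very close to each pole.

def f(z):
    return (3 * z + 2) / (z * (z + 1))

def residue(pole, eps=1e-8):
    z = pole + eps
    return (z - pole) * f(z)

a0 = residue(0.0)    # expected 2, matching A
a1 = residue(-1.0)   # expected 1, matching B
print(round(a0, 6), round(a1, 6))  # 2.0 1.0
```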