Why does this algorithm for finding an equation whose roots are the cubes of the roots of a given equation work?

Let a polynomial $p(x)$ of degree $n$ be given. Our aim is to find another polynomial, $q(x)$, whose roots are the cubes of the roots of $p(x)$.

Our algorithm goes like this:

Step 1 Replace $x$ by $x^\frac{1}{3}$.

Step 2 Collect all the terms involving $x^\frac{1}{3}$ and $x^\frac{2}{3}$ on one side.

Step 3 Cube both the sides and simplify.

Although I get the correct answer by following this algorithm, I can’t get my head around the reasoning behind why it works. So, can you kindly help me figure it out?
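For concreteness, here is the algorithm applied to $p(x)=x^2-3x+2$, whose roots are $1$ and $2$. Steps 1 and 2 give

$$x^\frac{2}{3}-3x^\frac{1}{3}+2=0 \;\Longrightarrow\; x^\frac{2}{3}-3x^\frac{1}{3}=-2$$

Cubing both sides gives $x^2-27x-9x\left(x^\frac{2}{3}-3x^\frac{1}{3}\right)=-8$, and substituting $-2$ for the bracket yields $x^2-9x+8=0$, whose roots $1$ and $8$ are indeed the cubes of $1$ and $2$.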


$\fbox{EDIT 1:}$

Proof that the final expression obtained for $q(x)$ is a polynomial:

We originally had $$p(x)=0$$

Step 1 Replace $x$ by $x^\frac{1}{3}$.

After replacing $x$ by $x^\frac{1}{3}$ we get an equation like
$$p_{1}(x)+x^\frac{2}{3}p_{2}(x)+x^\frac{1}{3}p_{3}(x)=0$$

Here, $p_{1}(x)$, $p_{2}(x)$ and $p_{3}(x)$ are polynomials in $x$.

Step 2 Collect all the terms involving $x^\frac{1}{3}$ and $x^\frac{2}{3}$ on one side.

Now we have
$$x^\frac{2}{3}p_{2}(x)+x^\frac{1}{3}p_{3}(x)=-p_{1}(x) \qquad(1)$$

Step 3 Cube both the sides and simplify.

Cubing both the sides of $(1)$ we get
$$x^{2}p^{3}_{2}(x)+xp^{3}_{3}(x)+3xp_{2}(x)p_{3}(x)[x^\frac{2}{3}p_{2}(x)+x^\frac{1}{3}p_{3}(x)]=-p^{3}_{1}(x)$$

$$p^{3}_{1}(x)+x^{2}p^{3}_{2}(x)+xp^{3}_{3}(x)+3xp_{2}(x)p_{3}(x)[x^\frac{2}{3}p_{2}(x)+x^\frac{1}{3}p_{3}(x)]=0 \qquad(2)$$

Now, from $(1)$ and $(2)$ we have
$$p^{3}_{1}(x)+x^{2}p^{3}_{2}(x)+xp^{3}_{3}(x)+3xp_{2}(x)p_{3}(x)[-p_{1}(x)]=0$$

Therefore, $q(x)=p^{3}_{1}(x)+x^{2}p^{3}_{2}(x)+xp^{3}_{3}(x)-3xp_{1}(x)p_{2}(x)p_{3}(x)$

Clearly, $q(x)$ is a polynomial.
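The formula for $q(x)$ just derived translates directly into a short program. Below is a sketch in pure Python, representing polynomials as coefficient lists (lowest degree first); the helper names are mine, not standard:

```python
# Sketch of the identity above: split p into p1, p2, p3 according to the
# exponents mod 3, then build
#   q(x) = p1^3 + x^2 p2^3 + x p3^3 - 3 x p1 p2 p3.
# Polynomials are lists of coefficients, lowest degree first.

def poly_mul(a, b):
    """Multiply two coefficient lists."""
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def shift(a, k):
    """Multiply a polynomial by x^k."""
    return [0] * k + a if a else []

def cube_roots_poly(p):
    """Return q whose roots are the cubes of the roots of p, via
    q = p1^3 + x^2 p2^3 + x p3^3 - 3 x p1 p2 p3."""
    p1, p2, p3 = [], [], []
    for k, a in enumerate(p):
        # The term a*u^k becomes a*x^{k/3}; sort by k mod 3 as in Step 1:
        # k%3 == 0 -> p1,  k%3 == 1 -> x^{1/3} p3,  k%3 == 2 -> x^{2/3} p2.
        part = [p1, p3, p2][k % 3]
        idx = k // 3
        while len(part) <= idx:
            part.append(0)
        part[idx] += a
    cube = lambda a: poly_mul(poly_mul(a, a), a)
    q = poly_add(cube(p1), shift(cube(p2), 2))
    q = poly_add(q, shift(cube(p3), 1))
    minus3 = poly_mul([-3], poly_mul(p1, poly_mul(p2, p3)))
    return poly_add(q, shift(minus3, 1))

# p(u) = u^2 - 3u + 2 has roots 1 and 2, so q should have roots 1 and 8:
print(cube_roots_poly([2, -3, 1]))   # -> [8, -9, 1], i.e. x^2 - 9x + 8
```

For $p(u)=u^3-1$, whose roots are the three cube roots of unity, the same function returns the coefficients of $(x-1)^3$, as expected.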

Answer:

The canonical method to derive $q(x)$ is to eliminate $u$ between $p(u)=0$ and $u^3=x\,$, which can be done using polynomial resultants. By the definition of the resultant, it is $0$ iff the two equations have a common root, i.e. $q(x)=0$ for $x=u^3$ where $u$ is a root of $p$.
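As a sketch of why this works: for monic $p$ with roots $u_1,\dots,u_n$, the standard product formula for resultants gives $$\operatorname{Res}_u\!\left(p(u),\,u^3-x\right)=\prod_{i=1}^{n}\left(u_i^3-x\right)=(-1)^n\prod_{i=1}^{n}\left(x-u_i^3\right),$$ which vanishes exactly when $x$ is the cube of a root of $p$. For $p(u)=(u-1)(u-2)$, for example, this is $(1-x)(8-x)=x^2-9x+8$.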

The derivation of $q(x)$ in the posted question amounts to manually calculating the resultant.

As an alternative way to derive $q(x)$, multiply your first equation by $\sqrt[3]{x}$ twice:

$$
\begin{cases}
\begin{alignat}{3}
p_{1} & \cdot 1 + p_{2} && \cdot \sqrt[3]{x^2} + \;\;p_{3} & \cdot \sqrt[3]{x} = 0 \\
x \,p_{2} & \cdot 1 + p_{3} && \cdot \sqrt[3]{x^2} + \;\;p_{1} & \cdot \sqrt[3]{x} = 0 \\
x p_{3} & \cdot 1 + p_{1} && \cdot \sqrt[3]{x^2} + x\,p_{2} & \cdot \sqrt[3]{x} = 0
\end{alignat}
\end{cases}
$$

Eliminating $\sqrt[3]{x^2}, \sqrt[3]{x}\,$ between the $3$ equations will give a polynomial condition in $x$. The elimination is equivalent to considering the above as a linear homogeneous system in $\left(1, \sqrt[3]{x^2}, \sqrt[3]{x}\right)\,$, for which the condition to have non-trivial solutions is that its determinant be $0$, which then gives a polynomial $q(x)$ in $x\,$:

$$
0 = \begin{vmatrix}
p_1 & p_2 & p_3 \\
x\,p_2 & p_3 & p_1 \\
x\,p_3 & p_1 & x\,p_2
\end{vmatrix} = 3\,x\,p_1\,p_2\,p_3 - p_1^3 - x^2\,p_2^3 - x\,p_3^3
$$
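As a sanity check, the determinant identity above can be verified numerically for arbitrary values of $p_1, p_2, p_3, x$. A quick sketch (the helper names are mine):

```python
# Spot-check that the 3x3 elimination determinant equals
#   3 x p1 p2 p3 - p1^3 - x^2 p2^3 - x p3^3
# for several arbitrary integer values of p1, p2, p3, x.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def elimination_det(p1, p2, p3, x):
    return det3([[p1,     p2, p3],
                 [x * p2, p3, p1],
                 [x * p3, p1, x * p2]])

def closed_form(p1, p2, p3, x):
    return 3 * x * p1 * p2 * p3 - p1**3 - x**2 * p2**3 - x * p3**3

for vals in [(1, 2, 3, 5), (2, -1, 4, -3), (0, 7, 1, 2)]:
    assert elimination_det(*vals) == closed_form(*vals)
print("determinant identity verified")
```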


[ EDIT ]   Following the question in a comment, below is a step-by-step derivation of $q(x)$ in the simpler (but entirely similar) case where $q$ has as roots the squares (not cubes) of the roots of $p$.

The problem amounts to obtaining a polynomial equation in $x$ by eliminating $u$ between: $$p(u) = 0 \tag{1} $$ $$x = u^2 \tag{2}$$

First, collect the even powers in $p(u)$ under $q_0(u^2)$ and the odd powers under $q_1(u^2) \cdot u$ so that: $$p(u) = q_0(u^2) + q_1(u^2) \cdot u \tag{3}$$ with: $$\max(\,2 \deg q_0, \,1 + 2 \deg q_1\,) = \deg p\tag{4}$$

Replace $u^2=x\,$ per $(2)\,$: $$p(u) = q_0(x) + q_1(x) \cdot u \tag{5}$$

Multiply by $u$ and again per $(2)$ remember that $u^2=x\,$: $$p(u) \cdot u = q_0(x) \cdot u + q_1(x) \cdot x \tag{6}$$

Let $u$ be a root of $p(u)=0$ and rearrange $(5),(6)$ as:

$$
\begin{cases}
\begin{alignat}{3}
q_0(x) & + q_1(x) && \cdot u = 0 \\
x \cdot q_1(x) & + q_0(x) && \cdot u = 0 \\
\end{alignat}
\end{cases} \tag{7}
$$

Eliminate $u$ between the two linear equations: $$q_0^2(x)-x\,q_1^2(x)=0 \tag{8}$$

Finally, define: $$q(x)=q_0^2(x)-x\,q_1^2(x) \tag{9}$$

The previous steps prove that $p(u)=0 \implies q(x)=0$ where $x=u^2$ is the square of $u\,$. Also, $\deg q = \deg p$ per $(4)$, so $q$ has no roots other than the squares of the roots of $p\,$. [*]
(Rigorously, the latter conclusion only holds as stated if all roots of $p$ are simple, and no two have the same square. It can be proved in the general case as well, but the proof is more involved.)

For a worked example, consider $p(u)=u^2-3u+2$ with the roots $u_1=1,\,u_2=2\,$:

$$
\begin{cases}
q_0(u^2) = u^2 + 2 \\
q_1(u^2) = -3
\end{cases} \tag{3}
$$

$$
q(x)=q_0^2(x)-x\,q_1^2(x)=(x+2)^2-9 x=x^2 - 5 x + 4 \tag{9}
$$

The latter has the roots $\,x_1= 1 = u_1^2, \,x_2=4=u_2^2\,$.
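The squares construction of steps $(3)$–$(9)$ is even shorter in code, since splitting $p$ into $q_0$ and $q_1$ is just taking even- and odd-index coefficients. A sketch in pure Python (coefficient lists, lowest degree first; the function names are mine):

```python
# Sketch of steps (3)-(9): split p(u) into even and odd parts q0, q1,
# then form q(x) = q0(x)^2 - x * q1(x)^2.
# Polynomials are lists of coefficients, lowest degree first.

def poly_mul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_sub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
            for i in range(n)]

def squares_of_roots_poly(p):
    """Return q(x) = q0(x)^2 - x*q1(x)^2, whose roots are the squares
    of the roots of p."""
    q0 = p[0::2]   # even-degree coefficients of p -> q0(x)
    q1 = p[1::2]   # odd-degree coefficients of p  -> q1(x)
    return poly_sub(poly_mul(q0, q0), [0] + poly_mul(q1, q1))

# p(u) = u^2 - 3u + 2 (roots 1 and 2):
print(squares_of_roots_poly([2, -3, 1]))   # -> [4, -5, 1], i.e. x^2 - 5x + 4
```

This reproduces the worked example above: $q(x)=x^2-5x+4$ with roots $1=1^2$ and $4=2^2$.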