I don’t understand this step in this article:
$$A'(x)u_1(x) + B'(x)u_2(x) = 0$$
Why do we want $A=A(x)$ and $B=B(x)$ to take this form? On what basis is this form valid?
This is closely tied to the method of osculating parameters.
Suppose we wish to represent, with constant coefficients, some arbitrary function $u(x)$ with two linearly independent functions $u_1(x)$ and $u_2(x)$,
$$u(x) = A u_1(x) + B u_2(x).$$
In general this cannot be done.
The best we can do is match the value of the function and its derivative at some point $x_0$,
$$\begin{aligned}
u(x_0) &= A u_1(x_0) + B u_2(x_0), \\
u'(x_0) &= A u_1'(x_0) + B u_2'(x_0).
\end{aligned}$$
The conditions above determine the osculating parameters, the constants $A$ and $B$.
$A$ and $B$ will be different depending on the point $x_0$.
In general this fit will be poor at points far from $x_0$.
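To make this concrete, here is a small numerical sketch. The basis $u_1 = \cos$, $u_2 = \sin$, the target $u(x) = e^x$, and the match point $x_0 = 0$ are my own illustrative choices, not from the article: we solve the two matching conditions for $A$ and $B$ and see that the fit is exact at $x_0$ but poor far from it.

```python
import numpy as np

# Illustrative choices (not from the article): fit u(x) = e^x
# with the basis u1(x) = cos(x), u2(x) = sin(x) at x0 = 0.
u, up = np.exp, np.exp          # u and its derivative (d/dx e^x = e^x)
u1, u1p = np.cos, lambda x: -np.sin(x)
u2, u2p = np.sin, np.cos

x0 = 0.0

# The two matching conditions form a 2x2 linear system for A and B:
#   A u1(x0)  + B u2(x0)  = u(x0)
#   A u1'(x0) + B u2'(x0) = u'(x0)
M = np.array([[u1(x0), u2(x0)],
              [u1p(x0), u2p(x0)]])
A, B = np.linalg.solve(M, [u(x0), up(x0)])

fit = lambda x: A * u1(x) + B * u2(x)

print(A, B)                    # A = 1, B = 1 at x0 = 0
print(abs(fit(x0) - u(x0)))    # exact match at x0
print(abs(fit(1.5) - u(1.5)))  # large error away from x0
```

The osculating parameters depend on where we match: redoing the solve at a different $x_0$ yields different $A$ and $B$.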
The method of variation of parameters involves finding the osculating parameters $A$ and $B$ at every point.
That is, we let $A$ and $B$ be functions of $x$.
The condition that they are the osculating parameters is that they satisfy
$$\begin{aligned}
u_G(x) &= A(x) u_1(x) + B(x) u_2(x), \\
u_G'(x) &= A(x) u_1'(x) + B(x) u_2'(x),
\end{aligned}$$
just as above.
For the second equation to hold it must be the case that
$$A'(x)u_1(x) + B'(x)u_2(x) = 0.$$
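Explicitly, differentiating the first condition with the product rule gives

$$u_G'(x) = A(x) u_1'(x) + B(x) u_2'(x) + A'(x) u_1(x) + B'(x) u_2(x),$$

so the second condition holds precisely when the last two terms cancel, which is exactly the displayed constraint $A'(x)u_1(x) + B'(x)u_2(x) = 0$.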
I suggest you read this page. In particular:
Here’s the assumption. Simply to make the first derivative easier to deal with we are going to assume that whatever $u_1(t)$ and $u_2(t)$ are they will satisfy the following […]
Now, there is no reason ahead of time to believe that this can be done. However, we will see that this will work out. We simply make this assumption on the hope that it won’t cause problems down the road and to make the first derivative easier so don’t get excited about it.