We are given the equation

$$\frac{1}{f(x)} \cdot \frac{d\left(f(x)\right)}{dx} = x^3.$$

To solve it, “multiply by $dx$” and integrate:

$\frac{x^4}{4} + C = \ln \left( f(x) \right)$

But $dx$ is not a number. What does it mean when I multiply by $dx$? What am I actually doing, why does it work, and how can I solve the equation without multiplying by $dx$?

Second question:

Suppose we have the equation $$\frac{d^2f(x)}{dx^2}=(x^2-1)f(x)$$

Then for large $x$, we have $\frac{d^2f(x)}{dx^2}\approx x^2f(x)$, with the approximate solution $k\,e^{x^2/2}$.

Why is it then reasonable to suspect, or assume, that the solution to the original equation will be of the form $f(x)=e^{x^2/2} \cdot g(x)$, where $g(x)$ has a simpler form than $f(x)$? When does this not work?

Third question:

The method of replacing all occurrences of $f(x)$ and its derivatives by a power series $\sum a_nx^n$: for which equations does this work or lead to a simpler equation?

Do we lose any solutions this way?


**Re: 2nd question.** Our suspicion does not have to be reasonable. I think that some mathematical discoveries happened precisely because someone had a less-than-reasonable suspicion and was not too afraid or lazy to investigate. Noticing that the second derivative of $e^{x^2/2}$ is $(x^2+1)e^{x^2/2}$, which is not very different from our equation, we think that $f$ might have something to do with $e^{x^2/2}$. Or it might not. But we have to try something, right? So we try writing $f$ in the form $f(x)=e^{x^2/2}g(x)$. Plugging this into the equation, we get $g''+2xg'+2g=0$. This does not look much simpler than what we had. It seems that the ansatz $f(x)=e^{x^2/2}g(x)$ does not work. We have to keep thinking.

Thinking more, we notice (or should have noticed earlier) that the plus sign in $(e^{x^2/2})''=(x^2+1)e^{x^2/2}$ does not fit our equation as well as a minus sign would. So we put the minus sign in: $(e^{-x^2/2})''=(x^2-1)e^{-x^2/2}$. When we plug $f(x)=e^{-x^2/2}g(x)$ into the equation, the term with $g$ drops out: we get $g''-2xg'=0$. The guess worked! We already have $g=C$ as one of the basis solutions, and we find the second one by solving the first-order equation $h'-2xh=0$ for $h=g'$, etc.
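If you want to double-check the ansatz computation, here is a sketch using the SymPy computer algebra system (my own illustration, not part of the original answer): it plugs $f(x)=e^{-x^2/2}g(x)$ into $f''=(x^2-1)f$ and confirms that the $g$ term drops out, leaving $g''-2xg'=0$.

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')

# The ansatz from the answer: f(x) = e^{-x^2/2} g(x)
f = sp.exp(-x**2 / 2) * g(x)

# Residual of f'' = (x^2 - 1) f; the exponential factor cancels,
# leaving the reduced equation g'' - 2x g' = 0
residual = sp.simplify(sp.diff(f, x, 2) - (x**2 - 1) * f)
print(sp.simplify(residual * sp.exp(x**2 / 2)))
```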

**Re: 3rd question.** The power series method usually does not really tell you what a solution is; it's more of a way to get an approximation to the solution. It works best for linear equations with polynomial coefficients, but it can give some information even for seriously nonlinear equations (recent example). With this method you don't get solutions that are not represented by a power series, in particular those that are singular at $0$. For example, try solving $2xy'+y=0$ with the power series method.
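To see the lost singular solution concretely, one can let a computer algebra system solve $2xy'+y=0$ exactly (a SymPy sketch of my own, not from the original answer): the general solution is proportional to $1/\sqrt{x}$, which has no power series at $0$, so the power series method only finds $y=0$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve 2x y' + y = 0 exactly; the general solution is a constant
# times 1/sqrt(x), which is singular at x = 0 and therefore has no
# power series expansion around 0.
sol = sp.dsolve(sp.Eq(2*x*y(x).diff(x) + y(x), 0), y(x))
print(sol)

# Sanity check: the returned solution satisfies the equation
assert sp.simplify(2*x*sp.diff(sol.rhs, x) + sol.rhs) == 0
```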

**FIRST QUESTION**

This is answered by studying the fundamental theorem of calculus, which basically (in this context) says that if on an interval $(a,b)$

\begin{equation}

\frac{dF}{dx} = f

\end{equation}

then,

\begin{equation}

\int^{b}_{a} f\left(x\right) dx = F(b) - F(a)

\end{equation}

where the integral is the limit of a Riemann sum. Then you can describe the antiderivatives (indefinite integrals) of $f$ as $F\left(x\right) + c$, with $c$ an arbitrary constant, by considering the function

\begin{equation}

F\left(x\right) = \int^{x}_{a} f\left(t\right) dt

\end{equation}

where the lower limit $a$ is arbitrary (changing it only changes the constant $c$). It is from here, actually, that we get that multiplying-by-$dx$ rule.

For,

\begin{equation}

F = \int dF

\end{equation}

and hence

\begin{equation}

dF = f(x)dx \implies F = \int f(x) dx

\end{equation}

This is a little hand-waving; for actual proofs, you would have to study Riemann sums.
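To make the hand-waving concrete, here is a small numerical sketch (my own illustration, not part of the original answer; the function, interval, and step count are arbitrary choices) showing a left Riemann sum of $f(x)=x^3$ approaching $F(b)-F(a)$ with $F(x)=x^4/4$:

```python
# Left Riemann sums for f(x) = x^3 on [a, b]; by the fundamental
# theorem of calculus they should approach F(b) - F(a) with F(x) = x^4/4.

def riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

a, b = 1.0, 2.0
exact = b**4 / 4 - a**4 / 4                       # F(b) - F(a) = 3.75
approx = riemann_sum(lambda x: x**3, a, b, 100_000)
print(abs(approx - exact) < 1e-3)                 # True
```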

Now, after this, for specific examples such as yours, I think André Nicolas's answer is better, but I will still try to offer something similar.

Let $g\left(x\right) = \int x^{3} dx$ and $h\left(x\right) = \log\left|f\left(x\right)\right|$. Then,

\begin{equation}

\frac{dh\left(x\right)}{dx} = \frac{dg\left(x\right)}{dx}

\end{equation}

for all $x \in \left(a,b\right)$

and hence, we can say that

\begin{equation}

h\left(x\right)= g\left(x\right) + C

\end{equation}

**More Depth for Question 1:**

André noted that the idea of multiplying by $dx$ is "mathematically dubious", and in reality there are many situations where this technique will get you into trouble if you apply it haphazardly without understanding what's going on in the background. To provide that background, I'm going to go through the solution process of your first example with as much detail about each step as possible.

$$

{{1}\over{f(x)}}{{df}\over{dx}} = {{x^3}}

$$

Casting this as an integration problem:

$$

\int {{f'(x)} \over {f(x)}}dx = \int {{x^3}} dx

$$

Now here's where things change from simply thinking about "multiplying" by a differential. Instead of using that (as mentioned before) problematic thought process, we'll use integration by substitution, but for a very special case.

What this process really means geometrically is using the infinitesimal breakdown of distances along the function itself as the measure for our calculated area (instead of the distance along the real line).

Literally stating:

$$

du = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}\,dx

$$

which when plugged into the Riemann integral becomes:

$$

\int \frac{f'(x)}{f(x)}\,dx = \lim_{\max(x_i - x_{i-1}) \to 0}\sum_{i=1}^{n}\frac{f(x_i)-f(x_{i-1})}{f(x_i)\,(x_i-x_{i-1})}\,(x_i - x_{i-1})

$$

$$

= \lim_{\max(x_i - x_{i-1}) \to 0}\sum_{i=1}^{n}\frac{1}{f(x_i)}\,\bigl(f(x_i)-f(x_{i-1})\bigr)

$$

by taking the observation that our limit can be changed to:

$$

= \lim_{\max(f(x_i) - f(x_{i-1})) \to 0}\sum_{i=1}^{n}\frac{1}{f(x_i)}\,\bigl(f(x_i)-f(x_{i-1})\bigr)

$$

without altering the meaning (this step is actually non-trivial, but I won't get into that here), we have demonstrated that the integrals are equivalent.

This then results in the integral:

$$

\int {{1} \over {f(x)}}df(x) = \int {{x^3}} dx

$$

As a digression: often in calculus classes you're taught this using a variable as a stand-in for your function, such as $u = f(x)$, and then do something like $du = f'(x)dx$ to make things easier on your sensibilities for the time being (as using a function as a variable isn't usually as easy to grasp, and comes with other issues). In this case I'll follow both concepts through so you can see how they connect.

The substitution method with $u = f(x)$ and $du = f'(x)dx$ would then look like:

$$

\int {{f'(x)dx} \over {u}} = \int {{du} \over {u}} = \int {{x^3}} dx

$$

and from here we either get (if I combine the constants for simplicity):

$$

\ln\left(|f(x)|\right)+c = {{x^4} \over {4}}

$$

or with the substitution:

$$

\ln\left(|u|\right)+c = {{x^4} \over {4}}

$$

where we need to put $f(x)$ back in for $u$ (since they're equal) and get:

$$

\ln\left(|f(x)|\right)+c = {{x^4} \over {4}}

$$

either way.

From here it’s just cancellations and algebra until you get your final result.

There are some important things of note here for a general problem of the type:

$$

g(f(x))f'(x) = h(x)

$$

Since the integral is actually some $\int g(f(x))\, df(x)$, if the integral of the function $g$ fails to exist on the region of concern, the solution cannot be found using this method. Sometimes, by simply "cancelling" differentials or "multiplying through" by them, this gets obfuscated.
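As a concrete sanity check of the pattern $\int g(f(x))f'(x)\,dx$ above, here is a SymPy sketch (my own illustration; the choice $g(u)=\cos u$, $f(x)=x^2$ is arbitrary) confirming that the substituted and unsubstituted integrals agree:

```python
import sympy as sp

x, u = sp.symbols('x u')

# A concrete instance of g(f(x)) f'(x) with g(u) = cos(u), f(x) = x^2,
# so g(f(x)) f'(x) = cos(x^2) * 2x.
direct = sp.integrate(2*x*sp.cos(x**2), (x, 0, 1))   # integrate in x directly
substituted = sp.integrate(sp.cos(u), (u, 0, 1))     # after u = x^2, du = 2x dx

# Both routes give the same value, sin(1)
assert sp.simplify(direct - substituted) == 0
print(direct)
```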

First question only: The differential equation can be rewritten as

$$\frac{f'(x)}{f(x)}=x^3.$$

Find antiderivatives of both sides. To calculate $\int \frac{f'(x)}{f(x)}\,dx$, note that by the Chain Rule, $\frac{f'(x)}{f(x)}$ is the derivative of $\log|f(x)|$.

We conclude that $\log|f(x)|+C_1=\frac{x^4}{4}+C_2$ for some constants $C_1$ and $C_2$, which may be combined into one.

There was no multiplication by $dx$, yet we get the same result as by the admittedly dubious procedure of multiplying by $dx$. So at least the dubious procedure is harmless: it gives the correct result, and it can be thought of as a mnemonic device that helps us get to the right answer.
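One can also verify the end result directly (a SymPy sketch of my own, not part of the original answer): the candidate $f(x)=Ce^{x^4/4}$ read off from $\log|f(x)|=\frac{x^4}{4}+C$ does satisfy the original equation $f'/f=x^3$.

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)

# Candidate solution read off from log|f(x)| = x^4/4 + const
f = C * sp.exp(x**4 / 4)

# Check the original equation (1/f) * df/dx = x^3
assert sp.simplify(sp.diff(f, x) / f - x**3) == 0
print("f(x) = C*exp(x^4/4) satisfies f'/f = x^3")
```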

**Remark:** There is more to it than that, the process captures a certain useful physical intuition. And one can even make mathematical sense of naked $dx$.
