
I have an inhomogeneous ODE. The main issue here is that the variables are matrices, so it is a bit of matrix calculus. A solution in terms of $x$ would be highly appreciated. I guess we can use the same methods as for solving scalar ODEs, but we have to be careful because these are matrices.

$R'(x)-(C_1 +C_2 x) R(x) = R_1-C_1 R_0\, x $

where, except for $x$, everything is a $3\times 3$ matrix: $C_1, C_2, R'(x), R(x)$ are all matrices and $x$ is a scalar variable. $C_1, C_2, R_0$ are constant $3\times 3$ matrices, and $C_1$ and $C_2$ are skew-symmetric.


This answer splits into three parts. The first part reviews the easy things one can do for general $n\times n$ matrices. The second part introduces some extra structure that helps us attack the problem when $C_0$ and $C_1$ are $3\times 3$ skew-symmetric matrices. The third part demonstrates how to derive a closed-form expression for the “exponential matrix” in this special case.

**Part I – the easy stuff for general $n$**

Let $\Omega$ be either $\mathbb{R}^n$ or $\mathcal{M}_{n\times n}(\mathbb{R})$, the space of real $n \times n$ matrices.

Let $A : [0,T] \to \mathcal{M}_{n\times n}(\mathbb{R})$ and $B : [0,T] \to \Omega$ be two functions defined on $[0,T]$, sufficiently regular so that the following arguments make sense. Let $V_0 \in \Omega$. The problem at hand is to find the function $V : [0,T] \to \Omega$ which solves the inhomogeneous initial value problem for $t$ over $[0,T]$:

$$\frac{d}{dt} V(t) = A(t) V(t) + B(t)\quad\text{ subject to }\quad V(0) = V_0\tag{*1}$$

To simplify the problem, the standard approach is to convert it from an inhomogeneous problem to a homogeneous one.

Let $U : [0,T] \to \mathcal{M}_{n\times n}(\mathbb{R})$ be the solution for the corresponding homogeneous initial value problem:

$$\frac{d}{dt} U(t) = A(t) U(t)\quad\text{ subject to }\quad\quad U(0) = I_n\tag{*2}$$

We can rewrite $(*1)$ as

$$AV + B = V' = (U (U^{-1}V))' = U'(U^{-1}V) + U(U^{-1}V)' = AV + U(U^{-1}V)'$$

This leads to

$$( U^{-1}V )' = U^{-1}B
\quad\iff\quad
U^{-1}(t)V(t) - V_0 = \int_0^t U^{-1}(s)B(s)\, ds$$

and hence an expression for $V(t)$ in terms of $U(t)$:

$$\bbox[8pt,border:1px solid blue]{
V(t) = U(t)\left[ V_0 + \int_0^t U^{-1}(s)B(s)\,ds \right]
}\tag{*3}$$
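As a numerical sanity check of $(*3)$ (a sketch, assuming numpy/scipy are available; the particular $A(t)$, $B(t)$, $V_0$ below are my own arbitrary choices), one can integrate $P = U^{-1}$ (which satisfies $P' = -PA$) together with the integral term, then compare against a direct integration of $(*1)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (hypothetical) coefficients; any sufficiently regular A, B work.
A = lambda t: np.array([[0., -1., 0.], [1., 0., -t], [0., t, 0.]])
B = lambda t: np.array([[np.sin(t), 0., 0.], [0., t, 0.], [0., 0., 1.]])
V0 = np.eye(3)
T = 1.0

# Integrate P(t) = U(t)^{-1} (P' = -P A, P(0) = I) jointly with
# Q(t) = ∫_0^t U^{-1}(s) B(s) ds, then apply (*3): V = P^{-1} (V0 + Q).
def rhs(t, y):
    P = y[:9].reshape(3, 3)
    return np.concatenate([(-P @ A(t)).ravel(), (P @ B(t)).ravel()])

y0 = np.concatenate([np.eye(3).ravel(), np.zeros(9)])
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-11, atol=1e-11)
P_T = sol.y[:9, -1].reshape(3, 3)
Q_T = sol.y[9:, -1].reshape(3, 3)
V_formula = np.linalg.inv(P_T) @ (V0 + Q_T)

# Direct integration of (*1) for comparison.
sol2 = solve_ivp(lambda t, v: (A(t) @ v.reshape(3, 3) + B(t)).ravel(),
                 (0.0, T), V0.ravel(), rtol=1e-11, atol=1e-11)
V_direct = sol2.y[:, -1].reshape(3, 3)
err = np.max(np.abs(V_formula - V_direct))
```

The identity $\frac{d}{dt}U^{-1} = -U^{-1}U'U^{-1} = -U^{-1}A$ is what lets us avoid inverting $U(s)$ at every quadrature node.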

When $A(t)$ is sufficiently regular over $[0,T]$, one can show that $U(t)$ equals the following limit

$$U(t) =
\lim_{N\to\infty}
\left( I_n + \delta A(N\delta)\right)
\left( I_n + \delta A((N-1)\delta)\right)
\cdots
\left( I_n + \delta A(\delta)\right)
$$

where $\delta$ has been used as a shorthand for $\frac{t}{N}.$
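A quick numerical illustration of this product limit (a sketch, assuming numpy/scipy; the non-commuting skew-symmetric $C_0$, $C_1$ below are my own example, not from the post): the finite product is exactly the explicit Euler method for $(*2)$, so it should approach a high-accuracy integration of $(*2)$ as $N$ grows.

```python
import numpy as np
from scipy.integrate import solve_ivp

C0 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])   # a multiple of L_z
C1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])   # a multiple of L_x

def A(t):
    return C0 + C1 * t

# Reference: integrate U' = A(t) U, U(0) = I, to high accuracy.
sol = solve_ivp(lambda t, u: (A(t) @ u.reshape(3, 3)).ravel(),
                (0.0, 1.0), np.eye(3).ravel(), rtol=1e-12, atol=1e-12)
U_ref = sol.y[:, -1].reshape(3, 3)

# Product limit: (I + δA(Nδ)) ... (I + δA(δ)) with δ = t/N.
t, N = 1.0, 4000
delta = t / N
U_prod = np.eye(3)
for k in range(1, N + 1):
    U_prod = (np.eye(3) + delta * A(k * delta)) @ U_prod

err = np.max(np.abs(U_prod - U_ref))
```

The error decays like $O(1/N)$, consistent with the Euler interpretation of the product.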

For any $X, Y \in \mathcal{M}_{n\times n}(\mathbb{R})$, let $[X,Y] = X Y - Y X$ be their commutator.

In the special case when $[ A(x_1), A(x_2) ] = 0_n$ for every $x_1, x_2 \in [0,t]$, we can reorder the factors inside the above complicated limit in any order we like. As a result, the complicated limit reduces to something much simpler:

$$U(t) = \exp\left[\int_0^t A(s) ds\right]$$

For example, when $A(t) = C_0 + C_1 t$ where $C_0, C_1 \in \mathcal{M}_{n\times n}(\mathbb{R})$ satisfy $[C_0,C_1] = 0_n$, we have

$$
U(t) = e^{C_0 t + \frac12 C_1 t^2}
\quad\implies\quad V(t) =
e^{C_0 t + \frac12 C_1 t^2}\left[V_0 + \int_0^t
e^{-(C_0 s + \frac12 C_1 s^2)} B(s)\, ds \right]
$$
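A minimal sketch verifying the commuting case (numpy/scipy assumed; here $C_1$ is simply a scalar multiple of $C_0$, the easiest way to make $[C_0,C_1]=0$): compare $e^{C_0 t + \frac12 C_1 t^2}$ against a direct integration of $(*2)$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

C0 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])  # skew symmetric
C1 = 3.0 * C0                                               # commutes with C0

# Integrate U' = (C0 + C1 t) U, U(0) = I.
sol = solve_ivp(lambda t, u: ((C0 + C1 * t) @ u.reshape(3, 3)).ravel(),
                (0.0, 1.3), np.eye(3).ravel(), rtol=1e-11, atol=1e-11)
U_num = sol.y[:, -1].reshape(3, 3)

# Closed form valid because [C0, C1] = 0.
t_end = 1.3
U_closed = expm(C0 * t_end + 0.5 * C1 * t_end**2)
err = np.max(np.abs(U_num - U_closed))
```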

For general $n$ and $A(t)$ with $[A(x_1), A(x_2)] \ne 0_n$, we won't have such a nice formula. In most cases, the only way to get $U(t)$ is to integrate $(*2)$ numerically.

**Part II – extra tools available at $n = 3$**

However, we are mostly interested in the special case $n = 3$ and $A(t) = C_0 + C_1 t$ with $C_0$, $C_1$ skew symmetric. In this special case, we can do better than numerical integration. Let us first concentrate on what sort of extra structures are available at $n = 3$ when $A(t)$ is only assumed to be skew symmetric.

Let $\mathcal{A}_{3\times 3}(\mathbb{R}) \subset \mathcal{M}_{3\times 3}(\mathbb{R})$ be the space of real skew symmetric $3\times 3$ matrices. We have

$$X, Y \in \mathcal{A}_{3\times 3}(\mathbb{R})
\quad\implies\quad [X,Y] = XY-YX \in \mathcal{A}_{3\times 3}(\mathbb{R})$$

With respect to the commutator, $\mathcal{A}_{3\times 3}(\mathbb{R})$ is isomorphic to the Lie algebra $so(3,\mathbb{R})$, the algebra of generators of the rotation group $SO(3,\mathbb{R})$, i.e. the group of orthogonal $3\times 3$ matrices with determinant $1$.

When $n = 3$ and $A(t)$ is skew symmetric, what we are dealing with is nothing but rotations in $\mathbb{R}^3$. In fact, $U(t)$ in $(*2)$ now takes values in $SO(3,\mathbb{R}) \subset \mathcal{M}_{3\times 3}(\mathbb{R})$.

$\mathcal{A}_{3\times 3}(\mathbb{R})$ is spanned by the following three matrices:

$$
L_x = \begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix},\quad
L_y = \begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix}\quad\text{ and }\quad
L_z = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix}
$$

One can visualize $L_x$ as an infinitesimal rotation about the $x$-axis in the counter-clockwise direction. There are similar interpretations for $L_y$ and $L_z$.
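These generators can be written down and their $so(3)$ commutation relations $[L_x,L_y]=L_z$ (and cyclic permutations) checked in a few lines (a sketch, assuming numpy):

```python
import numpy as np

L_x = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L_y = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def comm(X, Y):
    # matrix commutator [X, Y]
    return X @ Y - Y @ X

# so(3) relations: [L_x, L_y] = L_z, [L_y, L_z] = L_x, [L_z, L_x] = L_y
ok = (np.allclose(comm(L_x, L_y), L_z)
      and np.allclose(comm(L_y, L_z), L_x)
      and np.allclose(comm(L_z, L_x), L_y))
```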

Let ${\bf i}, {\bf j}, {\bf k}$ be the canonical generators for the quaternions $\mathbb{H}$. Let

$$
\begin{align}
\mathbb{H}_u &= \{\; w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H} : w^2 + x^2 + y^2 + z^2 = 1 \;\}\\
\mathbb{H}_i &= \{\; w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H} : w = 0\;\}
\end{align}
$$

be the set of unit quaternions and pure imaginary quaternions respectively.

For convenience of indexing, let $e_1, e_2, e_3, L_1, L_2, L_3$ be aliases for ${\bf i}, {\bf j}, {\bf k}, L_x, L_y, L_z$ respectively.

Given any $X = (X_{ij})\in \mathcal{A}_{3\times 3}(\mathbb{R})$, expand it as $x_1 L_1 + x_2 L_2 + x_3 L_3$; we can then associate with it a vector $\vec{X} \in \mathbb{R}^3$ and a pure imaginary quaternion $\tilde{X} \in \mathbb{H}_i$:

$$X = x_1L_1 + x_2L_2 + x_3L_3
\quad\leftrightarrow\quad
\vec{X} = (x_1, x_2, x_3)
\quad\leftrightarrow\quad
\tilde{X} = x_1 e_1 + x_2 e_2 + x_3 e_3
$$

It is easy to check that $X$ and $\tilde{X}$ satisfy the set of identities

$$[ \tilde{X}, e_j ] = 2\sum_{i=1}^3 e_i X_{ij}, \quad j = 1,2,3.\tag{*4a}$$

In addition, we know there is a $2$-to-$1$ parametrization of $SO(3,\mathbb{R})$ by the unit quaternions $\mathbb{H}_u$${}^{\color{blue}{[1]}}$. More precisely, given any $O = ( O_{ij} ) \in SO(3,\mathbb{R})$, there exists a unique pair $\pm Q \in \mathbb{H}_u$ such that

$$Q e_j \bar{Q} = \sum_{i=1}^3 e_i O_{ij}, \quad j = 1,2,3.\tag{*4b}$$

In the other direction, if $Q = w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H}_u$ is given, one can recover the corresponding $O$ by the formula

$$O = \begin{bmatrix}
1-2(y^2+z^2) & 2(xy-wz) & 2(xz+wy) \\
2(xy+wz) & 1 - 2(x^2+z^2) & 2(yz-wx)\\
2(zx-wy) & 2(yz+wx) & 1 - 2(x^2+y^2)
\end{bmatrix}$$
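This recovery formula is easy to implement and test (a sketch, assuming numpy). For the unit quaternion $Q = \cos(\theta/2) + \sin(\theta/2)\,{\bf k}$ it should reproduce the rotation by $\theta$ about the $z$-axis:

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    """Rotation matrix O corresponding to the unit quaternion w + xi + yj + zk."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(z*x - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

theta = 0.7
O = quat_to_rot(np.cos(theta/2), 0.0, 0.0, np.sin(theta/2))
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])
ortho_err = np.max(np.abs(O @ O.T - np.eye(3)))
```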

Let $\omega(t) = \frac12\tilde{A}(t)$ be half of the quaternion corresponding to $A(t)$, and let $q(t)$ be the quaternion appearing in the parametrization of $U(t)$. Using $(*4a)$ and $(*4b)$, we find that $(*2)$ is equivalent to an initial value problem over the quaternions:

$$
\bbox[8pt,border:1px solid blue]{
\frac{d}{dt} q(t) = \omega(t) q(t)\quad\text{ subject to }\quad q(0) = 1}\tag{*5}
$$

On the surface, we have replaced the ODE on matrices by something equally horrible, an ODE on quaternions! In truth, this equivalence actually allows us to go beyond numerical integration in the original problem we want to solve.
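The equivalence between $(*2)$ and $(*5)$ can be checked numerically (a sketch, assuming numpy/scipy; the coefficient vectors for $C_0$, $C_1$ are my own arbitrary non-commuting choices): integrate both initial value problems and compare the rotation recovered from $q(t)$ with $U(t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

L_x = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L_y = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

# A(t) = C0 + C1 t, coefficients along (L_x, L_y, L_z); non-commuting example.
C0_vec = np.array([0.0, 1.2, 0.0])
C1_vec = np.array([0.8, 0.0, 0.0])
a_vec = lambda t: C0_vec + C1_vec * t
A = lambda t: a_vec(t)[0]*L_x + a_vec(t)[1]*L_y + a_vec(t)[2]*L_z

def quat_mul(p, q):
    """Hamilton product, quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([
        [1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
        [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
        [2*(z*x-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)],
    ])

# Matrix IVP (*2).
solU = solve_ivp(lambda t, u: (A(t) @ u.reshape(3, 3)).ravel(),
                 (0.0, 1.5), np.eye(3).ravel(), rtol=1e-11, atol=1e-11)
U = solU.y[:, -1].reshape(3, 3)

# Quaternion IVP (*5): q' = ω q with ω(t) = (1/2) Ã(t), a pure imaginary quaternion.
def q_rhs(t, q):
    omega = 0.5 * np.concatenate(([0.0], a_vec(t)))
    return quat_mul(omega, q)

solq = solve_ivp(q_rhs, (0.0, 1.5), np.array([1., 0., 0., 0.]),
                 rtol=1e-11, atol=1e-11)
q = solq.y[:, -1]
err = np.max(np.abs(quat_to_rot(q) - U))
```

Note that $|q(t)| = 1$ is preserved automatically because $\omega(t)$ is pure imaginary.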

**Part III – non-trivial closed forms**

From now on, let us assume $n = 3$, $A(t) = C_0 + C_1 t$ with $C_0, C_1 \in \mathcal{A}_{3\times 3}(\mathbb{R})$ and $[C_0,C_1] \ne 0_3$.

If one differentiates $(*5)$ once more, we get

$$q''(t) = (\omega(t) q(t))' = \omega'(t) q(t) + \omega(t) q'(t) = (\omega'(t) + \omega^2(t)) q(t)\tag{*6}$$

Notice

$$\begin{array}{rrcl}
&
\omega(t) \in \mathbb{H}_i & \implies &\omega^2(t) = -|\omega(t)|^2 \in \mathbb{R}\\
\text{ AND } &
A(t) = C_0 + C_1 t & \implies & \omega'(t) = \frac12\tilde{C}_1 \;\;\text{ is a constant }
\end{array}
$$

The imaginary part of the coefficient on the RHS of $(*6)$ is a constant. This allows us to express $q(t)$ using known special functions defined over $\mathbb{C}$.

As an illustration of how this is done, consider the case $C_0 + C_1 t = 2(\alpha t L_x + \beta L_y)$ with $\alpha > 0$, $\beta \ne 0$. We have

$$\begin{align}
(*5)\quad\implies\quad & q'(t) = (\alpha t {\bf i} + \beta{\bf j})q(t) \quad\implies\quad
q'(0) = \beta{\bf j}\\
\\
(*6)\quad\implies\quad &
q''(t) + (\beta^2 + \alpha^2 t^2 - \alpha {\bf i})q(t) = 0
\end{align}
$$

Let $\phi_0, \phi_1 : [0,T] \to \mathbb{C}$ be the two fundamental solutions of the ODE${}^{\color{blue}{[2]}}$:

$$\phi''(t)+(\beta^2+\alpha^2 t^2 -\alpha i)\phi(t)=0\quad\text{ subject to }\quad
\begin{cases}
\phi_0(0) = 1,&\phi'_0(0) = 0\\
\\
\phi_1(0) = 0,&\phi'_1(0) = 1
\end{cases}\tag{*7}$$

Let $\eta : \mathbb{C} \to \mathbb{H}$ be the embedding of $\mathbb{C}$ into $\mathbb{H}$ obtained by identifying $i \in \mathbb{C}$ with ${\bf i} \in \mathbb{H}$. It is clear that both $\eta \circ \phi_0$ and $\eta \circ \phi_1$ are solutions of equation $(*6)$. By comparing their values and derivatives with those of $q(t)$, we can express $q(t)$ in terms of them:

$$\begin{align}
q(t)
&= \eta\circ\phi_0(t) + \beta\, \eta\circ\phi_1(t)\,{\bf j}\\
&= \Re(\phi_0(t)) + \Im(\phi_0(t)){\bf i} + \beta\Re(\phi_1(t)){\bf j} + \beta\Im(\phi_1(t)){\bf k}
\end{align}\tag{*8}
$$

To solve $(*7)$, write $\phi(t)$ as $e^{i\alpha t^2/2} \psi(t)$; then $(*7)$ becomes

$$\left[\left(\frac{d}{dt} + i\alpha t\right)^2 + (\alpha^2 t^2 + \beta^2 - i\alpha)\right]\psi(t) = 0
\quad\iff\quad \psi''(t) + 2i\alpha t \psi'(t) + \beta^2\psi(t) = 0
$$

The ODE on the right is now solvable using the power series method. As a result, we find

$$\begin{align}
\phi_0(t) &= \;e^{i\alpha t^2/2}
{}_1F_1\left(\frac{\beta^2}{4i\alpha};\frac12;-i\alpha t^2\right)\\
\\
\phi_1(t) &= t\, e^{i\alpha t^2/2}
{}_1F_1\left(\frac{\beta^2}{4i\alpha}+\frac12;\frac32;-i\alpha t^2\right)
\end{align}
$$

where ${}_1F_1(a;b;z)$ is the confluent hypergeometric function.

Substituting this into $(*8)$ gives us a closed-form expression for $q(t)$ when $A(t) = 2(\alpha t L_x + \beta L_y)$:

$$\bbox[8pt,border:1px solid blue]{
q(t) = \;e^{{\bf i}\alpha t^2/2} \left[
{}_1F_1\left(\frac{\beta^2}{4{\bf i}\alpha};\frac12;-{\bf i}\alpha t^2\right)
+ {}_1F_1\left(\frac{\beta^2}{4{\bf i}\alpha}+\frac12;\frac32;-{\bf i}\alpha t^2\right) \beta t\,{\bf j} \right]
}$$
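These ${}_1F_1$ expressions can be checked numerically without assuming any special-function library (a pure-Python sketch; the values of $\alpha$, $\beta$ are arbitrary): evaluate ${}_1F_1$ by its power series, test the ODE residual of $(*7)$ by central finite differences, and confirm the unit-quaternion identity $|\phi_0|^2 + \beta^2|\phi_1|^2 = 1$.

```python
import cmath

def hyp1f1(a, b, z, terms=80):
    """Kummer's 1F1(a;b;z) by direct power series (fine for moderate |z|)."""
    s = term = 1.0 + 0j
    for k in range(terms):
        term *= (a + k) * z / ((b + k) * (k + 1))
        s += term
    return s

alpha, beta = 0.9, 0.7   # arbitrary α > 0, β ≠ 0

def phi0(t):
    a = beta**2 / (4j * alpha)
    return cmath.exp(1j * alpha * t**2 / 2) * hyp1f1(a, 0.5, -1j * alpha * t**2)

def phi1(t):
    a = beta**2 / (4j * alpha) + 0.5
    return t * cmath.exp(1j * alpha * t**2 / 2) * hyp1f1(a, 1.5, -1j * alpha * t**2)

def residual(phi, t, h=1e-4):
    # |φ'' + (β² + α²t² - iα) φ| via a central second difference
    d2 = (phi(t + h) - 2 * phi(t) + phi(t - h)) / h**2
    return abs(d2 + (beta**2 + alpha**2 * t**2 - 1j * alpha) * phi(t))

t = 1.1
res0, res1 = residual(phi0, t), residual(phi1, t)
# |q(t)|² = |φ0|² + β²|φ1|² should be 1 since q is a unit quaternion.
norm2 = abs(phi0(t))**2 + beta**2 * abs(phi1(t))**2
```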

For general $C_0, C_1 \in \mathcal{A}_{3\times 3}(\mathbb{R})$, there is always a rotation which rotates $\vec{C}_1$ into the direction of $\vec{L}_x$. However, there is no guarantee one can make $\vec{C}_0$ point along $\vec{L}_y$ at the same time.

When that happens, let $s$ be the number that minimizes $|\vec{C}_0 + \vec{C}_1 s|$ and rewrite $A(t)$ as $(C_0 + C_1 s) + C_1 (t - s)$. It is clear the vector corresponding to $C_0 + C_1 s$ is perpendicular to that of $C_1$. This means there exists an orthogonal matrix $O \in SO(3,\mathbb{R})$ such that

$$OA(t)O^{T} = 2(\alpha (t- s)L_x + \beta L_y)$$

for some $\alpha > 0, \beta \ne 0$.

Let $U_0(t)$ be the solution of the initial value problem

$$\frac{d}{dt} U_0(t) = 2(\alpha t L_x + \beta L_y) U_0(t)\quad\text{ subject to }\quad
U_0(0) = I_3$$

The solution $U(t)$ we want for the general $C_0, C_1$ will be given by the formula

$$U(t) = O^T U_0(t-s)U_0^{-1}(-s) O$$
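This whole reduction can be verified end to end numerically (a sketch, assuming numpy/scipy; the coefficient vectors for $C_0$, $C_1$ are my own generic choices): compute $s$, $\alpha$, $\beta$ and $O$ as described, solve the canonical problem for $U_0$, and compare $O^T U_0(t-s)U_0^{-1}(-s)O$ with the directly integrated $U(t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

L_x = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L_y = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

def skew(v):
    # (v1, v2, v3) -> v1 L_x + v2 L_y + v3 L_z
    return v[0] * L_x + v[1] * L_y + v[2] * L_z

c0 = np.array([0.9, -0.4, 0.7])    # vector of C0 (generic)
c1 = np.array([0.3, 1.1, -0.5])    # vector of C1, not parallel to c0

s = -np.dot(c0, c1) / np.dot(c1, c1)   # minimizes |c0 + c1 s|
p = c0 + c1 * s                        # perpendicular to c1
alpha = np.linalg.norm(c1) / 2.0       # so that O A(t) O^T = 2(α(t-s)L_x + βL_y)
beta = np.linalg.norm(p) / 2.0

e1 = c1 / np.linalg.norm(c1)
e2 = p / np.linalg.norm(p)
O = np.vstack([e1, e2, np.cross(e1, e2)])   # rotates c1 -> x-axis, p -> y-axis

def solve_mat(A_of_t, t_end):
    # Integrate M'(t) = A(t) M(t), M(0) = I, up to t_end (possibly negative).
    sol = solve_ivp(lambda t, m: (A_of_t(t) @ m.reshape(3, 3)).ravel(),
                    (0.0, t_end), np.eye(3).ravel(), rtol=1e-11, atol=1e-11)
    return sol.y[:, -1].reshape(3, 3)

T = 1.2
U_direct = solve_mat(lambda t: skew(c0 + c1 * t), T)

A0 = lambda t: 2.0 * (alpha * t * L_x + beta * L_y)
U_formula = O.T @ solve_mat(A0, T - s) @ np.linalg.inv(solve_mat(A0, -s)) @ O
err = np.max(np.abs(U_formula - U_direct))
```

Here $U_0$ is obtained numerically only for the purpose of the check; in the closed-form approach it would come from $q(t)$ above.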

**Notes**

- $\color{blue}{[1]}$ – see the Wikipedia page on quaternions and spatial rotation for related material.
- $\color{blue}{[2]}$ – the ODE is a scaled version of what is called Weber's equation,
$$\frac{d^2f}{dz^2} + \left( a \pm \frac14 z^2 \right)f = 0$$
Solutions of this type of ODE are known as parabolic cylinder functions or Weber–Hermite functions; see the corresponding Wikipedia page for more info.

My answer to this will overlap quite a bit with my answer to your earlier question, but I'll try to elaborate on the more specifically ODE- and matrix-related aspects. My approach will look a bit different from @RRL's, but the content is essentially the same (and I'm choosing my notation to emphasize that).

First, some notation and a review of the scalar case. Note that we may write $$R'(x)-A(x) R(x) = G(x)$$ where we have defined $A(x) = C_1+C_2 x$ and $G(x) = R_1 - C_1 R_0\, x$. In the scalar case, we could then straightforwardly observe that the LHS has an integrating factor $\Phi(x)=\exp\left[-\int_0^{x} A(x')\, dx'\right]$. (In your case, $\Phi(x)=e^{-C_1 x-C_2 x^2/2}$; I have taken the lower bound to match the boundary condition at $x=0$ from the original problem.) What that means is we may write

$$ \frac{d}{dx} \Phi(x) R(x) = \Phi(x)R'(x)+\Phi'(x) R(x) = \Phi(x)\left(R'(x)-A(x)R(x)\right)=\Phi(x) G(x)\hspace{.4cm}\text{(1)}$$ where we have exploited the fact that $\Phi'(x) = -\Phi(x) A(x)$, which follows from the fundamental theorem of calculus and the chain rule. We can then integrate both sides from $0$ to $x$ and rearrange to obtain the formal solution

$$R(x)=\Phi^{-1}(x)\left[R(0) + \int_0^{x} \Phi(x')G(x')\,dx'\right] =\Phi^{-1}(x)R(0) + \int_0^{x} \Phi(x'-x)G(x')\,dx'\hspace{.4cm}\text{(2)}$$ where in the second equality we have taken advantage of $\Phi(x)$ being the exponential of a definite integral. So we've converted the problem of finding $R(x)$ to computing a particular integral.

The question is, what changes when we go to matrices in the coefficients? An obvious issue is that $\Phi(x)$ will now be the exponential of a matrix-valued function. This isn't a knock-down problem, though: define the matrix exponential as the infinite sum $\exp(A) =\sum_{k=0}^\infty A^k/k!$, and observe that one still has the property $$\frac{d}{dx} \exp{(x A)}=\frac{d}{dx}\sum_{k=0}^\infty A^k \frac{x^k}{k!}=\sum_{k=1}^\infty A^k \frac{x^{k-1}}{(k-1)!} = A\cdot \exp(x A).$$ As a consequence, the $\Phi(x)$ defined earlier has the same derivative, and so equation (1) is still valid, provided $A(x)$ commutes with its integral $\int_0^x A(x')\,dx'$ (automatic in the scalar case, and true here exactly when $[C_1,C_2]=0$). The same moreover holds for equation (2), with $\Phi(x)^{-1}$ understood to be the inverse matrix (the second equality is now valid only if $A(x)$ commutes with $A(x')$).
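To see why the commutation caveat matters, here is a small numerical experiment (my own sketch, assuming numpy/scipy): the identity $\Phi'(x) = -\Phi(x)A(x)$ for $\Phi(x)=\exp[-\int_0^x A]$ holds when $[C_1,C_2]=0$ but fails badly for a non-commuting pair.

```python
import numpy as np
from scipy.linalg import expm

def identity_error(C1, C2, x=0.8, h=1e-6):
    """Max-abs error in Φ'(x) = -Φ(x) A(x), with A(t) = C1 + C2 t."""
    intA = lambda t: C1 * t + C2 * t**2 / 2        # ∫_0^t A
    Phi = lambda t: expm(-intA(t))
    dPhi = (Phi(x + h) - Phi(x - h)) / (2 * h)     # central difference
    return np.max(np.abs(dPhi + Phi(x) @ (C1 + C2 * x)))

Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])

err_commuting = identity_error(Lz, 2.0 * Lz)   # [C1, C2] = 0: identity holds
err_noncommuting = identity_error(Lz, Lx)      # [C1, C2] ≠ 0: identity fails
```

This is exactly the obstruction that the first answer works around with the quaternion machinery.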
