
I have an inhomogeneous ODE. The main issue here is that the variables are matrices, so it involves a bit of matrix calculus. A solution in terms of $x$ would be highly appreciated. I guess we can use the same methods as for scalar ODEs, but we have to be careful because these are matrices:

$R'(x)-(C_1 +C_2 x) R(x) = R_1-C_1 R_0\, x $

where $x$ is a scalar variable and everything else is a $3\times 3$ matrix: $C_1, C_2, R_0$ are constant $3\times 3$ matrices, as are $R(x)$ and $R'(x)$. $C_1$ and $C_2$ are skew symmetric.


This answer is split into three parts. The first part reviews the easy things one can do for general $n\times n$ matrices. The second part introduces some extra structure that helps us attack the problem when $C_0$ and $C_1$ are $3\times 3$ skew symmetric matrices. The third part demonstrates how to derive a closed form expression for the "exponential matrix" in this special case.

**Part I – the easy stuff for general $n$**

Let $\Omega$ be either $\mathbb{R}^n$ or $\mathcal{M}_{n\times n}(\mathbb{R})$, the space of real $n \times n$ matrices.

Let $A : [0,T] \to \mathcal{M}_{n\times n}(\mathbb{R})$ and $B : [0,T] \to \Omega$ be two functions defined on $[0,T]$, sufficiently regular that the following arguments make sense. Let $V_0 \in \Omega$. The problem at hand is to find the function $V : [0,T] \to \Omega$ which solves the inhomogeneous initial value problem for $t \in [0,T]$:

$$\frac{d}{dt} V(t) = A(t) V(t) + B(t)\quad\text{ subject to }\quad V(0) = V_0\tag{*1}$$

To simplify the problem, the standard approach is to convert it from an inhomogeneous problem to a homogeneous one.

Let $U : [0,T] \to \mathcal{M}_{n\times n}(\mathbb{R})$ be the solution for the corresponding homogeneous initial value problem:

$$\frac{d}{dt} U(t) = A(t) U(t)\quad\text{ subject to }\quad\quad U(0) = I_n\tag{*2}$$

We can rewrite $(*1)$ as

$$AV + B = V' = (U (U^{-1}V))' = U'(U^{-1}V) + U(U^{-1}V)' = AV + U(U^{-1}V)'$$

This leads to

$$( U^{-1}V )' = U^{-1}B
\quad\iff\quad
U^{-1}(t)V(t) - V_0 = \int_0^t U^{-1}(s)B(s)\, ds$$

and hence an expression for $V(t)$ in terms of $U(t)$:

$$\bbox[8pt,border:1px solid blue]{
V(t) = U(t)\left[ V_0 + \int_0^t U^{-1}(s)B(s)\,ds \right]
}\tag{*3}$$
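As a minimal numerical sanity check of $(*3)$, assuming numpy/scipy: the matrices below are my own random sample data, and for simplicity $A$ and $B$ are taken *constant*, so that $U(t) = e^{tA}$ and the integral in $(*3)$ has the closed form $A^{-1}(I - e^{-tA})B$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Sample data (my choice): constant A and B, random initial matrix V0.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
V0 = rng.standard_normal((3, 3))

# Integrate (*1) directly: dV/dt = A V + B, V(0) = V0.
T = 1.0
sol = solve_ivp(lambda t, v: (A @ v.reshape(3, 3) + B).ravel(),
                (0.0, T), V0.ravel(), rtol=1e-10, atol=1e-12)
V_numeric = sol.y[:, -1].reshape(3, 3)

# Evaluate the boxed formula (*3); here int_0^T expm(-sA) B ds
# equals solve(A, (I - expm(-T A)) @ B) since A is constant.
integral = np.linalg.solve(A, (np.eye(3) - expm(-T * A)) @ B)
V_formula = expm(T * A) @ (V0 + integral)

assert np.allclose(V_numeric, V_formula, atol=1e-6)
```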

When $A(t)$ is sufficiently regular over $[0,T]$, one can show that $U(t)$ equals the following limit

$$U(t) =
\lim_{N\to\infty}
\left( I_n + \delta A(N\delta)\right)
\left( I_n + \delta A((N-1)\delta)\right)
\cdots
\left( I_n + \delta A(\delta)\right)
$$

where $\delta$ is shorthand for $\frac{t}{N}.$
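The ordered-product limit above can be checked numerically; this is a sketch assuming numpy/scipy, with a sample non-commuting $A(t) = C_0 + C_1 t$ of my own choosing.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sample non-commuting coefficients (my choice): C0 ~ L_z, C1 ~ L_x.
C0 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
C1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
A = lambda t: C0 + C1 * t

t, N = 1.0, 20000
delta = t / N
U_prod = np.eye(3)
# Ordered product: factors with larger time arguments multiply on the left.
for k in range(1, N + 1):
    U_prod = (np.eye(3) + delta * A(k * delta)) @ U_prod

# Reference solution of (*2) by direct numerical integration.
sol = solve_ivp(lambda s, u: (A(s) @ u.reshape(3, 3)).ravel(),
                (0.0, t), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
U_ref = sol.y[:, -1].reshape(3, 3)

assert np.allclose(U_prod, U_ref, atol=1e-3)
```

The product approximation converges only at first order in $\delta$, hence the loose tolerance.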

For any $X, Y \in \mathcal{M}_{n\times n}(\mathbb{R})$, let $[X,Y] = X Y – Y X$ be their commutator.

In the special case when $[ A(x_1), A(x_2) ] = 0_n$ for every $x_1, x_2 \in [0,t]$, we can reorder the factors inside the above complicated limit any way we like. As a result, the limit reduces to something much simpler:

$$U(t) = \exp\left[\int_0^t A(s) ds\right]$$

For example, when $A(t) = C_0 + C_1 t$ where $C_0, C_1 \in \mathcal{M}_{n\times n}(\mathbb{R})$ satisfy $[C_0,C_1] = 0_n$, we have

$$
U(t) = e^{C_0 t + \frac12 C_1 t^2}
\quad\implies\quad V(t) =
e^{C_0 t + \frac12 C_1 t^2}\left[V_0 + \int_0^t
e^{-(C_0 s + \frac12 C_1 s^2)} B(s)\, ds \right]
$$
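A quick check of the commuting-case formula, assuming numpy/scipy; the coefficients below are my own sample choice, both multiples of the same skew matrix so that $[C_0, C_1] = 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Commuting sample (my choice): C0 and C1 are multiples of the same
# skew matrix, so U(t) = expm(C0 t + C1 t^2 / 2) should solve (*2).
L = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
C0, C1 = 2.0 * L, 0.5 * L

t = 1.3
U_closed = expm(C0 * t + 0.5 * C1 * t**2)

sol = solve_ivp(lambda s, u: ((C0 + C1 * s) @ u.reshape(3, 3)).ravel(),
                (0.0, t), np.eye(3).ravel(), rtol=1e-11, atol=1e-12)
U_ref = sol.y[:, -1].reshape(3, 3)

assert np.allclose(U_closed, U_ref, atol=1e-6)
```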

For general $n$ and $A(t)$ with $[A(x_1), A(x_2)] \ne 0_n$, we won't have such a nice formula. In most cases, the only way to get $U(t)$ is to integrate $(*2)$ numerically.

**Part II – extra tools available at $n = 3$**

However, we are mostly interested in the special case $n = 3$ and $A(t) = C_0 + C_1 t$ with $C_0$, $C_1$ skew symmetric. In this special case, we can do better than numerical integration. Let us first concentrate on what sort of extra structure is available at $n = 3$ when $A(t)$ is only assumed to be skew symmetric.

Let $\mathcal{A}_{3\times 3}(\mathbb{R}) \subset \mathcal{M}_{3\times 3}(\mathbb{R})$ be the space of real skew symmetric $3\times 3$ matrices. We have

$$X, Y \in \mathcal{A}_{3\times 3}(\mathbb{R})
\quad\implies\quad [X,Y] = XY-YX \in \mathcal{A}_{3\times 3}(\mathbb{R})$$

With respect to the commutator, $\mathcal{A}_{3\times 3}(\mathbb{R})$ is isomorphic to the Lie algebra $so(3,\mathbb{R})$, the algebra of generators of the rotation group $SO(3,\mathbb{R})$, the group of orthogonal $3\times 3$ matrices with determinant $1$.

When $n = 3$ and $A(t)$ is skew symmetric, we are dealing with nothing but rotations in $\mathbb{R}^3$. In fact, $U(t)$ in $(*2)$ now takes values in $SO(3,\mathbb{R}) \subset \mathcal{M}_{3\times 3}(\mathbb{R})$.

$\mathcal{A}_{3\times 3}(\mathbb{R})$ is spanned by the following three matrices:

$$
L_x = \begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix},\quad
L_y = \begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix}\quad\text{ and }\quad
L_z = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix}
$$

One can visualize $L_x$ as an infinitesimal rotation about the $x$-axis in the counter-clockwise direction. There are similar interpretations for $L_y$ and $L_z$.
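These generators satisfy the usual $so(3)$ commutation relations $[L_x, L_y] = L_z$ and cyclic permutations, which is easy to confirm directly (a small numpy check):

```python
import numpy as np

# The three generators of A_{3x3}(R) written out explicitly.
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Lz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

# Commutator [X, Y] = XY - YX; check the so(3) relations.
comm = lambda X, Y: X @ Y - Y @ X
assert np.allclose(comm(Lx, Ly), Lz)
assert np.allclose(comm(Ly, Lz), Lx)
assert np.allclose(comm(Lz, Lx), Ly)
```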

Let ${\bf i}, {\bf j}, {\bf k}$ be the canonical generators for the quaternions $\mathbb{H}$. Let

$$
\begin{align}
\mathbb{H}_u &= \{\; w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H} : w^2 + x^2 + y^2 + z^2 = 1 \;\}\\
\mathbb{H}_i &= \{\; w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H} : w = 0\;\}
\end{align}
$$

be the set of unit quaternions and pure imaginary quaternions respectively.

For convenience of indexing, let $e_1, e_2, e_3, L_1, L_2, L_3$ be aliases for ${\bf i}, {\bf j}, {\bf k}, L_x, L_y, L_z$ respectively.

Given any $X = (X_{ij})\in \mathcal{A}_{3\times 3}(\mathbb{R})$, expand it as $x_1 L_1 + x_2 L_2 + x_3 L_3$; we can then associate with it a vector $\vec{X} \in \mathbb{R}^3$ and a pure imaginary quaternion $\tilde{X} \in \mathbb{H}_i$:

$$X = x_1L_1 + x_2L_2 + x_3L_3
\quad\leftrightarrow\quad
\vec{X} = (x_1, x_2, x_3)
\quad\leftrightarrow\quad
\tilde{X} = x_1 e_1 + x_2 e_2 + x_3 e_3
$$

It is easy to check that $X$ and $\tilde{X}$ satisfy the set of identities

$$[ \tilde{X}, e_j ] = 2\sum_{i=1}^3 e_i X_{ij}, \quad j = 1,2,3.\tag{*4a}$$

In addition, we know there is a $2$-to-$1$ parametrization of $SO(3,\mathbb{R})$ by the unit quaternions $\mathbb{H}_u$${}^{\color{blue}{[1]}}$. More precisely, given any $O = ( O_{ij} ) \in SO(3,\mathbb{R})$, there exists a unique pair $\pm Q \in \mathbb{H}_u$ such that

$$Q e_j \bar{Q} = \sum_{i=1}^3 e_i O_{ij}, \quad j = 1,2,3.\tag{*4b}$$

In the other direction, if $Q = w + x{\bf i} + y{\bf j} + z{\bf k} \in \mathbb{H}_u$ is given, one can recover the corresponding $O$ by the formula

$$O = \begin{bmatrix}
1-2(y^2+z^2) & 2(xy-wz) & 2(xz+wy) \\
2(xy+wz) & 1-2(x^2+z^2) & 2(yz-wx)\\
2(zx-wy) & 2(yz+wx) & 1-2(x^2+y^2)
\end{bmatrix}$$
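The recovery formula can be sanity-checked numerically: for a random unit quaternion, the resulting $O$ should be orthogonal with determinant $1$, and $Q$ and $-Q$ should give the same $O$. A small sketch assuming numpy:

```python
import numpy as np

# The recovery formula O(Q) from the text, with q = (w, x, y, z).
def quat_to_matrix(w, x, y, z):
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(z*x - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.random.default_rng(1).standard_normal(4)
q /= np.linalg.norm(q)                        # normalize to a unit quaternion
O = quat_to_matrix(*q)

assert np.allclose(O @ O.T, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(O), 1.0)      # determinant 1
assert np.allclose(O, quat_to_matrix(*(-q)))  # 2-to-1: Q and -Q agree
```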

Let $\omega(t) = \frac12\tilde{A}(t)$ be half of the quaternion corresponding to $A(t)$, and let $q(t)$ be the quaternion appearing in this parametrization of $U(t)$. Using $(*4a)$ and $(*4b)$, we find that $(*2)$ is equivalent to an initial value problem over the quaternions:

$$
\bbox[8pt,border:1px solid blue]{
\frac{d}{dt} q(t) = \omega(t) q(t)\quad\text{ subject to }\quad q(0) = 1}\tag{*5}
$$

On the surface, we have replaced the ODE on matrices by something equally horrible, an ODE on quaternions! In truth, this equivalence actually allows us to go beyond numerical integration in the original problem we want to solve.
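The equivalence $(*2) \leftrightarrow (*5)$ can itself be verified numerically: integrate both and compare $U(t)$ against the rotation matrix recovered from $q(t)$. A sketch assuming numpy/scipy, with $A(t) = 2(\alpha t L_x + \beta L_y)$ and sample values $\alpha = 0.7$, $\beta = 0.4$ of my own choosing; quaternions are stored as $(w, x, y, z)$ arrays.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hamilton product of two quaternions p, q given as (w, x, y, z).
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
A = lambda t: 2.0 * (0.7 * t * Lx + 0.4 * Ly)       # sample A(t)
omega = lambda t: np.array([0., 0.7 * t, 0.4, 0.])  # omega = A~ / 2

# Integrate the matrix IVP (*2) ...
T = 1.0
solU = solve_ivp(lambda t, u: (A(t) @ u.reshape(3, 3)).ravel(),
                 (0., T), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
U = solU.y[:, -1].reshape(3, 3)

# ... and the quaternion IVP (*5), then recover O from q via (*4b).
solq = solve_ivp(lambda t, q: qmul(omega(t), q),
                 (0., T), np.array([1., 0., 0., 0.]), rtol=1e-10, atol=1e-12)
w, x, y, z = solq.y[:, -1]
O = np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
              [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
              [2*(z*x-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

assert np.allclose(U, O, atol=1e-6)
```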

**Part III – non-trivial closed forms**

From now on, let us assume $n = 3$ and $A(t) = C_0 + C_1 t$ with $C_0, C_1 \in \mathcal{A}_{3\times 3}(\mathbb{R})$ and $[C_0,C_1] \ne 0_3$.

If one differentiates $(*5)$ once more, one gets

$$q''(t) = (\omega(t) q(t))' = \omega'(t) q(t) + \omega(t) q'(t) = (\omega'(t) + \omega^2(t)) q(t)\tag{*6}$$

Notice

$$\begin{array}{rrcl}
& \omega(t) \in \mathbb{H}_i & \implies &\omega^2(t) = -|\omega(t)|^2 \in \mathbb{R}\\
\text{ AND } & A(t) = C_0 + C_1 t & \implies & \omega'(t) = \frac12\tilde{C}_1 \;\;\text{ is a constant }
\end{array}
$$

The imaginary part of the coefficient on the RHS of $(*6)$ is a constant. This allows us to express $q(t)$ using known special functions defined over $\mathbb{C}$.

As an illustration of how this is done, consider the case $C_0 + C_1 t = 2(\alpha t L_x + \beta L_y)$ with $\alpha > 0$, $\beta \ne 0$. We have

$$\begin{align}
(*5)\quad\implies\quad & q'(t) = (\alpha t {\bf i} + \beta{\bf j})q(t) \quad\implies\quad
q'(0) = \beta{\bf j}\\
(*6)\quad\implies\quad &
q''(t) + (\beta^2 + \alpha^2 t^2 - \alpha {\bf i})q(t) = 0
\end{align}
$$

Let $\phi_0, \phi_1 : [0,T] \to \mathbb{C}$ be the two fundamental solutions of the ODE${}^{\color{blue}{[2]}}$:

$$\phi''(t)+(\beta^2+\alpha^2 t^2 -\alpha i)\phi(t)=0\quad\text{ subject to }\quad
\begin{cases}
\phi_0(0) = 1,&\phi'_0(0) = 0\\
\phi_1(0) = 0,&\phi'_1(0) = 1
\end{cases}\tag{*7}$$

Let $\eta : \mathbb{C} \to \mathbb{H}$ be the embedding of $\mathbb{C}$ into $\mathbb{H}$ obtained by identifying $i \in \mathbb{C}$ with ${\bf i} \in \mathbb{H}$. It is clear both $\eta \circ \phi_0$ and $\eta \circ \phi_1$ are solutions of equation $(*6)$. By comparing their values and derivatives with those of $q(t)$, we can express $q(t)$ in terms of them:

$$\begin{align}
q(t)
&= \eta\circ\phi_0(t) + \beta\, \eta\circ\phi_1(t)\,{\bf j}\\
&= \Re(\phi_0(t)) + \Im(\phi_0(t)){\bf i} + \beta\Re(\phi_1(t)){\bf j} + \beta\Im(\phi_1(t)){\bf k}
\end{align}\tag{*8}
$$

To solve $(*7)$, write $\phi(t)$ as $e^{i\alpha t^2/2} \psi(t)$; then $(*7)$ becomes

$$\left[\left(\frac{d}{dt} + i\alpha t\right)^2 + (\alpha^2 t^2 + \beta^2 - i\alpha)\right]\psi(t) = 0
\quad\iff\quad \psi''(t) + 2i\alpha t \psi'(t) + \beta^2\psi(t) = 0
$$

The latter ODE is now solvable using the power series method. As a result, we find

$$\begin{align}
\phi_0(t) &= \;e^{i\alpha t^2/2}
{}_1F_1\left(\frac{\beta^2}{4i\alpha};\frac12;-i\alpha t^2\right)\\
\phi_1(t) &= t e^{i\alpha t^2/2}
{}_1F_1\left(\frac{\beta^2}{4i\alpha}+\frac12;\frac32;-i\alpha t^2\right)
\end{align}
$$

where ${}_1F_1(a;b;z)$ is the confluent hypergeometric function.

Substituting this into $(*8)$ gives us a closed-form expression for $q(t)$ when $A(t) = 2(\alpha t L_x + \beta L_y)$:

$$\bbox[8pt,border:1px solid blue]{
q(t) = \;e^{{\bf i}\alpha t^2/2} \left[
{}_1F_1\left(\frac{\beta^2}{4{\bf i}\alpha};\frac12;-{\bf i}\alpha t^2\right)
+ {}_1F_1\left(\frac{\beta^2}{4{\bf i}\alpha}+\frac12;\frac32;-{\bf i}\alpha t^2\right) \beta t{\bf j} \right]
}$$
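The boxed closed form can be checked against a direct numerical integration of $(*5)$. A sketch assuming numpy/scipy; $\alpha = 0.7$, $\beta = 0.4$, $t = 1$ are my own sample values, and ${}_1F_1$ is evaluated via its power series (which converges quickly for this small $|z|$).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated power series for 1F1(a; b; z) = sum_k (a)_k / ((b)_k k!) z^k.
def hyp1f1(a, b, z, terms=80):
    s = term = 1.0 + 0j
    for k in range(terms):
        term *= (a + k) / (b + k) * z / (k + 1)
        s += term
    return s

alpha, beta, t = 0.7, 0.4, 1.0
a = beta**2 / (4j * alpha)
z = -1j * alpha * t**2
pre = np.exp(1j * alpha * t**2 / 2)
phi0 = pre * hyp1f1(a, 0.5, z)
f1 = pre * hyp1f1(a + 0.5, 1.5, z)
# q = phi0 + (beta t) f1 j, stored as (w, x, y, z) components.
q_closed = np.array([phi0.real, phi0.imag,
                     beta * t * f1.real, beta * t * f1.imag])

# Hamilton product for the right-hand side of (*5).
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

omega = lambda s: np.array([0., alpha * s, beta, 0.])
sol = solve_ivp(lambda s, q: qmul(omega(s), q), (0., t),
                np.array([1., 0., 0., 0.]), rtol=1e-11, atol=1e-13)
q_num = sol.y[:, -1]

assert np.allclose(q_closed, q_num, atol=1e-6)
```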

For general $C_0, C_1 \in \mathcal{A}_{3\times 3}(\mathbb{R})$, there is always a rotation which rotates $\vec{C}_1$ into the direction of $\vec{L}_x$. However, there is no guarantee one can make $\vec{C}_0$ point along $\vec{L}_y$ at the same time.

To deal with this, let $s$ be the number that minimizes $|\vec{C}_0 + \vec{C}_1 s|$ and rewrite $A(t)$ as $(C_0 + C_1 s) + C_1 (t - s)$. It is clear the vector corresponding to $C_0 + C_1 s$ is perpendicular to that of $C_1$. This means there exists an orthogonal matrix $O \in SO(3,\mathbb{R})$ such that

$$OA(t)O^{T} = 2(\alpha (t- s)L_x + \beta L_y)$$

for some $\alpha > 0, \beta \ne 0$.

Let $U_0(t)$ be the solution of the initial value problem

$$\frac{d}{dt} U_0(t) = 2(\alpha t L_x + \beta L_y) U_0(t)\quad\text{ subject to }\quad U_0(0) = I_3$$

The solution $U(t)$ we want for the general $C_0, C_1$ will be given by the formula

$$U(t) = O^T U_0(t-s)U_0^{-1}(-s) O$$

**Notes**

- $\color{blue}{[1]}$ See the wiki page on Quaternions and spatial rotation for related material.
- $\color{blue}{[2]}$ The ODE is a scaled version of what is called Weber's equation,
$$\frac{d^2f}{dz^2} + \left( a \pm \frac14 z^2 \right)f = 0$$
Solutions of this type of ODE are known as parabolic cylinder functions or Weber-Hermite functions; see the wiki page for more info.

My answer to this will overlap quite a bit with my answer to your earlier question, but I'll try to elaborate on the more specifically ODE and matrix aspects. My approach will look a bit different from @RRL's, but the content is essentially the same (and I'm choosing my notation to emphasize that).

First, some notation and a review of the scalar case. Note that we may write $$R'(x)-A(x) R(x) = G(x)$$ where we have defined $A(x) = C_1+C_2 x$ and $G(x) = R_1 - C_1 R_0\, x$. In the scalar case, we could then straightforwardly observe that the LHS has an integrating factor $\Phi(x)=\exp\left[-\int_0^{x} A(x')\, dx'\right]$. (In your case, $\Phi(x)=e^{-C_1 x-C_2 x^2/2}$; I have taken the lower bound to match the boundary condition at $x=0$ from the original problem.) What that means is we may write

$$ \frac{d}{dx}\left[ \Phi(x) R(x)\right] = \Phi(x)R'(x)+\Phi'(x) R(x) = \Phi(x)\left(R'(x)-A(x)R(x)\right)=\Phi(x) G(x)\hspace{.4cm}\text{(1)}$$ where we have exploited the fact that $\Phi'(x) = -\Phi(x) A(x)$, by the fundamental theorem of calculus. We can then integrate both sides from $0$ to $x$ and rearrange to obtain the formal solution

$$R(x)=\Phi^{-1}(x)\left[R(0) + \int_0^{x} \Phi(x')G(x')\,dx'\right] =\Phi^{-1}(x)R(0) + \int_0^{x} \Phi(x'-x)G(x')\,dx'\hspace{.4cm}\text{(2)}$$ where in the second equality we have taken advantage of $\Phi(x)$ being the exponential of a definite integral. So we've converted the problem of finding $R(x)$ to computing a particular integral.

The question is, what changes when we go to matrices in the coefficients? An obvious issue is that $\Phi(x)$ will now be the exponential of a matrix-valued function. This isn't a knock-down problem, though: define the matrix exponential as the infinite sum $\exp(A) =\sum_{k=0}^\infty A^k/k!$, and observe that one still has the property $$\frac{d}{dx} \exp{(x A)}=\frac{d}{dx}\sum_{k=0}^\infty A^k \frac{x^k}{k!}=\sum_{k=1}^\infty A^k \frac{x^{k-1}}{(k-1)!} = A\cdot \exp(x A).$$ As a consequence, the $\Phi(x)$ defined earlier has the same derivative, and so equation (1) is still valid. The same moreover holds for equation (2), with $\Phi(x)^{-1}$ understood to be the inverse matrix (the second equality is now valid only if $A(x)$ commutes with $A(x')$).
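The derivative property used above is easy to confirm numerically with a finite difference; a minimal sketch assuming numpy/scipy, with a random sample matrix of my own choosing.

```python
import numpy as np
from scipy.linalg import expm

# Finite-difference check that d/dx exp(xA) = A exp(xA) for a fixed A
# (the property that justifies the integrating factor above).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

x, h = 0.8, 1e-6
deriv_fd = (expm((x + h) * A) - expm((x - h) * A)) / (2 * h)
assert np.allclose(deriv_fd, A @ expm(x * A), atol=1e-4)

# Caveat from the text: for x-dependent A(x) with [A(x), A(x')] != 0,
# the analogous identity for Phi(x) fails, which is where the matrix
# case genuinely differs from the scalar one.
```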
