Four by four real (well, floating point…) matrices are used in computer graphics to represent projections. Sometimes we need to compute their inverses. How many multiplications are required?

- Valve’s Source SDK implements the “pen&paper” algorithm: start with a 4×8 matrix whose left hand side is $A$ (the matrix to invert) and whose right hand side is $I_4$ (the 4×4 identity matrix); apply Gaussian row operations until the left hand side is $I_4$, at which point the right hand side is $A^{-1}$. By inspection of the code, this algorithm requires $4 \cdot 8 + 4 \cdot 4 \cdot 8 = 160$ multiplications.
- Solving $AX = I$ by Cramer’s rule for each element, computing each 3×3 cofactor determinant with Sarrus’s rule, improves on that: we need $6 \cdot 16$ multiplications for the cofactors, $4$ more for the determinant of the 4×4 matrix, plus $16$ multiplications to scale the adjugate by the reciprocal of the determinant, for a total of $6 \cdot 16 + 4 + 16 = 116$ multiplications.
- Cleverly precalculating products of two elements as in this answer requires (again inspecting the code) $2\cdot 12 + 6 + 4 \cdot 16 = 94$ multiplications.
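
As a baseline, here is a minimal Python sketch (names mine) of the pen&paper method with a scalar-multiplication counter. Note that this version skips the pivot row during elimination, so it comes to $4 \cdot 8 + 4 \cdot 3 \cdot 8 = 128$ multiplications; the 160 count quoted above corresponds to a loop that processes all four rows on every pass.

```python
import numpy as np

def gauss_jordan_inverse_4x4(A):
    """Invert a 4x4 matrix by Gauss-Jordan elimination on [A | I],
    counting scalar multiplications. No pivoting: assumes the leading
    principal minors of A are nonzero."""
    n = 4
    M = np.hstack([np.asarray(A, dtype=float), np.eye(n)])  # 4x8 augmented
    mults = 0
    for p in range(n):
        r = 1.0 / M[p, p]          # one real reciprocal per pivot
        M[p, :] *= r               # normalize pivot row: 8 multiplies
        mults += 2 * n
        for i in range(n):
            if i == p:             # skipping the pivot row saves 8 per pass
                continue
            M[i, :] -= M[i, p] * M[p, :]   # eliminate: 8 multiplies per row
            mults += 2 * n
    return M[:, n:], mults
```

On a well-conditioned matrix this counts $4 \cdot (8 + 3 \cdot 8) = 128$ multiplications plus 4 reciprocals.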

Can we do better? Can we prove we can’t?

We can invert a 2×2 matrix with 6 multiplies and one inversion. By Strassen’s algorithm, we can multiply 2×2 matrices with 7 multiplies.
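
A quick Python sketch of these two primitives (illustrative, function names mine): the adjugate formula for the 2×2 inverse, and Strassen’s seven products.

```python
import numpy as np

def inv2(M):
    """2x2 inverse via the adjugate: 2 multiplies for the determinant,
    4 to scale the adjugate, plus one real reciprocal."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    r = 1.0 / (a * d - b * c)                 # 2 multiplies + 1 inversion
    return np.array([[ r * d, -r * b],
                     [-r * c,  r * a]])       # 4 multiplies

def strassen2(A, B):
    """Strassen's 2x2 product: 7 multiplies instead of 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])
```

Both formulas are entrywise, so they apply equally well when the scalars are replaced by 2×2 blocks.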

Partition your 4×4 matrix as

$$\left( \begin{matrix} A & B \\ C & D \end{matrix} \right) $$

We will reduce this to the identity.

Compute the inverse of $A$.

Perform block Gaussian elimination to obtain

$$\left( \begin{matrix} I & A^{-1} B \\ 0 & D - CA^{-1}B \end{matrix} \right) $$

Let $E = D - C A^{-1} B$ (the Schur complement of $A$). Now compute $E^{-1}$. We can now finish the Gaussian elimination.

Repeating these operations on an identity matrix tells us that the inverse is

$$ \left( \begin{matrix} A^{-1} + A^{-1} B E^{-1} C A^{-1} & -A^{-1} B E^{-1} \\ -E^{-1} C A^{-1} & E^{-1} \end{matrix} \right) $$

The calculation of this inverse requires two matrix inversions (12 multiplies and 2 real inversions), and six 2×2 multiplies:

- $C A^{-1}$
- $(C A^{-1}) B$
- $E^{-1} (C A^{-1})$
- $A^{-1} B$
- $(A^{-1} B) E^{-1}$
- $(A^{-1} B)(E^{-1} C A^{-1})$

for 54 multiplies and 2 real inversions in all.
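
Putting the pieces together, here is a self-contained Python sketch (helper names mine) that performs this block inversion while counting scalar multiplies and real inversions; running it confirms the 54/2 tally.

```python
import numpy as np

COUNT = {'mul': 0, 'inv': 0}

def inv2(M):
    """2x2 inverse via the adjugate: 6 multiplies + 1 real inversion."""
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    COUNT['mul'] += 6
    COUNT['inv'] += 1
    r = 1.0 / (a * d - b * c)
    return np.array([[r * d, -r * b], [-r * c, r * a]])

def mul2(A, B):
    """Strassen's 2x2 product: 7 multiplies."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    COUNT['mul'] += 7
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

def block_inverse(M):
    """4x4 inverse via 2x2 blocks and the Schur complement E."""
    A, B, C, D = M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]
    Ai    = inv2(A)               # A^{-1}
    CAi   = mul2(C, Ai)           # C A^{-1}
    AiB   = mul2(Ai, B)           # A^{-1} B
    E     = D - mul2(CAi, B)      # Schur complement D - C A^{-1} B
    Ei    = inv2(E)               # E^{-1}
    EiCAi = mul2(Ei, CAi)         # E^{-1} C A^{-1}
    AiBEi = mul2(AiB, Ei)         # (A^{-1} B) E^{-1}
    TL = Ai + mul2(AiB, EiCAi)    # A^{-1} + (A^{-1} B)(E^{-1} C A^{-1})
    return np.block([[TL, -AiBEi], [-EiCAi, Ei]])
```

The two calls to `inv2` account for 12 multiplies and 2 real inversions, and the six calls to `mul2` for the remaining 42 multiplies.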

Of course, using Strassen’s algorithm for 2×2 matrices of scalars is a terrible idea: the seventh multiply is saved at the cost of many extra additions. I don’t know whether the above organization of the calculation is reasonable even if you don’t use Strassen’s algorithm; I suspect it is not.

If $A$ turns out to be singular, you have to partition the matrix differently to do the calculation.

In his paper Gaussian elimination is not optimal, Volker Strassen gave essentially this recursive block scheme, showing that matrix inversion (like matrix multiplication) can be done in $O(n^{\log_2 7})$ operations.

A more recent treatment of matrix inversion is provided by Intel. However, they are aiming at speed rather than at the minimum number of multiplications.
