Block diagonalizing two matrices simultaneously

There are two matrices $A$ and $B$ that cannot be diagonalized simultaneously. Is it possible to block-diagonalize them? What if the matrices have a special pattern?
The physics of the problem is that I have $2$ layers of atoms, where $A$ is the connectivity within layer $1$ itself and $B$ is the connectivity between layers $1$ and $2$.

By a block diagonal matrix I mean a matrix whose diagonal entries are themselves matrices of size $M \times M$ (where $M$ is much smaller than the size of the full matrix but larger than $1$).
For example, a $3 \times 3$ block diagonal matrix is
$$\begin{pmatrix} [a] & [0] & [0] \\ [0] & [b] & [0] \\ [0] & [0] & [c] \end{pmatrix}$$
where each bracketed entry is itself an $M \times M$ matrix.
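For concreteness, here is a minimal numpy/scipy sketch of such a structure (the block contents are arbitrary placeholders of my own choosing):

```python
import numpy as np
from scipy.linalg import block_diag

M = 2  # block size; in the question M is much smaller than the full matrix
a, b, c = (np.random.rand(M, M) for _ in range(3))

# 3x3 arrangement of M-by-M blocks -> a (3M)x(3M) block diagonal matrix
D = block_diag(a, b, c)
print(D.shape)  # (6, 6)
```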

I have attached figures showing the sparsity patterns of an example $A$ and $B$.

Answer

I will propose a method for finding the optimal simultaneous block-diagonalization of two matrices $A$ and $B$, assuming that $A$ is diagonalizable (and eigenvalues are not too degenerate).
(Something similar may work with the Jordan normal form of $A$ as well.)
By optimal I mean that none of the corresponding blocks are simultaneously block-diagonalizable.
Note that some simultaneous block-diagonalization always exists, namely the trivial one with a single block: $A=(A)$ and $B=(B)$.

Let me start by verifying Victor Liu’s comment that the problem is equivalent to expressing your vector space as a direct sum of invariant spaces.
I will write $C\oplus D=\begin{pmatrix}C&0\\0&D\end{pmatrix}$ and similarly for several matrices.
I assume that the matrices operate in the vector space $V=\mathbb R^n$ (or $\mathbb C^n$, this is irrelevant).
If the two matrices $A$ and $B$ can be block-diagonalized simultaneously, $A=A_1\oplus A_2$ and $B=B_1\oplus B_2$, then $V$ is a direct sum $V=V_1\oplus V_2$, where the spaces $V_i$ are invariant under $A$ and $B$.
A subspace $U\subset V$ is said to be invariant under $A$ if $A(U)\subset U$.
Similarly, if $V$ is a direct sum of subspaces invariant under both $A$ and $B$, then $A$ and $B$ can be simultaneously block-diagonalized.
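To illustrate the equivalence numerically, here is a small sketch (my own construction, not from the answer): build $A$ and $B$ that share a hidden invariant decomposition, then recover the common block structure by conjugating with the adapted basis.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)

# Hidden common structure: V = V1 (dim 2) + V2 (dim 3), both invariant
# under A and B.  P's columns 0-1 span V1, columns 2-4 span V2.
A1, A2 = rng.random((2, 2)), rng.random((3, 3))
B1, B2 = rng.random((2, 2)), rng.random((3, 3))
P = rng.random((5, 5))  # a random P is invertible with probability 1

A = P @ block_diag(A1, A2) @ np.linalg.inv(P)
B = P @ block_diag(B1, B2) @ np.linalg.inv(P)

# Changing to the adapted basis block-diagonalizes both at once.
print(np.allclose(np.linalg.solve(P, A @ P), block_diag(A1, A2)))  # True
print(np.allclose(np.linalg.solve(P, B @ P), block_diag(B1, B2)))  # True
```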

Now what are the subspaces of $V$ that are invariant with respect to both $A$ and $B$?
Suppose $U\neq\{0\}$ is such a subspace.
Suppose furthermore that there is another subspace $U'\subset V$ such that $U$ and $U'$ only intersect at the origin but they span all of $V$: $V=U\oplus U'$.
(In other words, $U$ is an invariant subspace complementable by an invariant subspace. The spaces in the decomposition need to be of this type.)
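To see that complementability is a genuine restriction, a standard example: for the single Jordan block
$$A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
the line $\langle e_1\rangle$ is invariant, but the only $A$-invariant subspaces of $\mathbb R^2$ are $\{0\}$, $\langle e_1\rangle$, and $\mathbb R^2$ itself, so $\langle e_1\rangle$ has no invariant complement.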

Now $U$ is invariant under $A$.
Let us first diagonalize $A$.
Let $\lambda_1,\dots,\lambda_m$ be the eigenvalues and $V_1,\dots,V_m$ the corresponding eigenspaces.
We can thus write $V$ as a direct sum $V=V_1\oplus\dots\oplus V_m$.
Since $U$ is invariant under the diagonalizable $A$, it is necessarily of the form $U=U_1\oplus\dots\oplus U_m$, where each $U_i$ is a subspace of $V_i$ (possibly zero or the whole eigenspace).
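Numerically, the $V_i$ can be obtained by grouping nearly equal eigenvalues; here is a sketch assuming $A$ is diagonalizable with well-separated eigenvalue clusters (the helper name `eigenspaces` is mine):

```python
import numpy as np

def eigenspaces(A, tol=1e-8):
    """Group nearly equal eigenvalues of A and return a list of
    (eigenvalue, basis) pairs, where the basis columns span V_i."""
    evals, evecs = np.linalg.eig(A)
    groups, used = [], np.zeros(len(evals), dtype=bool)
    for i in range(len(evals)):
        if used[i]:
            continue
        close = np.abs(evals - evals[i]) < tol
        used |= close
        groups.append((evals[i], evecs[:, close]))
    return groups
```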

Suppose one of the eigenvalues of $A$, $\lambda_1$, is simple (perhaps the ground state if you have a quantum mechanical system).
Then $V_1=\langle v_1\rangle$ has to lie in $U$ or in its complementing space: writing $v_1=u+u'$ with $u\in U$ and $u'\in U'$ and applying $A$ shows that both components are eigenvectors for the simple eigenvalue $\lambda_1$, so one of them must vanish. Suppose $V_1$ lies in $U$.
We also need to have $Bv_1\in U$.
Now $Bv_1=x_1\oplus\dots\oplus x_m\in V_1\oplus\dots\oplus V_m$.
By the direct sum form of $U$, we must have $x_i\in U$ for all $i$.
Now we have a list of new vectors $x_1,\dots,x_m$ in $U$.
For each of them we can run the same argument: $Bx_i=\dots\in V_1\oplus\dots\oplus V_m$ must be in $U$ and so must each of its components (w.r.t. the eigenspace decomposition).

Differently put, if $\pi_i:V\to V_i$ is the projection onto the eigenspace $V_i$ along the other eigenspaces (for symmetric $A$ this is the orthogonal projection), the following is true: if $u\in U$, then $\pi_iBu\in U$ for all $i$.
It can be enlightening to calculate the matrices $\pi_iB$.
(You could probably see something at a glance, but I have not developed an intuition for it.)
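For a diagonalizable $A=S\Lambda S^{-1}$, the projection $\pi_i$ onto $V_i$ along the other eigenspaces is $SE_iS^{-1}$, where $E_i$ keeps only the columns belonging to $\lambda_i$. A sketch, again with my own helper names:

```python
import numpy as np

def spectral_projections(A, tol=1e-8):
    """Projections pi_i onto each eigenspace of A along the others,
    assembled from the eigendecomposition A = S Lambda S^{-1}."""
    evals, S = np.linalg.eig(A)
    S_inv = np.linalg.inv(S)
    projections, used = [], np.zeros(len(evals), dtype=bool)
    for i in range(len(evals)):
        if used[i]:
            continue
        close = np.abs(evals - evals[i]) < tol
        used |= close
        projections.append(S[:, close] @ S_inv[close, :])
    return projections

# The matrices pi_i @ B mentioned above can then be inspected directly:
# for pi in spectral_projections(A): print(pi @ B)
```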

All this gives an (at least computationally feasible) method for finding an invariant subspace $U$ containing $V_1$.
It can well happen that $U=V$ and only the trivial simultaneous block-diagonalization exists.
If not, after some number of iterations you will notice that the above procedure no longer produces new linearly independent vectors, and then $U$ is a nontrivial invariant subspace.
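Here is a sketch of that iteration under the stated assumptions, with `spectral_projections` being the hypothetical helper above: starting from an eigenvector `v`, the span is grown until the maps $u\mapsto\pi_iBu$ produce no new directions.

```python
import numpy as np

def invariant_subspace(B, projections, v, tol=1e-10):
    """Smallest subspace containing v that is closed under every map
    u -> pi_i B u; returns an orthonormal basis of it.  Because each
    generated vector lies in a single eigenspace of A, the span is
    automatically invariant under both A and B."""
    basis = (v / np.linalg.norm(v))[:, None]
    queue = [basis[:, 0]]
    while queue:
        u = queue.pop()
        for pi in projections:
            w = pi @ B @ u
            w = w - basis @ (basis.conj().T @ w)  # Gram-Schmidt step
            norm = np.linalg.norm(w)
            if norm > tol:
                w = w / norm
                basis = np.column_stack([basis, w])
                queue.append(w)
    return basis  # basis.shape[1] == dim(V) means only the trivial U exists
```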

It does not yet follow that $U$ would be complementable by an invariant subspace.
Assuming there is another simple eigenvalue (whose eigenspace is not contained in $U$), you can start the same construction with that one and produce a new invariant subspace $U'$.
You will automatically have $U\cap U'=\{0\}$.
You can also start at any eigenvector of $B$ corresponding to a simple eigenvalue.
If you somehow know that a particular vector needs to be in an invariant subspace, you can use it as well.
(But be careful: an eigenvector corresponding to a degenerate eigenvalue need not be in an invariant subspace.)
With this method you can generate a number of invariant subspaces.
If these subspaces span the whole space, the corresponding block-diagonalization is optimal.
If they do not span all of $V$, at least the rest of the problem is easier.
(Note that even if $A$ is symmetric, a complementing invariant subspace need not be orthogonal to $U$.)
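Putting the sketches together (all helper names are mine, and this trusts the assumptions above; a real implementation would need care with tolerances and degenerate eigenvalues):

```python
import numpy as np

def simultaneous_block_diag(A, B, tol=1e-8):
    """Grow an invariant subspace from each simple eigenvalue of A;
    if the subspaces found span the whole space, return the adapted
    basis P together with P^{-1} A P and P^{-1} B P."""
    projections = spectral_projections(A, tol)
    evals, evecs = np.linalg.eig(A)
    n = A.shape[0]
    P = np.zeros((n, 0), dtype=evecs.dtype)
    for i, lam in enumerate(evals):
        if np.sum(np.abs(evals - lam) < tol) != 1:
            continue  # only start from simple eigenvalues
        v = evecs[:, i]
        if P.shape[1] > 0:
            resid = v - P @ np.linalg.lstsq(P, v, rcond=None)[0]
            if np.linalg.norm(resid) < tol:
                continue  # v already lies in a subspace found earlier
        P = np.column_stack([P, invariant_subspace(B, projections, v)])
    if P.shape[1] < n or np.linalg.matrix_rank(P) < n:
        return None  # the subspaces found do not span all of V
    return P, np.linalg.solve(P, A @ P), np.linalg.solve(P, B @ P)
```

If this returns a basis, the two conjugated matrices should be block diagonal up to numerical tolerance, with one block per invariant subspace found.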

This is not a complete answer to your question, but perhaps you can solve your problem with something along these lines.
And do ask if you need clarification.