Symmetry Of Differentiation Matrix

I have a problem when numerically computing the eigenvalues of the Laplace-Beltrami operator. I use a meshfree Radial Basis Function (RBF) approach to construct the differentiation matrix $D$. Testing my code on simple manifolds (e.g. the unit sphere) shows that the computed eigenvalues of $D$ often have non-zero imaginary parts, even though they should all be real.


(For simplicity I will be referring to the Kansa RBF method from now on).

Given a set of points $\vec{\mathbf x} = \left\{x _i \right\}_{i=1}^{n}$ and a function $f$, we say that $D$ is the differentiation matrix (for the Laplace operator) if
$D \,f\left(\vec{\mathbf x} \right) = \Delta f\left(\vec{\mathbf x} \right) $.

The differentiation matrix is constructed as $D = B\cdot A^{-1}$, where $B$ is the metric matrix for the RBF expansion of $\Delta f$ and $A$ is the metric matrix for the RBF expansion of $f$. Both $A$ and $B$ are symmetric, so $D$ should also be symmetric.
However, complex eigenvalues of a real matrix mean that $D$ is not symmetric, despite the fact that $D$ is composed entirely of symmetric matrices.

From what I understand, the symmetry could only be violated when computing the inverse of the metric matrix $A$, especially if $A$ is ill-conditioned. My question is:

What are the methods for “symmetrizing” differentiation metric matrices?

Alternatively,

How do I get rid of the asymmetry that occurs when numerically computing the inverse of a symmetric matrix and that is caused by machine round-off error?


EDIT:
Below is an example of how to construct differentiation matrix $D$ for Laplace operator.


Given: a domain $\Omega$ in Euclidean space and a set of points $\vec{\mathbf x} = \left\{x _i \right\}_{i=1}^{n}$.

Goal: numerically find the eigenvalues of the Laplace operator $\Delta = \nabla ^2$ defined on $\Omega$.

The solution consists of three steps:

  1. Construct a differentiation matrix $D$ which will act as the Laplace operator on any function evaluated at a given (finite) set of points $\vec{\mathbf x}$.
  2. Compute eigenvalues of $D$.
  3. Compare eigenvalues of $D$ with eigenvalues of the Laplacian $\Delta$, thus estimating the error.

Note that the eigenvalues of $\Delta$ are positive; therefore a good differentiation matrix $D$ will also have all positive eigenvalues, and hence $D$ has to be *symmetric* in order to accurately imitate the action of the operator $\Delta$ on functions evaluated at $\vec{\mathbf x}$.

Thus I need to find a way to construct differentiation matrix $D$ and to ensure that it is symmetric.


Consider a function $\phi_i = \phi_i(x) = \phi(\| x_i - x\|)$, referred to as a Radial Basis Function (RBF) centered at $x_i\in \vec{\mathbf x}$. The set $\{\phi_i\}_{i=1}^{n}$ of RBFs centered at $\vec{\mathbf x}$ can be used to approximate any function $f(x)$:
$$f(x) \approx \sum_{i=1}^{n} \alpha_i \phi_i(x) = \sum_{i=1}^{n} \alpha_i \phi(\| x_i - x\|) \label{1} \tag{1}$$
Denote the vector of coefficients by $\vec{\boldsymbol{\alpha}} = \{ \alpha_i\}_{i=1}^{n}$ and the vector of values of $f$ at $\vec{\mathbf x}$ by $\vec{\mathbf{f}} = f\left(\vec{\mathbf x}\right)$. Then we can write the RBF expansion of $f$ in matrix form:
$$\label{2}\tag{2}
\begin{bmatrix}f\left( x_1 \right)\\ f\left( x_2 \right)\\ \vdots \\ f\left( x_n \right) \end{bmatrix}
=
\begin{bmatrix}
\phi_{1} \left(x_{1}\right) & \phi_{1} \left( x_2 \right) & \cdots & \phi_{1} \left(x_n \right) \\
\phi_{2} \left(x_{1}\right) & \phi_{2} \left( x_2 \right) & \cdots & \phi_{2} \left(x_n \right) \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{n} \left(x_{1}\right) & \phi_{n} \left( x_2 \right) & \cdots & \phi_{n} \left(x_n \right)
\end{bmatrix}
\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n\end{bmatrix}
\iff
\vec{\mathbf{f}} = \Phi \, \vec{\boldsymbol{\alpha}}
\implies
\vec{\boldsymbol{\alpha}} = \Phi^{-1}\vec{\mathbf{f}}, $$
where $\Phi_{ij} = \phi_i(x_j) = \phi\left(\left\|x_i - x_j \right\| \right)$.
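
For illustration, here is a minimal sketch of this interpolation step in Python/NumPy; the Gaussian kernel $\phi(r) = e^{-(\varepsilon r)^2}$, the shape parameter and the point set are illustrative choices, not part of the problem statement:

```python
# Minimal sketch of eq. (2): assemble Phi for a Gaussian kernel
# phi(r) = exp(-(eps*r)^2) (an illustrative choice) and recover the
# coefficients alpha from the function values f(x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(50, 2))                               # 50 scattered points, d = 2
eps = 10.0

r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # pairwise distances
Phi = np.exp(-(eps * r) ** 2)                               # Phi[i, j] = phi(||x_i - x_j||)

f = np.sin(x[:, 0]) * np.cos(x[:, 1])                       # sample function values at x
alpha = np.linalg.solve(Phi, f)                             # alpha = Phi^{-1} f
print(np.abs(Phi @ alpha - f).max())                        # residual: small (up to the conditioning of Phi)
```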

Applying the Laplace operator to the RBF expansion $\eqref{1}$ of a function $f$ and evaluating at the points $\vec{\mathbf x}$, we get
$
\Delta \vec{\mathbf{f}} = \Delta\Phi\, \vec{\boldsymbol{\alpha}},
$
where $\Delta\Phi_{ij} := \Delta\phi_i(x_j) = \Delta \phi\left(\left\|x_i - x_j \right\| \right) $. Substituting $\vec{\boldsymbol{\alpha}}$ from $\eqref{2}$, we get
$$
\Delta \vec{\mathbf{f}} = \Delta \Phi\cdot \Phi^{-1} \cdot \vec{\mathbf{f}}.
$$
Denoting $A = \Phi$ and $B = \Delta \Phi$, we can write out the differentiation matrix:
$$ D = \Delta \Phi\cdot\Phi^{-1} = B \cdot A^{-1} \implies D \vec{\mathbf{f}} = \Delta \vec{\mathbf{f}}
$$
for any function $f$ evaluated at the points $\vec{\mathbf{x}}$.
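
For reference, a minimal end-to-end sketch of this construction for the Gaussian kernel $\phi(r) = e^{-(\varepsilon r)^2}$ (an illustrative choice, with a placeholder point set), whose Laplacian in $d$ dimensions is $\Delta\phi(r) = (4\varepsilon^4 r^2 - 2\varepsilon^2 d)\,\phi(r)$. For this kernel $A$ is symmetric positive definite, so in exact arithmetic $D = B A^{-1}$ is similar to the symmetric matrix $A^{-1/2} B A^{-1/2}$ and its spectrum is real; numerically, however, imaginary parts appear:

```python
# Minimal sketch (illustrative kernel, shape parameter and point set): build
# D = DeltaPhi * Phi^{-1} for the Gaussian phi(r) = exp(-(eps*r)^2) and look
# at the imaginary parts of its computed eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(100, 2))                    # 100 scattered points in the unit square
eps, d = 5.0, x.shape[1]

r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
A = np.exp(-(eps * r) ** 2)                       # Phi
B = (4 * eps**4 * r**2 - 2 * eps**2 * d) * A      # DeltaPhi (Laplacian of the Gaussian)

D = B @ np.linalg.inv(A)                          # D = B A^{-1}
lam = np.linalg.eigvals(D)
print("cond(A) =", np.linalg.cond(A))
print("max |Im(lambda)| =", np.abs(lam.imag).max())  # typically nonzero, larger for ill-conditioned A
```

In runs of this sketch the spurious imaginary parts tend to grow with the conditioning of $A$, consistent with the suspicion above that the inversion of an ill-conditioned $A$ is where the symmetry is lost.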

Answers

If $A$ and $B$ are symmetric, that does not mean that $B A^{-1}$ is also symmetric:
$$(B A^{-1})^\top = A^{-\top}B^\top = A^{-1} B.$$
The last matrix is equal to $B A^{-1}$ only if $A$ and $B$ commute (which, for symmetric matrices, is the same as having a common set of eigenvectors).
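
A quick numerical check of this, with placeholder random matrices:

```python
# Two symmetric matrices whose product with an inverse is generally not symmetric.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite
B = rng.standard_normal((5, 5))
B = B + B.T                        # symmetric

D = B @ np.linalg.inv(A)
print(np.allclose(D, D.T))         # generally False: B A^{-1} != (B A^{-1})^T = A^{-1} B
```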

A symmetric combination might be $B^{1/2} A^{-1} B^{1/2}$ or $A^{-1/2}BA^{-1/2}$, but I am not sure it would work as an approximation of $\Delta$. Here $B^{1/2}$ is the symmetric square root given by
$$
B = Q \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n) Q^\top\\
B^{1/2} = Q\operatorname{diag}(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots, \sqrt{\lambda_n}) Q^\top
$$
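
A sketch of forming the combination $A^{-1/2} B A^{-1/2}$ via the eigendecomposition of $A$ (assuming $A$ is symmetric positive definite, as the interpolation matrix of a positive definite kernel is for distinct points); `A` and `B` are placeholders for the question's $\Phi$ and $\Delta\Phi$:

```python
# Symmetric combination A^{-1/2} B A^{-1/2}, built from the symmetric square
# root of A (assumed symmetric positive definite).
import numpy as np

def inv_sym_sqrt(A):
    """A^{-1/2} from the eigendecomposition A = Q diag(w) Q^T."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

def symmetrized(A, B):
    S = inv_sym_sqrt(A)
    return S @ B @ S               # A^{-1/2} B A^{-1/2}: symmetric by construction
```

Note that $A^{1/2}\,(A^{-1/2} B A^{-1/2})\,A^{-1/2} = B A^{-1}$, so this symmetric matrix is similar to $B A^{-1}$ and has exactly the same eigenvalues in exact arithmetic, while a symmetric eigensolver applied to it cannot return complex values.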

For the second question: many implementations, for example LAPACK, have routines for symmetric matrices that accept only one half of the matrix (upper or lower). Thus no round-off can make the matrix asymmetric, since the routines never access the other half. For example, eigenvalue solvers for symmetric tridiagonal matrices simply don’t have an option to return complex-valued eigenvalues.
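
For instance, with SciPy's LAPACK-backed symmetric driver (a sketch with a placeholder matrix):

```python
# scipy.linalg.eigh wraps LAPACK's symmetric drivers: with lower=True only the
# lower triangle of the input is referenced, so any round-off asymmetry living
# in the upper triangle can never turn the computed eigenvalues complex.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
S = M @ M.T                                          # nominally symmetric matrix

S_junk = np.tril(S) + np.triu(rng.standard_normal((6, 6)), 1)  # upper triangle replaced by junk
w1 = eigh(S, eigvals_only=True, lower=True)
w2 = eigh(S_junk, eigvals_only=True, lower=True)
print(np.allclose(w1, w2))                           # True: the upper triangle is never read
```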

Thank you for a great question. There are two distinct problems here: one is scattered-data interpolation, the other is the approximation of derivatives; they are commonly solved using RBFs and finite differences, respectively.

I think it’s too optimistic to expect the RBF interpolation problem to remain well posed when you use derivatives of the basis function. The class of RBF kernels that can be successfully used to interpolate arbitrary data is limited and not closed under differentiation. Derivatives of the kernels do appear in Hermite RBF interpolation, where samples of both the function and its derivatives are given; but the way you hope to use them, approximating the Laplacian from function values alone, is not guaranteed to work well. The lack of symmetry might be the least of your problems.

I’d suggest combining the RBF interpolation with a finite-difference discrete Laplacian matrix. After all, differentiation is a local operation, whereas RBF interpolation is meant to capture multi-scale trends in the data. The combined matrix would then be symmetric, I think.

I now see that I was explaining the wrong thing; we just arrived at what you already have, only computed with a slightly more stable algorithm. Indeed, in the non-orthogonal case you get the matrix $A$ as well. That’s not a problem: you should never look at it in the form $DA^{-1}$. It is what we call a generalized eigenvalue problem, and it has to be written in the form

$$(D-\lambda A)\vec{v}=0$$
(notice that $A$ replaces the identity). That is logically sound because in this context $I$ is replaced by the metric. This kind of problem is routinely solved either by specific algorithms for generalized eigenvalues, or by using the Cholesky decomposition of the matrix $A$ (or, a bit less efficiently, using an eigenvalue decomposition of $B$, turning things around and finishing with an eigenvalue decomposition of $D$).

This comes precisely from your orthogonality condition: because orthogonality is defined through $A$ instead of the identity (a different inner product), the eigenvectors won’t be orthogonal in the traditional sense. That’s what the Cholesky/$A$-eigendecomposition does: it orthogonalizes the vectors in your metric. So,

$$A=LL^T$$
$$(L^{-1} D L^{-T} -\lambda I ) (L^{T} \vec{v})=0$$
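
A sketch of both routes with placeholder matrices, written in the question's names $A = \Phi$ and $B = \Delta\Phi$ so the pencil reads $(B - \lambda A)$ (presumably the role played by $D$ in the pencil above): `scipy.linalg.eigh` solves the symmetric-definite generalized problem directly, and the Cholesky reduction is spelled out for comparison. Neither route ever forms $A^{-1}$, and both return real eigenvalues by construction.

```python
# Placeholder SPD matrix A stands in for Phi, symmetric B stands in for DeltaPhi.
import numpy as np
from scipy.linalg import eigh, cholesky, solve_triangular

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                # symmetric positive definite ("metric")
N = rng.standard_normal((8, 8))
B = N + N.T                                # symmetric

# (1) generalized eigenproblem B v = lambda A v -- no explicit inverse of A
lam_gen = eigh(B, A, eigvals_only=True)

# (2) Cholesky reduction A = L L^T: C = L^{-1} B L^{-T} is symmetric with the same eigenvalues
L = cholesky(A, lower=True)
C = solve_triangular(L, solve_triangular(L, B, lower=True).T, lower=True)
lam_chol = np.linalg.eigvalsh(C)

print(np.allclose(lam_gen, lam_chol))      # True: both spectra are real and agree
```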

Sorry for the confusion before.