I have a problem when numerically computing the eigenvalues of the Laplace-Beltrami operator. I use a meshfree Radial Basis Function (RBF) approach to construct the differentiation matrix $D$. Testing my code on simple manifolds (e.g. on the unit sphere) shows that the computed eigenvalues of $D$ often have non-zero imaginary parts, even though they should all be real.

(For simplicity I will be referring to the Kansa RBF method from now on).

Given a set of points $\vec{\mathbf x} = \left\{x_i \right\}_{i=1}^{n}$ and a function $f$, we say that $D$ is the *differentiation matrix* (for the Laplace operator) if

$D \,f\left(\vec{\mathbf x} \right) = \Delta f\left(\vec{\mathbf x} \right) $.

The differentiation matrix is constructed as $D = B\cdot A^{-1}$, where $B$ is the metric matrix for the RBF expansion of $\Delta f$, and $A$ is the metric matrix for the RBF expansion of $f$. Both $A$ and $B$ are symmetric, so I expect $D$ to be symmetric as well.

However, complex eigenvalues of a real matrix mean that $D$ is not symmetric, despite the fact that $D$ is composed of symmetric matrices.

From what I understand, the symmetry could only be violated when computing the inverse of the metric matrix $A$, especially if $A$ is ill-conditioned. My question is:

What are the methods for “symmetrizing” differentiation metric matrices?

Alternatively,

How can I get rid of the asymmetry that occurs when numerically computing the inverse of a symmetric matrix and is caused by machine round-off error?
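
For concreteness, here is a minimal sketch (plain NumPy; the function name is mine) of the naive fix I have in mind, forcing symmetry by averaging with the transpose:

```python
import numpy as np

def symmetrize_inverse(A):
    """Invert a symmetric matrix and force exact symmetry on the result
    by averaging it with its own transpose (removes round-off asymmetry)."""
    A_inv = np.linalg.inv(A)
    return 0.5 * (A_inv + A_inv.T)
```

This makes $A^{-1}$ symmetric to machine precision, but it is not clear to me that it is the right thing to do for $D = B\cdot A^{-1}$.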

**EDIT:**

Below is an example of how to construct the differentiation matrix $D$ for the Laplace operator.

**Given:** A domain $\Omega$ in Euclidean space and a set of points $\vec{\mathbf x} = \left\{x_i \right\}_{i=1}^{n}$.

**Goal:** Numerically find the eigenvalues of the Laplace operator $\Delta = \nabla ^2$ defined on $\Omega$.

The **solution** consists of three steps:

- Construct the differentiation matrix $D$, which will act as the Laplace operator on any function evaluated at the given (finite) set of points $\vec{\mathbf x}$.
- Compute the eigenvalues of $D$.
- Compare the eigenvalues of $D$ with the eigenvalues of the Laplacian $\Delta$, thus estimating the error.

**Note** that the eigenvalues of $\Delta$ are **positive**; therefore a good differentiation matrix $D$ will also have all eigenvalues *positive*, and hence $D$ has to be *symmetric* in order to accurately imitate the action of the operator $\Delta$ on functions evaluated at $\vec{\mathbf x}$.

Thus I need a way to *construct* the differentiation matrix $D$ while ensuring that it is *symmetric*.

Consider a function $\phi_i = \phi_i(x) = \phi(\| x_i - x\|)$, referred to as a *Radial Basis Function* (or *RBF*) centered at $x_i\in \vec{\mathbf x}$. The set $\{\phi_i\}_{i=1}^{n}$ of RBFs centered at $\vec{\mathbf x}$ can be used to approximate any function $f(x)$:

$$f(x) \approx \sum_{i=1}^{n} \alpha_i \phi_i(x) = \sum_{i=1}^{n} \alpha_i \phi(\| x_i - x\|) \label{1} \tag{1}$$

Denote the vector of coefficients by $\vec{\boldsymbol{\alpha}} = \{ \alpha_i\}_{i=1}^{n}$ and the vector of values of $f$ at $\vec{\mathbf x}$ by $\vec{\mathbf{f}} = f\left(\vec{\mathbf x}\right)$. Then we can write the RBF expansion of $f$ in *matrix form*:

$$\label{2}\tag{2}
\begin{bmatrix} f\left( x_1 \right)\\ f\left( x_2 \right)\\ \vdots \\ f\left( x_n \right) \end{bmatrix}
=
\begin{bmatrix}
\phi_{1} \left(x_{1}\right) & \phi_{1} \left( x_2 \right) & \cdots & \phi_{1} \left(x_n \right) \\
\phi_{2} \left(x_{1}\right) & \phi_{2} \left( x_2 \right) & \cdots & \phi_{2} \left(x_n \right) \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{n} \left(x_{1}\right) & \phi_{n} \left( x_2 \right) & \cdots & \phi_{n} \left(x_n \right)
\end{bmatrix}
\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix}
\iff
\vec{\mathbf{f}} = \Phi \, \vec{\boldsymbol{\alpha}}
\implies
\vec{\boldsymbol{\alpha}} = \Phi^{-1}\vec{\mathbf{f}},$$

where $\Phi_{ij} = \phi_i(x_j) = \phi\left(\left\|x_i - x_j \right\| \right)$.
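
As a concrete illustration of $\eqref{2}$, here is a minimal NumPy sketch assembling $\Phi$ and recovering the coefficients $\vec{\boldsymbol{\alpha}}$; the Gaussian kernel, the 1D points, and the shape parameter `eps` are assumptions made for this example, not part of the method above:

```python
import numpy as np

eps = 5.0                                    # shape parameter (assumed)
x = np.linspace(0.0, 1.0, 20)                # sample points x_i (assumed 1D)
r = np.abs(x[:, None] - x[None, :])          # r[i, j] = ||x_i - x_j||

Phi = np.exp(-(eps * r) ** 2)                # Gaussian RBF: Phi[i, j] = phi(r[i, j])

f = np.sin(2.0 * np.pi * x)                  # values of a test function at x
alpha = np.linalg.solve(Phi, f)              # alpha = Phi^{-1} f, as in (2)
```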

Applying the Laplace operator to the RBF expansion $\eqref{1}$ of a function $f$, we get

$$\Delta \vec{\mathbf{f}} = \Delta\Phi\, \vec{\boldsymbol{\alpha}},$$

where $\Delta\Phi_{ij} := \Delta\phi_i(x_j) = \Delta \phi\left(\left\|x_i - x_j \right\| \right)$. Substituting $\vec{\boldsymbol{\alpha}}$ from $\eqref{2}$, we get

$$\Delta \vec{\mathbf{f}} = \Delta \Phi\cdot \Phi^{-1} \cdot \vec{\mathbf{f}}.$$

Denoting $A = \Phi$ and $B = \Delta \Phi$, we write out the **differentiation matrix**:

$$D = \Delta \Phi\cdot\Phi^{-1} = B \cdot A^{-1} \implies D \vec{\mathbf{f}} = \Delta \vec{\mathbf{f}}$$

for **any** function $f$ evaluated at the points $\vec{\mathbf{x}}$.
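
To make the symptom from the question reproducible, here is a hedged 1D sketch (Gaussian kernel again, and the ordinary second derivative standing in for the Laplace-Beltrami operator) that assembles $D = B\cdot A^{-1}$ and prints the size of the spurious imaginary parts:

```python
import numpy as np

eps = 5.0
x = np.linspace(0.0, 1.0, 20)
r = np.abs(x[:, None] - x[None, :])

A = np.exp(-(eps * r) ** 2)                        # A = Phi
# 1D Laplacian of the Gaussian: (d^2/dx^2) exp(-(eps*r)^2)
B = (4.0 * eps**4 * r**2 - 2.0 * eps**2) * A       # B = Delta Phi

D = B @ np.linalg.inv(A)                           # D = B A^{-1}
lam = np.linalg.eigvals(D)
print("max |Im(lambda)| =", np.abs(lam.imag).max())  # typically nonzero here
```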

If $A$ and $B$ are symmetric, that does not mean that $B A^{-1}$ is also symmetric:

$$(B A^{-1})^\top = A^{-\top}B^\top = A^{-1} B.$$

The last matrix is equal to $B A^{-1}$ only if $A$ and $B$ commute (which, for symmetric matrices, is the same as their having a common set of eigenvectors).

A symmetric combination might be $B^{1/2} A^{-1} B^{1/2}$ or $A^{-1/2}BA^{-1/2}$, but I am not sure it would work as an approximation of $\Delta$. Here $B^{1/2}$ is the symmetric square root given by

$$
B = Q \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n) Q^\top \\
B^{1/2} = Q\operatorname{diag}\left(\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots, \sqrt{\lambda_n}\right) Q^\top
$$
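
A sketch of this construction (assuming $B$ is symmetric positive semidefinite so that the square root is real; in the question's setting this would need checking):

```python
import numpy as np

def sym_sqrt(B):
    """Symmetric square root B^{1/2} = Q diag(sqrt(lambda)) Q^T."""
    lam, Q = np.linalg.eigh(B)                       # B = Q diag(lam) Q^T
    return (Q * np.sqrt(np.clip(lam, 0.0, None))) @ Q.T

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)                        # a random SPD stand-in for A
N = rng.standard_normal((5, 5))
B = N @ N.T                                          # a random PSD stand-in for B

D_sym = sym_sqrt(B) @ np.linalg.inv(A) @ sym_sqrt(B) # B^{1/2} A^{-1} B^{1/2}
print(np.allclose(D_sym, D_sym.T))                   # True, up to round-off
```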

For the second question: many implementations, for example LAPACK, have routines for symmetric matrices that access only one half of the matrix (upper or lower). Thus no round-off can make the matrix asymmetric, since the routines never touch the other half. For example, eigenvalue solvers for tridiagonal matrices simply don't have an option to return complex-valued eigenvalues.
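
In NumPy terms, `numpy.linalg.eigvalsh` wraps such a LAPACK driver: with `UPLO='L'` it never reads the upper triangle, so round-off asymmetry there simply cannot produce complex eigenvalues. The matrices below are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
D = M + M.T                       # symmetric matrix
D[0, 1] += 1e-13                  # inject a tiny round-off-like asymmetry

# The symmetric driver only looks at the lower triangle, so the
# perturbation above is ignored and the eigenvalues come out real.
print(np.linalg.eigvalsh(D, UPLO='L'))
```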

Thank you for a great question. One problem is scattered-data interpolation; another is the approximation of derivatives. They are often solved using RBFs and finite differences, *respectively*.

I think it's too optimistic to expect to still have a well-posed RBF interpolation problem when using *derivatives* of the basis function. The class of RBF kernels that can successfully interpolate arbitrary data is limited, and it is not closed under differentiation. Their derivatives do appear in *Hermite* RBF interpolation over samples of a function *and* its derivatives; but the way you hope to use them, approximating the Laplacian from point evaluations of the function alone, is not guaranteed to work well. The lack of symmetry might be the least of your problems.

I'd suggest combining RBF interpolation with a finite-difference discrete Laplacian matrix. After all, differentiation is a local operation, whereas RBFs are meant to capture multi-scale trends in the data. The combined matrix would then be symmetric, I think. One possible reading of this suggestion is sketched below.
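
Every choice in this sketch (Gaussian weights, the neighbour count `k`, the symmetrization) is my own assumption rather than a prescription; the point is only that $L = \operatorname{diag}(W\mathbf 1) - W$ is symmetric positive semidefinite by construction:

```python
import numpy as np

def graph_laplacian(x, k=6, eps=2.0):
    """Symmetric discrete Laplacian L = diag(W 1) - W on a k-nearest-
    neighbour graph with Gaussian (RBF-style) edge weights."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    W = np.exp(-(eps * r) ** 2)
    far = np.argsort(r, axis=1)[:, k + 1:]     # all but self and k nearest
    np.put_along_axis(W, far, 0.0, axis=1)     # sparsify each row
    W = 0.5 * (W + W.T)                        # symmetrize the pattern
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W          # symmetric, PSD

# points on the unit sphere, as in the original test problem
rng = np.random.default_rng(2)
p = rng.standard_normal((200, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
L = graph_laplacian(p)
print(np.allclose(L, L.T))                     # True: real eigenvalues guaranteed
```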

I now see that I was explaining the wrong thing; we just arrived at what you already have, merely computed with a somewhat more stable algorithm. Indeed, in the non-orthogonal case you get the matrix $A$ as well. That's not a problem, but you should never look at it in the form $DA^{-1}$ (in your notation, the $D$ of this answer is the matrix $B = \Delta\Phi$). This is what we call a generalized eigenvalue problem, and it has to be written in the form

$$(D-\lambda A)\vec{v}=0$$

(notice that $A$ replaces the identity). That is logically sound because in this context $I$ is replaced by the metric. This kind of problem is routinely solved either by algorithms specific to generalized eigenvalue problems, or by using the Cholesky decomposition of the matrix $A$ (or, a bit less efficiently, using the eigenvalue decomposition of $A$, turning things around, and finishing with an eigenvalue decomposition of $D$).
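
With SciPy this is a single call: `scipy.linalg.eigh(B, A)` solves the symmetric-definite problem $(B - \lambda A)\vec v = 0$, doing the Cholesky reduction internally, and returns real eigenvalues. Reusing the 1D Gaussian example from the question's notation, with a tiny diagonal jitter added as an assumption of this sketch to keep $A$ numerically positive definite:

```python
import numpy as np
from scipy.linalg import eigh

eps = 5.0
x = np.linspace(0.0, 1.0, 20)
r = np.abs(x[:, None] - x[None, :])

Phi = np.exp(-(eps * r) ** 2)
A = Phi + 1e-10 * np.eye(x.size)                      # Phi, jittered to stay SPD
B = (4.0 * eps**4 * r**2 - 2.0 * eps**2) * Phi        # Delta Phi, symmetric

# generalized symmetric-definite eigenproblem (B - lambda A) v = 0
lam = eigh(B, A, eigvals_only=True)
print(lam.dtype)                  # float64: eigenvalues are real by construction
```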

This comes precisely from your orthogonality condition: because orthogonality is defined through $A$ instead of the identity (a different inner product), the eigenvectors won't be orthogonal in the traditional sense. That is what the Cholesky / $A$-eigenvalue decomposition does: it orthogonalizes the vectors in your metric. So,

$$A=LL^T$$

$$(L^{-1} D L^{-T} - \lambda I)(L^{T} \vec{v}) = 0$$
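
The same reduction spelled out by hand with NumPy (random symmetric $D$ and SPD $A$ stand in for the actual matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6.0 * np.eye(6)        # SPD metric A
N = rng.standard_normal((6, 6))
D = N + N.T                          # symmetric D (this answer's notation)

L = np.linalg.cholesky(A)            # A = L L^T
Y = np.linalg.solve(L, D)            # Y = L^{-1} D
C = np.linalg.solve(L, Y.T).T        # C = L^{-1} D L^{-T}, symmetric
lam, U = np.linalg.eigh(C)           # real eigenvalues of the whitened matrix
V = np.linalg.solve(L.T, U)          # back-transform: v = L^{-T} u

print(np.allclose(D @ V, (A @ V) * lam))   # (D - lambda A) v = 0 holds
```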

Sorry for the confusion before.
