Articles on regularization

Is it true that if $f\in W^{-1,p}(\mathbb{R}^n)$, then $\Gamma\star f\in W^{1,p}(\mathbb{R}^n)$?

I am trying to understand the following paper. On page 1191, at the beginning of the proof of Theorem 2.9, the authors consider the convolution $$v=\Gamma\star f$$ They claim that $v\in W^{1,p}(\mathbb{R}^n)$. Does anyone know why this is true with $f$ being only in $W^{-1,p}(\mathbb{R}^n)$? Update: Maybe this can throw some light upon the matter. […]
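A hedged sketch of one standard route (assuming, as is typical in such papers, that $\Gamma$ is the fundamental solution of the Laplacian and $f$ is compactly supported): by the structure theorem for negative Sobolev spaces, write

```latex
f = f_0 + \sum_{i=1}^{n}\partial_i f_i,
\qquad f_0, f_1, \dots, f_n \in L^p(\mathbb{R}^n),
\qquad\text{so}\qquad
\partial_j v
  = \partial_j\Gamma \star f_0
  + \sum_{i=1}^{n} \partial_i\partial_j\Gamma \star f_i .
```

The first term is in $L^p$ by Young's inequality (using local integrability of $\partial_j\Gamma$ and the compact support), and each $\partial_i\partial_j\Gamma$ is a Calderón–Zygmund kernel, hence bounded on $L^p$ for $1<p<\infty$. This is only a plausible reconstruction, not necessarily the authors' intended argument.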

regularization of sum $n \ln(n)$

I was testing a few summations using my previously described methods when I found an error in my reasoning. I'm really hoping someone can help me out. The function I was evaluating was $\sum_{n=1}^{\infty} n\ln(n)$, which turns out to be $-\zeta'(-1)$. This made me hope I could confirm my previous summation method for […]
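For context, the identification with $-\zeta'(-1)$ comes from differentiating the Dirichlet series term by term and formally setting $s=-1$:

```latex
\zeta(s)=\sum_{n=1}^{\infty} n^{-s},
\qquad
\zeta'(s)=-\sum_{n=1}^{\infty} \frac{\ln n}{n^{s}},
\qquad\text{so formally}\quad
\sum_{n=1}^{\infty} n\ln n \;``\!=\!''\; -\zeta'(-1),
```

where the right-hand side is defined by analytic continuation; numerically $-\zeta'(-1) = \ln A - \tfrac{1}{12} \approx 0.1654$, with $A$ the Glaisher–Kinkelin constant.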

The sum $\frac{1}{\ln(2)}+\frac{1}{\ln(3)}+\frac{1}{\ln(4)}+…$ is divergent. Find the regularized evaluation

By considering the integral zeta function $$F(s)=s+\frac{1}{2^s\ln(2)}+\frac{1}{3^s\ln(3)}+\frac{1}{4^s\ln(4)}+\dots$$ evaluate $$\frac{1}{\ln(2)}+\frac{1}{\ln(3)}+\frac{1}{\ln(4)}+\dots$$ EDIT: There has clearly been much confusion here. I am asking for the analytic continuation of the integral zeta function at $0$. I am asking for the sum of the series in the sense that $$1+2+3+\dots=-\frac{1}{12}$$
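One observation that may help (a sketch, not a full answer): differentiating $F$ term by term collapses the logarithms, so $F$ is an antiderivative tied to $\zeta$ itself:

```latex
\frac{d}{ds}\,\frac{1}{n^{s}\ln n} = -\,n^{-s}
\quad\Longrightarrow\quad
F'(s) = 1 - \sum_{n=2}^{\infty} n^{-s} = 2-\zeta(s),
```

so the regularized value $F(0)$ can be studied by integrating $2-\zeta(s)$ along a path that avoids the pole of $\zeta$ at $s=1$.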

Approximation in Sobolev Spaces

Consider the following proof in Lawrence Evans' book 'Partial Differential Equations': How does it follow that $v^{\epsilon} \in C^{\infty}(\bar{V})$? I can see how $v^{\epsilon} \in C^{\infty}(V)$ by using the translations, but I'm having difficulty seeing how this extends to $\bar{V}$, since it says that $u_{\epsilon}(x) := u(x^{\epsilon}) \text{ for } x \text{ in } V$, […]

Regulators and uniqueness

Does the regularization of a divergent infinite sum yield a unique value? I.e., do different regularization schemes acting on the same infinite sum produce exactly the same value, independent of the regulator? What, exactly, do these values mean? Or what are they? My understanding is that they are not "convergent" values. (Sorry in advance for […]

Missing term in series expansion

I asked a similar question before, but now I can formulate it more concretely. I am trying to perform an expansion of the function $$f(x) = \sum_{n=1}^{\infty} \frac{K_2(nx)}{n^2 x^2},$$ for $x \ll 1$. Here, $K_2(x)$ is the modified Bessel function of the second kind. This series is a result of solving the integral $$f(x) = […]

Reference request: Introduction to mathematical theory of Regularization

I asked the question “Are there books on Regularization at an Introductory level?” at physics.SE. I was informed that “there is (…) a mathematical theory of regularization (Cesàro, Borel, Ramanujan summations and many others) that is interesting per se”. Question: Can someone advise me on how to study one or more of the above topics […]

Why are additional constraint and penalty term equivalent in ridge regression?

Tikhonov regularization (or ridge regression) adds the constraint that $\|\beta\|^2$, the squared $L^2$-norm of the parameter vector, is not greater than a given value (say $c$). Equivalently, one may solve an unconstrained minimization of the least-squares penalty with $\alpha\|\beta\|^2$ added, where $\alpha$ is a constant (this is the Lagrangian form of the constrained problem). The above […]
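The equivalence can be checked numerically. A minimal numpy sketch on toy data (all names and data here are my own, for illustration): solve the Lagrangian form for some $\alpha$, set $c=\|\beta(\alpha)\|^2$, and verify that no $\beta$ inside that ball achieves a smaller least-squares loss.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.normal(size=50)
alpha = 2.0

# Lagrangian (penalized) ridge solution: (X^T X + alpha I)^{-1} X^T y
beta_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)
c = beta_ridge @ beta_ridge  # matching constraint level: ||beta||^2 <= c

def ls_loss(b):
    r = X @ b - y
    return r @ r

# For any b in the ball ||b||^2 <= c, the penalized objective gives
# ls_loss(b) >= ls_loss(beta_ridge) + alpha*(c - ||b||^2) >= ls_loss(beta_ridge).
best = ls_loss(beta_ridge)
for _ in range(1000):
    b = rng.normal(size=3)
    b *= np.sqrt(c) / np.linalg.norm(b) * rng.uniform()  # random point in the ball
    assert ls_loss(b) >= best - 1e-9
```

The Monte Carlo check is of course not a proof, but the comment above the loop is exactly the one-line KKT argument behind the equivalence.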

How to find $A$ from $Ax=b$?

I am aware of inverse problems of the form $Ax=b$, where the matrix $A$ and the vector $b$ are known and we need to estimate the vector $x$. Are there any formal methods to find the matrix $A$ given $b$ and $x$? I can understand how ill-posed it may be, but are there any studies about it? […]
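One standard construction (a sketch with made-up toy vectors): the minimum-Frobenius-norm matrix mapping $x$ to $b$ is $A = b\,x^{\top}/(x^{\top}x)$, i.e. $b$ times the Moore–Penrose pseudoinverse of $x$; the ill-posedness shows up as the freedom to add any matrix that annihilates $x$.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

# Minimum-Frobenius-norm solution: A = b x^T / (x^T x)
# (b times the Moore-Penrose pseudoinverse of x).
A = np.outer(b, x) / (x @ x)
assert np.allclose(A @ x, b)

# Ill-posedness: any matrix N whose rows are orthogonal to x
# can be added without changing the product with x.
N = np.outer(np.array([7.0, -3.0]), np.array([2.0, -1.0, 0.0]))
assert np.allclose(N @ x, 0)
assert np.allclose((A + N) @ x, b)
```

So without extra structure (more $(x, b)$ pairs, or a regularizer selecting, e.g., the minimum-norm $A$), the problem has infinitely many solutions.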

Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist? Why?

Does $(\mathbf A+\epsilon \mathbf I)^{-1}$ always exist, given that $\mathbf A$ is a square, positive-semidefinite (and possibly singular) matrix and $\epsilon$ is a small positive number? I want to use this to regularize a sample covariance matrix ($\mathbf A = \Sigma$) in practice, so that I can compute the inverse, which I need to […]
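Yes: a symmetric positive-semidefinite $\mathbf A$ has eigenvalues $\ge 0$, so the eigenvalues of $\mathbf A+\epsilon\mathbf I$ are $\ge \epsilon > 0$ and the matrix is invertible. A small numpy sketch with a deliberately rank-deficient sample covariance (toy data of my own):

```python
import numpy as np

# Rank-deficient sample covariance: fewer samples than dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 5))        # 3 samples in 5 dimensions
A = np.cov(X, rowvar=False)        # 5x5, rank <= 2, hence singular
eps = 1e-3

assert np.linalg.matrix_rank(A) < 5          # A itself is not invertible

# A is symmetric PSD, so eig(A + eps*I) >= eps > 0 (up to rounding):
w = np.linalg.eigvalsh(A + eps * np.eye(5))
assert w.min() >= eps - 1e-9

A_inv = np.linalg.inv(A + eps * np.eye(5))   # now well-defined
assert np.allclose((A + eps * np.eye(5)) @ A_inv, np.eye(5))
```

Note the positivity assumption matters: for a general (non-PSD) $\mathbf A$, the shift can land exactly on an eigenvalue $-\epsilon$ and the inverse fails to exist.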