Articles on measure theory

Independence of two limits

Let $(X_n)$ and $(Y_n)$ be two sequences of random variables, independent of each other. Suppose $(X_n)$ and $(Y_n)$ converge in distribution: $(X_n)$ tends to $X$, and $(Y_n)$ tends to $Y$. Intuitively $X$ and $Y$ are independent, but is this true? How can I prove it? Thank you very much.
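One standard sketch (assuming, as the question states, that the coordinates of each pair $(X_n, Y_n)$ are independent) goes through characteristic functions:

```latex
% By independence the joint characteristic function factors:
\varphi_{(X_n,Y_n)}(s,t)
  = \mathbb{E}\, e^{i(sX_n + tY_n)}
  = \varphi_{X_n}(s)\,\varphi_{Y_n}(t)
  \xrightarrow[n \to \infty]{} \varphi_X(s)\,\varphi_Y(t).
% The pointwise limit is the characteristic function of a pair of
% independent variables distributed as X and Y, so by Levy's
% continuity theorem (X_n, Y_n) converges weakly to such a pair.
```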

The Lebesgue differentiation Theorem for Radon measures

I am looking for references on the generalization of the Lebesgue differentiation theorem to Radon measures. I would also like to know about results that give information on uniformity in the limit of the traditional theorem. I have heard of Besicovitch's covering lemmas, but I have not seen them yet.

A probability function is determined on a dense set: where is density used in the following proof?

Consider the following theorem and proof from Resnick's book A Probability Path. I cannot really see where the assumption that $D$ is dense in $\mathbb R$ is used. Can you enlighten me? Is it needed for $(8.2)$ […]

Is the Lebesgue measure of the boundary of a bounded Lipschitz domain in $\mathbb R^n$ zero?

I guess the answer is yes, but I can't find a reference for that. Could someone give me a reference for the answer?

Finite content which is not a pre-measure

I've just run into an apparent contradiction, and it would be great if someone could explain where I'm going wrong. A basic theorem in measure theory states that for a finite content $\mu$ on a ring of subsets (of some set $X$), $\mu$ is a pre-measure if and only if for every sequence […]

Pointwise limit of convolution

Suppose $\omega$ is the standard mollifier on $\mathbb R$, and let $\omega_{\epsilon}(x) := \frac{1}{\epsilon} \omega\left(\frac{x}{\epsilon}\right)$. For $0 < t_{1} < t_{2}$ the following result is to be proven: $$ \lim_{\epsilon \to 0} \int_{0}^{t} (\omega_{\epsilon}(s-t_1) - \omega_{\epsilon}(s-t_2))\,ds = \chi_{[t_1,t_2]}(t), $$ where $\chi_{[t_1,t_2]}$ is the characteristic function of the interval $[t_1, t_2]$. I couldn't prove the […]
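Not a proof, but a quick numerical sanity check of the claimed limit. This sketch assumes the usual bump-function mollifier $\omega(x) = C e^{-1/(1-x^2)}$ on $(-1,1)$ and zero elsewhere; the names `mollifier`, `w_eps`, and `F` are my own:

```python
import numpy as np

# Numerical sanity check (not a proof) of the claimed limit, using the
# standard mollifier w(x) = C * exp(-1/(1 - x^2)) for |x| < 1, 0 otherwise.

def mollifier(x):
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def integrate(y, x):
    # Trapezoidal rule on a given grid.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Normalizing constant C so the mollifier has unit mass.
xs = np.linspace(-1.0, 1.0, 400001)
C = 1.0 / integrate(mollifier(xs), xs)

def w_eps(x, eps):
    return (C / eps) * mollifier(x / eps)

t1, t2, eps = 1.0, 2.0, 0.01

def F(t):
    """Integral of w_eps(s - t1) - w_eps(s - t2) over [0, t]."""
    s = np.linspace(0.0, t, 400001)
    return integrate(w_eps(s - t1, eps) - w_eps(s - t2, eps), s)

print(F(1.5))  # close to 1: t1 < t < t2
print(F(0.5))  # close to 0: t < t1
print(F(3.0))  # close to 0: t > t2
```

For small $\epsilon$ each bump has total mass 1, so the integral picks up the first bump (and cancels the second) exactly when $t_1 < t < t_2$.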

Haar Measure on Locally Compact monoids

I have been reading about Haar measure, and we know that every locally compact Hausdorff group admits a Haar measure. Is the same true for semigroups with an identity $e$ (monoids)? If not, is there a class of semigroups that admits a Haar measure? Any help will be appreciated.

The Dirac delta does not belong to $L^2$

I need to prove that Dirac's delta does not belong to $L^2(\mathbb{R})$. First, I found the following definition of Dirac's delta: $\delta : D(\mathbb R) \to \mathbb R$ is defined by $\langle \delta, \varphi \rangle = \int_{-\infty}^{+\infty} \varphi(x)\delta(x)\,dx = \varphi(0)$, with $\delta(x) = \begin{cases} 1, & x = 0\\ 0, & x \ne 0. \end{cases}$ The space $L^2(\mathbb{R}) = \{f : f \text{ is measurable and } \|f\|_{2} < +\infty\}$. I'm […]

Will adding a constant to a random variable change its distribution?

Suppose I have a random variable $X$, and to this I add a constant $c > 0$. Will $X + c$ have a different distribution? From my intuition it seems so, but I am unable to prove it from a measure-theoretic point of view. Does the proof require measure theory? Thanks!
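One sketch via distribution functions (using nothing beyond the definition of a CDF):

```latex
F_{X+c}(x) = \mathbb{P}(X + c \le x) = \mathbb{P}(X \le x - c) = F_X(x - c).
% If X + c and X had the same distribution, then F_X(x) = F_X(x - c)
% for all x, so F_X would be periodic with period c; iterating,
% F_X(x) = F_X(x - kc) -> 0 as k -> infinity, contradicting
% F_X(x) -> 1 as x -> +infinity. Hence the distribution changes
% for every c != 0.
```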

Measures induced by integral and by measurable mapping

Suppose $(X, \mathcal{F}, u)$ is a measure space and $f: X \rightarrow \mathbb{R}$ is a measurable mapping (nonnegative, say). Then $f$ induces a measure $v_f$ on $(X, \mathcal{F})$ by $v_f(A) := \int_A f\,du$. If $g: X \rightarrow X$ is a measurable mapping, then $g$ induces a measure $w_g$ on $(X, \mathcal{F})$ by $w_g(A) := u(g^{-1}(A))$. My questions are: are there some relations […]
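A toy finite example (my own illustration; all names and values are made up) showing the two constructions side by side, with $u$ a weighted counting measure on a three-point set:

```python
# A toy finite measure space illustrating both induced measures.
X = [0, 1, 2]
u = {0: 1.0, 1: 2.0, 2: 0.5}   # measure of each singleton
f = {0: 3.0, 1: 0.0, 2: 4.0}   # a nonnegative measurable f: X -> R
g = {0: 1, 1: 1, 2: 2}         # a measurable map g: X -> X

def v_f(A):
    """v_f(A) = integral over A of f du (f acts as a density)."""
    return sum(f[x] * u[x] for x in A)

def w_g(A):
    """w_g(A) = u(g^{-1}(A)), the pushforward of u under g."""
    return sum(u[x] for x in X if g[x] in A)

print(v_f({0, 2}))  # 3*1.0 + 4*0.5 = 5.0
print(w_g({1}))     # u({0, 1}) = 3.0
```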