Articles on the law of large numbers

Law of Large Numbers, a confusion

According to the Law of Large Numbers, if I throw a coin 1000 times, approximately 500 throws will be heads and 500 tails. Suppose I throw the coin 700 times and get 700 heads. Can I say that in the next 300 throws the probability of getting tails will be higher than the probability of getting […]
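
The resolution is the independence of the tosses: a streak has no influence on later throws. A minimal simulation sketch (a fair coin is assumed; the streak length 5 and the trial count are arbitrary choices):

```python
import random

# Minimal sketch (fair coin assumed; streak length and trial count are
# arbitrary): among sequences that open with a streak of heads, the next
# toss is still heads about half the time -- the streak changes nothing.
random.seed(0)
STREAK = 5
next_after_streak = []
for _ in range(200_000):
    tosses = [random.random() < 0.5 for _ in range(STREAK + 1)]
    if all(tosses[:STREAK]):                # first STREAK tosses all heads
        next_after_streak.append(tosses[STREAK])
print(len(next_after_streak))                            # ~200000 / 2**5
print(sum(next_after_streak) / len(next_after_streak))   # ~0.5, not biased toward tails
```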

Law of Large Numbers – utility/difficulty of various versions.

This may or may not be an answer to Is there an easy proof that the set of $x \in [0,1]$ for which the limit of the proportion of 1's in the binary expansion of $x$ does not exist has measure zero?, depending on how easy a proof has to be to count as easy. The law of large […]
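
For intuition on the SLLN connection: the binary digits of a uniform $x \in [0,1]$ are i.i.d. Bernoulli(1/2), so the running proportion of 1's converges to $1/2$ for almost every $x$, and the exceptional set in the linked question has measure zero. A minimal sketch (the checkpoint sizes are arbitrary):

```python
import random

# Minimal sketch: draw the binary digits of a "random x" as i.i.d. fair
# bits and watch the running proportion of 1's settle near 1/2.
random.seed(1)
checkpoints = [10, 100, 1_000, 10_000, 100_000]
bits = [random.getrandbits(1) for _ in range(max(checkpoints))]
for n in checkpoints:
    print(n, sum(bits[:n]) / n)   # proportion of 1's drifts toward 0.5
```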

Kolmogorov's sufficient and necessary condition for the SLLN – What about pairwise uncorrelated RVs?

Kolmogorov proved that for independent (not necessarily identically distributed) random variables $\{X_n\}_{n\ge0}\subseteq \mathcal L^2$ with $\mathrm{Var}(X_n)=\sigma^2_n$ and, without loss of generality, $E[X_n]=0$: if $\sum_{n=1}^\infty \frac{\sigma^2_n}{n^2} \lt \infty$ then the SLLN holds, that is: $$\frac1n\sum_{k=0}^n X_k \rightarrow 0 \,\,\,\,\,\,\,\,\,\,\,\,\,\text{a.s.}$$ Something different: it is also known that if $\sup_{n\ge0}\sigma^2_n =: v \lt \infty$ and the variables are pairwise uncorrelated […]
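
A minimal numerical sketch of Kolmogorov's criterion (the variance profile $\sigma_n^2 = \sqrt{n}$ is my arbitrary choice; it is unbounded yet satisfies $\sum_n \sigma_n^2/n^2 < \infty$):

```python
import numpy as np

# Minimal sketch of Kolmogorov's criterion: independent centered normals
# with sigma_n^2 = sqrt(n) (my choice), so sum sigma_n^2 / n^2 < infinity
# and the running mean should go to 0 almost surely.
rng = np.random.default_rng(0)
n = 100_000
sigma2 = np.arange(1, n + 1) ** 0.5          # sigma_n^2 = sqrt(n)
X = rng.normal(0.0, np.sqrt(sigma2))         # independent, mean 0
running_mean = np.cumsum(X) / np.arange(1, n + 1)
print(running_mean[[99, 9_999, 99_999]])     # shrinks toward 0
```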

Help understanding the difference between the LLNs and CLT?

(Sorry ahead of time, I don’t know LaTeX…) I’ve been studying some basic probability theory recently, and I’m having a little trouble understanding why the LLNs and CLT aren’t contradictory. Given IID variables {X1,X2,…} each with mean M and variance V, both LLNs seem to say that the average of the first N variables in […]
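
The short answer is that the two theorems look at different scales: the LLN says the sample mean itself converges to M, while the CLT says the deviation from M, magnified by sqrt(N), is approximately normal with variance V. A minimal sketch of the two scalings (Exponential(1) variables, so M = V = 1, are an arbitrary choice):

```python
import numpy as np

# Minimal sketch: Exponential(1) variables, so M = V = 1 (arbitrary choice).
# The sample mean collapses onto M (LLN) while sqrt(N)*(mean - M) keeps a
# spread of about sqrt(V) (CLT) -- no contradiction, just different scales.
rng = np.random.default_rng(0)
M = 1.0
for N in (100, 1_000, 10_000):
    means = rng.exponential(1.0, size=(2_000, N)).mean(axis=1)
    print(N, means.std(), (np.sqrt(N) * (means - M)).std())
    # means.std() -> 0, while the rescaled spread stays near sqrt(V) = 1
```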

Distribution of sums of inverses of random variables uniformly distributed on $[0,1]$

If I have $N$ random variables (denoted below as $X_i$) uniformly distributed on $[0,1]$, $X_i = \mathrm{rand}[0,1]$, then the sum $$ S_N = \frac{1}{N}\sum_{i=1}^N\frac{1}{2X_i-1} $$ seems to follow a Cauchy distribution with $\gamma = \pi/2$. I found this by trial and error: $$ f_{S_N}(x)=\frac{2}{4x^2 + \pi^2}. $$ The plot of the distribution along […]
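
The guess is consistent with the tails: $P(|1/(2X_i-1)| > t) = 1/t$, which puts the summands in the domain of attraction of a symmetric Cauchy law, and matching the tail constant forces the scale $\gamma = \pi/2$. A minimal simulation check (the sample sizes are arbitrary), using the fact that a Cauchy$(0,\gamma)$ puts probability $1/2$ on $\{|S|\le\gamma\}$:

```python
import numpy as np

# Minimal check of the Cauchy guess: for a Cauchy(0, gamma) distribution,
# P(|S| <= gamma) = 1/2, so with gamma = pi/2 about half of the simulated
# S_N values should land in [-pi/2, pi/2]. N and `samples` are arbitrary.
rng = np.random.default_rng(0)
N, samples = 1_000, 20_000
X = rng.uniform(0.0, 1.0, size=(samples, N))
S = (1.0 / (2.0 * X - 1.0)).mean(axis=1)
print(np.mean(np.abs(S) <= np.pi / 2))   # close to 0.5
print(np.median(np.abs(S)), np.pi / 2)   # median of |S| vs gamma
```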

Weak/strong law of large numbers for dependent variables with bounded covariance

Let $(X_i)_{i\in\mathbb{N}}$ be a sequence of $L^2$ random variables with expected value $m$ for all $i$. Let $S_n=\sum_{i=1}^n X_i$ and $|\mathrm{Cov}(X_i,X_j)|\leq\epsilon_{|i-j|}$ for finite, non-negative constants $\epsilon_k$. Show that: (1) If $\lim_{n\to\infty} \epsilon_n=0$ then $S_n/n\to m$ in $L^2$ and in probability. (2) If $\sum_{k=1}^\infty \epsilon_k<\infty$, then $\mathrm{Var}(S_n/n)$ is of order $O(1/n)$ and $S_n/n\to m$ almost surely. (1) […]
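
For part (2), grouping the covariance terms by the gap $k=|i-j|$ ($n$ diagonal terms, and at most $2(n-k)$ ordered pairs at gap $k\ge1$) gives the $O(1/n)$ bound directly:

$$\mathrm{Var}\!\left(\frac{S_n}{n}\right)=\frac{1}{n^2}\sum_{i,j=1}^{n}\mathrm{Cov}(X_i,X_j)\le\frac{1}{n^2}\left(n\,\epsilon_0+2\sum_{k=1}^{n-1}(n-k)\,\epsilon_k\right)\le\frac{1}{n}\left(\epsilon_0+2\sum_{k=1}^{\infty}\epsilon_k\right).$$

From here the usual route to almost sure convergence is Chebyshev plus Borel–Cantelli along the subsequence $n=m^2$ (where the variance bound is summable), then control of the fluctuations between consecutive squares.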

Why is the weak law of large numbers still alive?

I know the difference between the WLLN and the SLLN in terms of the type of convergence. But since statistical textbooks give the same sufficient conditions for the two theorems, I would think that we do not need the WLLN anymore. I also know that there are examples for which the WLLN holds but the SLLN does not. Then, […]
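
One concrete reason the WLLN earns its keep: its conclusion can hold where the SLLN's fails. A sketch of a standard example (my reconstruction; the constants are the usual ones but worth verifying): take independent $X_n$, $n\ge2$, with

$$P(X_n=\pm n)=\frac{1}{2\,n\log n},\qquad P(X_n=0)=1-\frac{1}{n\log n}.$$

Then $\mathrm{Var}(X_n)=n/\log n$, so $\mathrm{Var}(S_n/n)\approx 1/(2\log n)\to 0$ and Chebyshev gives the WLLN; but $\sum_n P(|X_n|=n)=\sum_n \frac{1}{n\log n}=\infty$, so by the second Borel–Cantelli lemma $|X_n|=n$ infinitely often almost surely, which rules out $S_n/n\to 0$ a.s.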

If $(X_n)$ is i.i.d. and $ \frac1n\sum\limits_{k=1}^{n} {X_k}\to Y$ almost surely then $X_1$ is integrable (converse of SLLN)

Let $(\Omega,\mathcal F,P)$ be a finite measure space. Let $X_n:\Omega \rightarrow \mathbb R$ be a sequence of i.i.d. random variables. I need to prove that if $n^{-1}\sum_{k=1}^{n} X_k$ converges almost surely to $Y$, then all the $X_k$ have an expectation. If I understand correctly, "$X_k$ has an expectation" means $X_k \in \mathcal L^1(\Omega)$. And […]
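
For orientation, the standard route (a sketch, not a full solution): from $S_n/n \to Y$ a.s.,

$$\frac{X_n}{n}=\frac{S_n}{n}-\frac{n-1}{n}\cdot\frac{S_{n-1}}{n-1}\xrightarrow{\text{a.s.}}Y-Y=0,$$

so $P(|X_n|\ge n\ \text{i.o.})=0$. Since the events $\{|X_n|\ge n\}$ are independent, the second Borel–Cantelli lemma forces $\sum_n P(|X_n|\ge n)<\infty$, i.e. $\sum_n P(|X_1|\ge n)<\infty$ by identical distribution, and that sum being finite is equivalent to $E|X_1|<\infty$.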

Sums of independent Poisson random variables

The question is the following: the $X_n$ are independent Poisson random variables with expectations $\lambda_n$, such that the $\lambda_n$ sum to infinity. Then, if $S_n = \sum_{i=1}^n X_i$, I have to show that $\frac{S_n}{ES_n} \to 1$ almost surely. I started out on this problem by considering how to use the Borel–Cantelli lemma in a good way. […]
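
A minimal numerical sketch of the claim ($\lambda_n = 1/\sqrt{n}$ is an arbitrary choice whose sum diverges):

```python
import numpy as np

# Minimal sketch: lambda_n = 1/sqrt(n) is an arbitrary choice with a
# divergent sum; the ratio S_n / E[S_n] should drift toward 1.
rng = np.random.default_rng(0)
n = 1_000_000
lam = 1.0 / np.sqrt(np.arange(1, n + 1))
X = rng.poisson(lam)                     # independent Poisson(lambda_n)
ratio = np.cumsum(X) / np.cumsum(lam)    # S_n / E[S_n]
print(ratio[[999, 99_999, 999_999]])     # approaches 1
```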

What happens if I toss a coin with a decreasing probability of getting a head?

Last night, while I was trying to sleep, I found myself stuck on a simple statistics problem. Let’s imagine we have a “magical coin”, which is completely identical to a normal coin except for one thing: every time you toss the coin, the probability of getting a head halves. So at $t = 1$ we […]
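
A minimal sketch, assuming the head probability starts at $1/2$ and halves with every toss, i.e. $p_t = 2^{-t}$: since $\sum_t p_t = 1 < \infty$, Borel–Cantelli gives only finitely many heads almost surely, and the expected total number of heads is exactly $1$.

```python
import random

# Minimal sketch of the magical coin, assuming the head probability starts
# at 1/2 and halves on every toss: p_t = 2**-t. Tosses beyond 60 have
# negligible head probability, so truncating there loses nothing visible.
random.seed(0)

def total_heads(tosses=60):
    return sum(random.random() < 2.0 ** -t for t in range(1, tosses + 1))

results = [total_heads() for _ in range(100_000)]
print(sum(results) / len(results))  # near 1, the expected total sum 2**-t
print(max(results))                 # only ever a handful of heads
```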