Let $U_{1}, \ldots, U_{n}$ be a random sample of uniform random variables $U_i \sim \mathrm{Uniform}(0,1)$, and let $U_{(1)}, \ldots, U_{(n)}$ be the order statistics of the sample. The goal is to prove that:

$$

W = U_{(s)}-U_{(r)} \sim \textrm{Beta}(s-r, \, n - s + r + 1) \qquad 1 \leq r < s \leq n

$$

**My “proof”**

The joint pdf of $U_{(r)}, U_{(s)}$ is:

$$

f_{U_{(r)}, U_{(s)}} (u, v) = \frac{n!}{(r-1)!(s-1-r)!(n-s)!} u^{r-1} (v-u)^{s-1-r} (1 -v)^{n-s} \cdot \textbf{1}_{\{u < v\}}

$$

Consider the transformation:

$$

\begin{Bmatrix}

W = U_{(s)} - U_{(r)} & U_{(r)} = Z \\

Z = U_{(r)} & U_{(s)} = W + Z \\

\end{Bmatrix}

$$
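For completeness: the inverse map sends $(w,z)$ back to $(u,v) = (z, \, w+z)$, so the Jacobian matrix of the inverse transformation is

$$
J = \begin{pmatrix} \partial u / \partial w & \partial u / \partial z \\ \partial v / \partial w & \partial v / \partial z \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \lvert \det J \rvert = 1.
$$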

The absolute value of the Jacobian determinant is 1. Therefore, the joint pdf of the transformation is:

$$

f_{W,Z}(w,z) = \frac{n!}{(r-1)!(s-1-r)!(n-s)!} z^{r-1} w^{s-1-r} (1 - w - z)^{n-s}

$$

$$

f_W(w) = \int_{S} f_{W,Z}(w,z) \, \textrm{d}z

$$

For fixed $w$, we have $S = \{z \, : \, 0 \leq z \leq 1 – w\}$. Thus:

$$

f_W(w) = \frac{n!}{(r-1)!(s-1-r)!(n-s)!} w^{s-1-r} \int_{0}^{1-w} z^{r-1}(1-w-z)^{n-s} \, \textrm{d} z

$$

The problem is that I do not know how to evaluate the integral, but Wolfram Alpha does:

$$

\int_{0}^{1-w} z^{r-1}(1-w-z)^{n-s} \, \textrm{d} z = \frac{\Gamma(r)\Gamma(n-s+1)}{\Gamma(n+r-s+1)} (1-w)^{n+r-s}

$$
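As a numerical sanity check, the closed form can be compared against a simple midpoint-rule approximation of the integral (a stdlib-only sketch; the values of $r$, $s$, $n$, $w$ below are arbitrary):

```python
import math

# Numerically verify, for one arbitrary parameter choice, that
#   int_0^{1-w} z^(r-1) (1-w-z)^(n-s) dz
#     = Gamma(r) Gamma(n-s+1) / Gamma(n+r-s+1) * (1-w)^(n+r-s)
r, s, n, w = 2, 5, 8, 0.3
upper = 1 - w

# Midpoint-rule approximation of the left-hand side.
m = 200_000
h = upper / m
lhs = h * sum(
    ((k + 0.5) * h) ** (r - 1) * (upper - (k + 0.5) * h) ** (n - s)
    for k in range(m)
)

# Closed form reported by Wolfram Alpha.
rhs = math.gamma(r) * math.gamma(n - s + 1) / math.gamma(n + r - s + 1) * upper ** (n + r - s)

assert abs(lhs - rhs) < 1e-6
```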

From here it is easy to identify the distribution of $W$: rewriting the factorials as gamma functions shows that $f_W$ is exactly the $\textrm{Beta}(s-r, \, n-s+r+1)$ density.
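The claimed result can also be checked by simulation: the empirical mean and variance of $W$ should match the moments of $\textrm{Beta}(a,b)$ with $a = s-r$, $b = n-s+r+1$. A minimal sketch, with arbitrarily chosen parameters:

```python
import random
import statistics

# Simulate W = U_(s) - U_(r) and compare its empirical moments with those of
# Beta(a, b), where a = s - r and b = n - s + r + 1.
random.seed(0)
n, r, s = 10, 3, 7
a, b = s - r, n - s + r + 1

samples = []
for _ in range(100_000):
    u = sorted(random.random() for _ in range(n))
    samples.append(u[s - 1] - u[r - 1])  # order statistics are 1-indexed

emp_mean = statistics.fmean(samples)
emp_var = statistics.pvariance(samples)

beta_mean = a / (a + b)                           # = (s - r) / (n + 1)
beta_var = a * b / ((a + b) ** 2 * (a + b + 1))

assert abs(emp_mean - beta_mean) < 0.005
assert abs(emp_var - beta_var) < 0.005
```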

Initially I was going to ask how to evaluate the integral without the help of Wolfram Alpha, but then I realized that there may be a “better” proof that avoids it altogether. I tried to find one using several other transformations involving $W$, but without success.

**Edit**

I have corrected some mistakes I think I made:

1) The joint pdf I wrote was incorrect.

2) I wrote that the pdf of $W$ could be found as a convolution, but $U_{(r)}$ and $U_{(s)}$ are clearly dependent. What I actually did was an implicit change of variables.

One does not need to evaluate the integral explicitly. The change of variables $z=(1-w)v$ shows that $f_W(w)$ equals a constant factor $C$ times $w^{s-1-r}(1-w)^{n+r-s}$. This also yields the value of $C$, since $f_W$ must integrate to $1$.

(Or, after the change of variables, one can recognize the integral over $v$ from $0$ to $1$ as a Beta.)
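Spelled out, substituting $z=(1-w)v$ (so $\textrm{d}z = (1-w)\,\textrm{d}v$) in the integral above gives

$$
\int_{0}^{1-w} z^{r-1}(1-w-z)^{n-s} \, \textrm{d} z = (1-w)^{n+r-s} \int_{0}^{1} v^{r-1}(1-v)^{n-s} \, \textrm{d} v = (1-w)^{n+r-s} \, B(r, \, n-s+1),
$$

and $B(r, \, n-s+1) = \frac{\Gamma(r)\Gamma(n-s+1)}{\Gamma(n+r-s+1)}$, matching the Wolfram Alpha result.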

Consider this alternative proof:

One can generate $n$ uniform random variables on $[0,1]$ in the following equivalent way:

1. Generate $n+1$ uniform random variables on a circle of unit circumference.

2. Choose one of the points at random as the reference point (the ‘$0$’ point) and cut the circle there.

Therefore $U_{(s)}-U_{(r)}$ has the same distribution as $U_{(s-1)}-U_{(r-1)}$, and hence as $U_{(s-r)}-U_{(0)}=U_{(s-r)}-0=U_{(s-r)}$.

We already know that $U_{(s-r)} \sim \text{Beta}(s-r, \, n-(s-r)+1) = \text{Beta}(s-r, \, n-s+r+1)$, which completes the proof.
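The rotation argument can itself be checked by simulation: $U_{(s)}-U_{(r)}$ and $U_{(s-r)}$ should have the same distribution, so in particular matching moments. A sketch with arbitrary parameters:

```python
import random
import statistics

# Check the rotation argument empirically: U_(s) - U_(r) and U_(s-r)
# should be identically distributed.
random.seed(1)
n, r, s = 10, 3, 7

diffs, singles = [], []
for _ in range(100_000):
    u = sorted(random.random() for _ in range(n))
    diffs.append(u[s - 1] - u[r - 1])       # U_(s) - U_(r)
    v = sorted(random.random() for _ in range(n))
    singles.append(v[s - r - 1])            # U_(s-r) from a fresh sample

# Matching first and second moments (a necessary condition for equality
# in distribution).
assert abs(statistics.fmean(diffs) - statistics.fmean(singles)) < 0.005
assert abs(statistics.pvariance(diffs) - statistics.pvariance(singles)) < 0.005
```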
