
An event with probability $p$ of success is executed $\frac{1}{p}$ times. For example, if $p=5\%$, the event is executed $20$ times.

The Expected Value for the total number of trials needed to get one success is $\frac{1}{p}$. In this case, it’s $20$.

What confuses me is that, as $p$ approaches zero, the probability of having at least one success in the first $\frac{1}{p}$ trials approaches $1-\frac{1}{e}$, or about $63\%$. That is: $P$(at least $1$ success in all $\frac{1}{p}$ trials) is about $63\%$.


This $63\%$ is higher than $50\%$. It seems to suggest that, if I take all $\frac{1}{p}$ trials and consider them as one big event, and do this big event multiple times, I’d get more successes than failures. But on the other hand, since $\frac{1}{p}$ times is the EV mentioned earlier, shouldn’t the big event have an equal chance of being a success or a failure?
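The limit claimed in the question can be checked numerically. Here is a minimal Python sketch (my own check, not part of the original post):

```python
import math

# P(at least one success in 1/p trials) = 1 - (1-p)^(1/p),
# which approaches 1 - 1/e as p -> 0+.
for p in [0.5, 0.05, 0.005, 0.0005]:
    n = round(1 / p)                 # number of trials
    prob = 1 - (1 - p) ** n          # P(at least one success)
    print(f"p = {p}: P(>=1 success in {n} trials) = {prob:.4f}")

print(f"limit: 1 - 1/e = {1 - 1 / math.e:.4f}")
```

Already for $p=5\%$ this gives about $0.64$, close to the limiting value $1-1/e\approx 0.6321$.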


Let’s take your example of an event that has a $p=5\%$ chance of success, and repeat it until it succeeds. Let’s call this one *experiment*, and let’s call the number of times you had to repeat the event in one experiment the number of *runs*. And let’s repeat the experiment many, many times.

What can we say about the **average** number of runs you have to do for one experiment? Well, that depends on what you mean by average:

- Say we want the *mean* number of runs. This would be the weighted infinite sum $(.05\cdot 1) + (.95\cdot .05\cdot 2) + (.95^2\cdot .05\cdot 3) + \dots = 20$. **This is called the expected value**, or the “expected number of runs.”
- Say we want the *median* number of runs. This would be the number of runs at which you have exactly a 50% chance of succeeding at or before that point. In other words, the number of runs $x$ for which $1-.95^x = 0.5$, which is $x = \log_{0.95}{0.5} \approx 13.5$.

Thus, if you were to repeat this experiment many times, you’d find that you’d have to do an average (mean) of 20 trials before getting a success. However, in any given instance of the experiment, you’d most likely finish by the 14th trial. The rare cases where you need way, way more runs are what bump up the expected value *(e.g., approximately 0.6% of the time, you’ll need more than 100 runs before succeeding!)*.
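The three numbers in this answer (mean 20, median $\approx 13.5$, and the $\approx 0.6\%$ tail beyond 100 runs) can be reproduced with a short Python sketch (my own check, not from the answer):

```python
import math

p = 0.05
mean = 1 / p                                        # expected number of runs
# median: smallest whole number of runs x with 1 - (1-p)^x >= 0.5
median = math.ceil(math.log(0.5) / math.log(1 - p))
tail = (1 - p) ** 100                               # P(more than 100 runs needed)
print(mean, median, tail)
```

This prints a mean of 20 runs, a median of 14 runs, and a tail probability of about 0.0059.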

The number of trials until the first success has expected value $1/p$, but the distribution is skewed: in particular, much of that $1/p$ may be contributed by cases where it takes many more than $1/p$ trials until the first success.

Maybe, rather than a case with $p$ very small, it is easier to visualize the case of $p = 1/2$. The first success comes on the first trial with probability $1/2$, on the second with probability $1/4$, etc. So the probability of at least one success in the first two trials is $3/4$. The expected number of trials until the first success is $1/2 + 2/4 + 3/8 + \ldots = 2$.
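A Monte Carlo sketch of this $p = 1/2$ case (my own illustration, assuming a fair-coin trial):

```python
import random

random.seed(0)
N = 100_000
trials_needed = []
for _ in range(N):
    t = 1
    while random.random() >= 0.5:    # each trial succeeds with prob 1/2
        t += 1
    trials_needed.append(t)

mean = sum(trials_needed) / N                        # should be near 2
within_two = sum(t <= 2 for t in trials_needed) / N  # should be near 3/4
print(mean, within_two)
```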

Consider the “big experiment” consisting of $\frac{1}{p}$ independent trials, and let $X$ be the $\{0,1\}$ random variable indicating whether there has been at least one success in total (and $E$ be the corresponding event). What you say is essentially that when $p\to 0^+$, $$\mathbb{E} X = \mathbb{P}\{X=1\} = \mathbb{P}E \xrightarrow[p\to0^+]{} 1-\frac{1}{e}\simeq 0.63 > \frac{1}{2}$$

The thing is that the expected **number** of successes $Y$ during the “big experiment” is $1$; but $X$ is set to $1$ whenever $Y\geq 1$, and to $0$ only when $Y$ is exactly $0$. Put differently,

$$
\mathbb{E} X = \sum_{n=1}^{\frac{1}{p}}\mathbb{P}\{Y=n\}
$$

while

$$
1 = \mathbb{E} Y = \sum_{n=1}^{\frac{1}{p}} n \mathbb{P}\{Y=n\}
$$

There is no reason for $\mathbb{E} X$ to be $1/2$ given those two formulae — and even intuitively, it’s not because on average I have *one* success amongst $k$ trials that my probability of having *zero* successes in $k$ trials is $1/2$.
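To make the two formulae concrete, here is a small Python check for $p = 0.05$ (so $1/p = 20$ trials), where $Y$ is Binomial$(20, 0.05)$; the code is my own sketch:

```python
from math import comb

p, n = 0.05, 20                     # n = 1/p independent trials
# pmf[k] = P(Y = k), Y = number of successes (binomial distribution)
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

EY = sum(k * pk for k, pk in enumerate(pmf))  # expected number of successes
EX = sum(pmf[1:])                             # P(at least one success) = E[X]
print(EY, EX)
```

This confirms $\mathbb{E}Y = 1$ exactly, while $\mathbb{E}X \approx 0.64$: one expected success, yet far more than a $1/2$ chance of at least one.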

You appear to be making two intuitive mistakes:

- Confusing the mean (expected value) with the median (the value with half the outcomes below it and half above)
- Creating an experiment which superficially looks like a geometric distribution, but isn’t.

The mean of a geometric distribution is, as you say, $\frac{1}{p}$. The median, however, is $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil$.

As @clementc points out, the experiment you construct is not a geometric distribution. Using the c.d.f. of the geometric distribution, your random variable is:

$$Y=\begin{cases}
1 & \text{with probability } 1-(1-p)^{1/p}\\
0 & \text{with probability } (1-p)^{1/p}
\end{cases}$$

When expressed this way, it is clear that this is not a geometric distribution. Further, neither of those probabilities is equal to $\frac{1}{2}$.

Now, if you use the median number of runs rather than the mean, then both probabilities are close to $\frac{1}{2}$ (up to the effect of the ceiling function).
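A quick numeric check of this last point, for $p = 5\%$ (a sketch of my own; the variable names are not from the answer):

```python
import math

p = 0.05
median = math.ceil(-1 / math.log2(1 - p))      # median of the geometric dist.
p_by_mean = 1 - (1 - p) ** round(1 / p)        # P(at least one success) in 1/p runs
p_by_median = 1 - (1 - p) ** median            # same, using the median number of runs
print(median, p_by_mean, p_by_median)
```

Using the mean ($20$ runs) gives about $0.64$, but using the median ($14$ runs) gives about $0.51$, close to $\frac{1}{2}$ up to the ceiling effect.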
