I have a random walk process where at each step the probability of $+1$ is $p$ and of $-1$ is $q$, with $p+q=1$; $p$ need not equal $q$. The walker starts at zero. I want to know the probability that the first return to zero occurs at time $t$. Obviously the number of up-steps equals the number of down-steps, each being $t/2$, so $t$ must be even.

So far, all I have found is the first-return-time PDF for the unbiased random walk, and the fact that when $p>q$ there is a positive probability that the walker never returns to zero. It seems that for any finite $t$ it should simply be a question of counting the right number of permutations. Or perhaps it is as easy as rewriting the unbiased PDF?

Additionally, if possible, I’d like to condition on the first step being to $1$.

Thank you!

You are correct: it boils down to counting the right number of “permutations”, i.e., walks. I will leave the exact probability to you; here I only solve the combinatorial problem.

We want to count the number of walks of length $t$ that start and end at $0$, such that the walk never visits $0$ in between. For convenience, we will restrict the first step to be $+1$; you will need to double the answer we get.

We can naturally represent any walk as a *word* over the alphabet $\{+, -\}$, where the sign tells us whether the step is in the positive or negative direction.

In this notation, we are counting the number of words such that

- the length of the word is $t$,
- the starting symbol is $+$ (why?),
- the number of ‘$+$’ and ‘$-$’ are equal (why?),
- for any nonempty proper prefix of the word, the number of ‘$+$’ strictly exceeds the number of ‘$-$’ (why?).

Clearly, the last symbol must be a ‘$-$’. Let us delete the starting ‘$+$’ and ending ‘$-$’ symbol of the word, to get a subword of length $t-2$. Obviously, the original word is in one-to-one correspondence with the subword.

The final idea here is to recognize that the subwords themselves form the so-called Dyck words of length $t-2$. (A Dyck word is one with equal number of $+$ and $-$ such that every prefix of the word has at least as many ‘$+$’ as ‘$-$’.) It is a standard result that the number of Dyck words of length $t-2$ is equal to the Catalan number

$$

C := \frac{1}{(t/2)}\binom{t-2}{\frac{t}{2}-1}.

$$

Can you take it from here?
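For small $t$ this count is easy to check by brute force. The sketch below (Python; the function names are my own) enumerates every walk of length $t$ whose first step is $+1$ and whose first return to $0$ is at step $t$, and compares the count with the Catalan-number formula above.

```python
from itertools import product
from math import comb

def first_return_walks(t):
    """Brute force: count walks of length t with steps +-1 that start
    with a +1 step, end at 0, and never touch 0 before step t."""
    count = 0
    for steps in product((1, -1), repeat=t - 1):
        path = [1]  # first step fixed to +1
        for s in steps:
            path.append(path[-1] + s)
        if path[-1] == 0 and all(x != 0 for x in path[:-1]):
            count += 1
    return count

def catalan_count(t):
    """The formula above: C = (1/(t/2)) * binom(t-2, t/2 - 1)."""
    n = t // 2
    return comb(t - 2, n - 1) // n

# The two counts agree for every even t checked:
for t in (2, 4, 6, 8, 10):
    assert first_return_walks(t) == catalan_count(t)
```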

**Added.** Let $E$ be the event that the first return time is $t$. If you want to calculate the probability of $E$ conditioned on the first step being $+1$, you can use the usual definition:

$$

\frac{\Pr[ E \wedge \text{ first step is $+$} ]}{\Pr[\text{first step is $+$}]} =

\frac{C \cdot (pq)^{t/2}}{p}.

$$
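The conditional probability can likewise be checked by enumerating walks and weighting each one by its probability; a Python sketch with illustrative names:

```python
from itertools import product
from math import comb

def cond_first_return_prob(t, p):
    """The formula above: Pr[first return at t | first step +1] = C (pq)^(t/2) / p."""
    n = t // 2
    C = comb(t - 2, n - 1) // n
    return C * (p * (1 - p)) ** n / p

def cond_first_return_prob_bruteforce(t, p):
    """Sum the probabilities of all walks with first step +1 whose first
    return to 0 is exactly at step t, then divide by Pr[first step +1] = p."""
    q = 1 - p
    total = 0.0
    for steps in product((1, -1), repeat=t - 1):
        path, prob = [1], p  # first step +1 has probability p
        for s in steps:
            path.append(path[-1] + s)
            prob *= p if s == 1 else q
        if path[-1] == 0 and all(x != 0 for x in path[:-1]):
            total += prob
    return total / p

for t in (2, 4, 6, 8):
    assert abs(cond_first_return_prob(t, 0.6)
               - cond_first_return_prob_bruteforce(t, 0.6)) < 1e-12
```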

A standard approach for a probabilist here is to rely on generating functions. That is, fix $|s|<1$ and consider $u_x=\mathrm E_x(s^T)$, where the subscript $x$ means that the random walk starts from $x$ and where $T$ denotes the first return to $0$ of the random walk, that is, $T=\inf\{n\ge1\mid X_n=0\}$ and $(X_n)_n$ denotes the random walk starting from $X_0=x$ and making steps $+1$ with probability $p$ and $-1$ with probability $q=1-p$.

What you are asking amounts to computing $u_0$ and $u_1$; we shall explain how to compute $u_x$ for every integer $x$.

Taking into account the first step of the walk, one sees that $u_0=s(pu_1+qu_{-1})$, $u_1=s(pu_2+q)$ and $u_{-1}=s(p+qu_{-2})$. Furthermore, starting from $x=2$, to reach $0$ one must first hit $1$, and then, starting from $1$, one must hit $0$. The times to hit $1$ starting from $2$ and to hit $0$ starting from $1$ are independent and identically distributed, hence $u_2=u_1^2$. Likewise, $u_{-2}=u_{-1}^2$ (and in fact $u_x=u_1^x$ and $u_{-x}=u_{-1}^x$ for every positive $x$).

This shows that $u_0=s(pu_1+qu_{-1})$ where $u_1=s(pu_1^2+q)$ and $u_{-1}=s(p+qu_{-1}^2)$. Solving this for $u_1$ and $u_{-1}$ yields

$u_1=\dfrac{1\pm\sqrt{1-4pqs^2}}{2sp}$ and a similar formula for $u_{-1}$; but since $u_1$ and $u_{-1}$ must stay bounded when $s\to0$, the $\pm$ signs are in fact $-$ signs, and

$$

u_0=1-\sqrt{1-4pqs^2},\qquad

u_1=\frac{u_0}{2sp},\qquad

u_{-1}=\frac{u_0}{2sq}.

$$
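A quick numerical sanity check (Python sketch; the names are mine): the closed form for $u_1$ should satisfy the first-step equation $u_1=s(pu_1^2+q)$ for every $0<s<1$.

```python
from math import sqrt

def u1(s, p):
    """Bounded root of s*p*u^2 - u + s*q = 0,
    i.e. u_1 = (1 - sqrt(1 - 4 p q s^2)) / (2 s p)."""
    q = 1 - p
    return (1 - sqrt(1 - 4 * p * q * s ** 2)) / (2 * s * p)

# u_1 must satisfy the first-step equation u_1 = s (p u_1^2 + q):
for s in (0.1, 0.5, 0.9):
    for p in (0.3, 0.5, 0.7):
        v = u1(s, p)
        assert abs(v - s * (p * v ** 2 + (1 - p))) < 1e-12
```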

The PDF of $T$ starting from $0$, $1$ and $-1$ is fully encoded in $u_0$, $u_1$ and $u_{-1}$ respectively. For example, the probabilities to return to $0$ starting from $0$, and to hit $0$ starting from $1$ or from $-1$, are the limits of $u_0$, $u_1$ and $u_{-1}$ as $s\to1$, that is,

$$

\mathrm P_0(T<\infty)=1-\sqrt{1-4pq}=1-|1-2p|,

$$

and

$$

\mathrm P_1(T<\infty)=\frac1{2p}\mathrm P_0(T<\infty),\quad

\mathrm P_{-1}(T<\infty)=\frac1{2q}\mathrm P_0(T<\infty).

$$

Let us assume from now on that $p\ge\frac12$ (hence $p\ge q$). Then,

$$

\mathrm P_0(T<\infty)=2q,\quad \mathrm P_1(T<\infty)=q/p,\quad\mathrm P_{-1}(T<\infty)=1.

$$
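These values can be recovered numerically, since $\mathrm P_0(T<\infty)=\sum_{n\ge1}c_n(pq)^n$ where $c_n=\frac{(2n)!}{(2n-1)(n!)^2}$ are the coefficients in the expansion of the square root. A Python sketch (illustrative names; it sums the series via the term ratio $c_{n+1}/c_n=\frac{2(2n-1)}{n+1}$ to avoid huge factorials):

```python
def return_prob_series(p, terms=500):
    """Sum_{n>=1} c_n (p q)^n, where c_n = (2n)!/((2n-1)(n!)^2),
    computed iteratively via the ratio c_{n+1}/c_n = 2(2n-1)/(n+1)."""
    q = 1 - p
    term = 2 * p * q  # first term c_1 (pq)^1, with c_1 = 2
    total = 0.0
    for n in range(1, terms + 1):
        total += term
        term *= 2 * (2 * n - 1) / (n + 1) * p * q
    return total

# For p >= 1/2 the closed form above gives Pr_0(T < infinity) = 2q:
assert abs(return_prob_series(0.6) - 0.8) < 1e-9
assert abs(return_prob_series(0.75) - 0.5) < 1e-9
```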

Likewise, getting the full PDF only requires the expansion in powers of $z$ of the square root involved, namely,

$$

\sqrt{1-4z}=1-\sum\limits_{n=1}^{+\infty}c_nz^n,\quad c_n=\frac{(2n)!}{(2n-1)(n!)^2}.

$$

One gets for example

$$

u_1=\frac1{2ps}\sum\limits_{n=1}^{+\infty}c_n(pq)^ns^{2n},

$$

hence, for every $n\ge1$,

$$

\mathrm P_1(T=2n-1)=\frac1{2p}c_n(pq)^n.

$$

Likewise, for every $n\ge1$,

$$

\mathrm P_{-1}(T=2n-1)=\frac1{2q}c_n(pq)^n,

$$

and finally,

$$

\mathrm P_0(T=2n)=c_n(pq)^n.

$$
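As a consistency check between the two answers, $c_n$ equals twice the Catalan number appearing in the combinatorial argument, so $\mathrm P_0(T=2n)=c_n(pq)^n$ matches the count of walks (doubled to allow a first step of $-1$). A Python sketch, with names of my own choosing:

```python
from math import comb, factorial

def c(n):
    """c_n = (2n)! / ((2n-1) (n!)^2), from the square-root expansion."""
    return factorial(2 * n) // ((2 * n - 1) * factorial(n) ** 2)

def p0_first_return(t, p):
    """Generating-function answer: Pr_0(T = t) = c_{t/2} (pq)^{t/2}, t even."""
    n = t // 2
    return c(n) * (p * (1 - p)) ** n

def p0_first_return_catalan(t, p):
    """Combinatorial answer: 2 * C * (pq)^{t/2}, where
    C = (1/(t/2)) * binom(t-2, t/2 - 1) counts the walks with first step +1."""
    n = t // 2
    return 2 * (comb(t - 2, n - 1) // n) * (p * (1 - p)) ** n

# The two answers agree, since c_n = 2 * Catalan(n-1):
for t in (2, 4, 6, 8, 10):
    assert p0_first_return(t, 0.7) == p0_first_return_catalan(t, 0.7)
```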
