Articles on Markov processes

Limiting probability of Markov chain

So the problem is: Let $X$ be a Markov chain with state space $E = \{a, b\}$ and transition matrix $$p=\begin{bmatrix}0.4 &0.6 \\1 & 0 \end{bmatrix}$$ and suppose that a reward of $g(i,j)$ units is received for every jump from $i$ to $j$, where $$g=\begin{bmatrix}3 &2 \\-1 & 1 \end{bmatrix}$$ Find $$\lim_{n\to \infty}\frac 1{n+1}\sum_{m=0}^{n}g(X_m, X_{m+1})$$ The […]
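
By the ergodic theorem for irreducible finite chains, the limit equals $\sum_{i,j}\pi_i\,p_{ij}\,g(i,j)$, where $\pi$ is the stationary distribution. A minimal Python sketch (my addition, not part of the question) checks this value against a simulation:

```python
import numpy as np

# Transition and reward matrices from the question (states a=0, b=1).
P = np.array([[0.4, 0.6],
              [1.0, 0.0]])
g = np.array([[3.0, 2.0],
              [-1.0, 1.0]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Ergodic-theorem value: sum_{i,j} pi_i * P_ij * g(i,j).
print((pi[:, None] * P * g).sum())  # 1.125

# Monte Carlo check of the Cesàro average of g(X_m, X_{m+1}).
rng = np.random.default_rng(0)
x, total, n = 0, 0.0, 100_000
for _ in range(n):
    y = rng.choice(2, p=P[x])
    total += g[x, y]
    x = y
print(total / n)  # approximately 1.125
```

Here $\pi = (0.625, 0.375)$, so the limit is $0.625\,(0.4\cdot 3 + 0.6\cdot 2) + 0.375\,(1\cdot(-1)) = 1.125$.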

Markov Chain and Forward and Backward Probabilities with Alice and Bob

System: Alice and Bob move independently from one city to another. There are $d$ cities, the probability of moving to another city (for each individual) is $m$, and each move is equiprobable (there is no preferred city). Alice's choice of whether to move, and of where to move, is independent of the […]
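
The excerpt is truncated, but the single-person dynamics it describes correspond to a doubly stochastic transition matrix. A small sketch of that setup (assumed, not taken from the question's answers):

```python
import numpy as np

# One person's city chain: stay put w.p. 1-m, otherwise move to one of the
# d-1 other cities uniformly at random. Values of d and m are illustrative.
d, m = 5, 0.3
P = (1 - m) * np.eye(d) + (m / (d - 1)) * (np.ones((d, d)) - np.eye(d))

assert np.allclose(P.sum(axis=1), 1)  # rows sum to 1: stochastic
assert np.allclose(P, P.T)            # symmetric, hence doubly stochastic
# The uniform distribution is therefore stationary:
print(np.allclose(np.full(d, 1 / d) @ P, np.full(d, 1 / d)))  # True
```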

Transition function for absorbed Brownian motion

I need some help with the following exercise. I’ve already seen the question “Prove that Brownian Motion absorbed at the origin is Markov”, but I don’t understand the answer. I would also like to prove it in a different way (at least the two approaches seem different to me). So let $\tau$ be the first time the […]
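
For reference, here is the transition function one expects from the reflection principle (my sketch, not the linked answer's): starting from $x > 0$, the absorbed motion $B_t^\tau$ has the sub-probability density $$P^x(B_t^\tau \in dy) = \frac{1}{\sqrt{2\pi t}}\left(e^{-(y-x)^2/2t} - e^{-(y+x)^2/2t}\right)dy, \qquad y > 0,$$ with the remaining mass $P^x(\tau \le t)$ sitting in an atom at $0$; proving the Markov property then amounts to checking the Chapman–Kolmogorov equation for this kernel.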

“To every Q-matrix corresponds a unique Markov process.” Proving uniqueness

“To every Q-matrix corresponds a unique Markov process.” I’m trying to understand Klenke’s proof of the uniqueness part of this proposition. Klenke’s proof: the following is an adapted version of it; the original can be found here (“Now assume that…”). Let $(p_t)_{t\geq0}, (r_t)_{t\geq0}$ be Markov semigroups and let $q$ be a bounded […]
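
For intuition (my sketch of the bounded case, not Klenke's argument verbatim): both semigroups solve the same Kolmogorov forward equation $$\frac{d}{dt}\,p_t = p_t\,q, \qquad p_0 = \mathrm{id},$$ and for bounded $q$ this linear ODE in the Banach algebra of bounded operators has the unique solution $p_t = e^{tq} = \sum_{n\ge 0} t^n q^n/n!$, so $p_t = r_t$ for all $t \ge 0$.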

Example of a Continuous-Time Markov Process which does NOT have Independent Increments

1. Given a discrete-time Markov chain without independent increments, is its embedding into a continuous-time Markov chain (i.e. via the use of exponential waiting times) an example of a continuous-time Markov process without independent increments?
2. Does there exist a continuous-time Markov process with continuous sample paths which does not have […]
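
For question 1, one standard example (my assumption, not drawn from the answers): take the rate-$1$ flip chain on $\{0,1\}$. Its increments satisfy $$P(X_{t+s} - X_t = 1 \mid X_t = 1) = 0 \qquad\text{while}\qquad P(X_{t+s} - X_t = 1 \mid X_t = 0) > 0,$$ so the increment $X_{t+s} - X_t$ is not independent of $\sigma(X_u,\, u \le t)$, even though the process is Markov.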

Find the Stationary Distribution of an infinite state Markov chain

A Markov chain on the states $0, 1, \dots$ has transition probabilities $P_{ij} = 1/(i+2)$ for $j = 0, 1, \dots, i, i+1$. I’m supposed to find the stationary distribution. So do I take the limit as $n$ goes to infinity of $1/n$ multiplied by $P_{ij}$?
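
Rather than a limit, the stationary distribution solves the balance equations $\pi_j = \sum_{i \ge j-1} \pi_i/(i+2)$. A sketch with the ansatz $\pi_j = e^{-1}/j!$ (Poisson with mean $1$): $$\sum_{i \ge j-1} \frac{e^{-1}}{i!\,(i+2)} = e^{-1}\sum_{i \ge j-1} \frac{i+1}{(i+2)!} = e^{-1}\sum_{i \ge j-1}\left(\frac{1}{(i+1)!} - \frac{1}{(i+2)!}\right) = \frac{e^{-1}}{j!} = \pi_j,$$ so the telescoping sum confirms the guess.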

A proof by René Schilling that a continuous Lévy process is integrable

In his treatise “An Introduction to Lévy and Feller Processes” (arXiv link), Prof. Dr. René Schilling gives a short and seemingly straightforward proof for the claim that a continuous Lévy process is integrable (Lemma 8.2 on p. 50). More precisely, the claim goes as follows. Lemma: Let $(X_t)_{t\geq0}$ be a Lévy process with càdlàg paths […]

If $A$ is the generator of $(P_t)$, then $A+f$ is the generator of $(P_t^f)$

Let $X=(X_t)_{t\geq0}$ be a Markov process on a state space $\Gamma$ (a Hausdorff topological vector space), let $A$ be the infinitesimal generator of $X$, and let $\mathcal C(\Gamma)$ be the space of continuous functions on $\Gamma$. Then for each continuous function $f \in \mathcal C(\Gamma)$ the operator $A+f$ is the infinitesimal generator of the semigroup $(P_t^f)_{t\geq0}$ […]
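
The excerpt is cut off before $(P_t^f)$ is defined; assuming it is the usual Feynman–Kac semigroup $P_t^f g(x) = \mathbb E_x\!\left[e^{\int_0^t f(X_s)\,ds}\, g(X_t)\right]$, the formal generator computation is $$\frac{d}{dt}\Big|_{t=0} P_t^f g(x) = \frac{d}{dt}\Big|_{t=0} \mathbb E_x\!\left[\Big(1 + \int_0^t f(X_s)\,ds + o(t)\Big)\, g(X_t)\right] = Ag(x) + f(x)\,g(x),$$ i.e. the generator of $(P_t^f)$ is $A + f$, with $f$ acting as a multiplication operator.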

Is there a solution to this system for the diagonal matrix?

I’m trying to find a solution to a system of equations, but it’s quite different from anything I’ve come across before. I believe there is a solution, but I could be wrong. $\mathbf{A} = \mathrm{diag}(\mathbf{a})$ is an $n \times n$ diagonal matrix whose diagonal entries $\mathbf{a}_i \in [0,1]$ are the unknown quantities in my system. […]

Markovian Gaussian stationary process with continuous paths

Could you please help me figure out the following problem? We call a stationary Gaussian process $\xi_t$ (with continuous paths) an Ornstein–Uhlenbeck process if its correlation function $\mathbb{E}\xi_{s+t}\bar \xi_s$ is $$ \mathbb{E}\xi_{s+t}\bar \xi_s=R(t)=R(0)e^{-\alpha|t|}, \qquad \alpha\ge 0. $$ The question is to show that the Ornstein–Uhlenbeck process is the unique Markovian stationary Gaussian process. By Markovian I mean that $$ […]
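
A sketch of the standard route (Doob's theorem; my guess at the intended argument): for a centered stationary Gaussian process, conditional expectations are linear, so the Markov property gives $\mathbb E[\xi_{t+s} \mid \xi_u,\, u \le s] = \mathbb E[\xi_{t+s} \mid \xi_s] = \rho(t)\,\xi_s$ with $\rho(t) = R(t)/R(0)$. Multiplying by $\bar\xi_0$ and taking expectations yields the semigroup identity $$\rho(t+s) = \rho(t)\,\rho(s), \qquad s, t \ge 0,$$ and the only continuous real solutions with $|\rho| \le 1$ are $\rho(t) = e^{-\alpha t}$, $\alpha \ge 0$, which is exactly the Ornstein–Uhlenbeck correlation.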