I have been looking for some good material covering Markov Chains but everything seems so difficult to me…

After reading about the subject, I gathered that there are basically three kinds of processes: discrete-time, continuous-time, and Markov decision processes.

I started reading *Introduction to Probability Models* (Tenth Edition) by Sheldon M. Ross, covering discrete-time processes, and then, considering myself introduced to the subject, tried to read about continuous-time processes and got confused.

Could someone give me an explanation of the subject (like the other Markov chains topic, "What is a Markov chain?"), focusing on the differences and at least one simple application?

I would also be glad if you could suggest books that don’t use very complicated language to explain the subject. The book I cited above seems good (although I have lots of doubts about the formulas/proofs and the math notation), but many people say the book doesn’t explain the subject properly.

Thank you very much!

A *Markov chain* is a discrete-valued Markov process. *Discrete-valued* means that the state space of possible values of the Markov chain is finite or countable. A *Markov process* is basically a stochastic process in which the past history of the process is irrelevant if you know the current system state. In other words, all information about the past and present that would be useful in saying something about the future is contained in the present state.
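The Markov property makes multi-step predictions mechanical: since the future depends only on the present distribution over states, evolving a discrete-valued chain is just repeated multiplication by a transition matrix. Here is a minimal sketch with a made-up two-state "weather" chain (the states and probabilities are illustrative, not from the answer above):

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy.
# P[i][j] = probability of moving from state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The Markov property means the distribution after n steps depends only on
# the current distribution, so an n-step forecast is n one-step updates.
dist = np.array([1.0, 0.0])   # start sunny with certainty
for _ in range(3):
    dist = dist @ P           # each update uses only the present state

print(dist)  # distribution over {sunny, rainy} after 3 steps
```

Note that nothing about the path taken enters the update; that is exactly the "past history is irrelevant" statement above.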

A *discrete-time Markov chain* is one in which the system evolves through discrete time steps. So changes to the system can only happen at one of those discrete time values. An example is a board game like Chutes and Ladders (apparently called “Snakes and Ladders” outside the U.S.) in which pieces move around on the board according to a die roll. If you are looking at the board at the beginning of someone’s turn and wondering what the board will look like at the beginning of the next person’s turn, it doesn’t matter how the pieces arrived at their current positions (the past history of the system). All that matters is that the pieces are where they currently are (the current system state) and the upcoming die roll (the probabilistic aspect). This is discrete because changes to the system state can only happen on someone’s turn.
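The board-game example can be sketched in a few lines: the next position is a function only of the current position and a die roll. The board layout below (squares, chutes, and ladders) is invented for illustration, not the real Chutes and Ladders board:

```python
import random

random.seed(0)

# Toy Chutes-and-Ladders-style board: landing square -> destination.
JUMPS = {3: 11, 6: 2, 9: 18}   # made-up ladders (3->11, 9->18) and chute (6->2)
GOAL = 20

def next_state(pos):
    """One turn: the next position depends only on the current one and a die roll."""
    pos = min(pos + random.randint(1, 6), GOAL)  # roll, capped at the final square
    return JUMPS.get(pos, pos)                   # apply a chute/ladder if landed on one

pos, turns = 0, 0
while pos < GOAL:
    pos = next_state(pos)
    turns += 1
print(turns)  # number of turns this playthrough took
```

The system state changes only once per turn, which is what makes this a *discrete-time* chain.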

A *continuous-time Markov chain* is one in which changes to the system can happen at any time along a continuous interval. An example is the number of cars that have visited a drive-through at a local fast-food restaurant during the day. A car can arrive at any time $t$ rather than at discrete time intervals. Since arrivals are basically independent, if you know the number of cars that have gone through by 10:00 a.m., what happened before 10:00 a.m. doesn’t give you any additional information that would be useful in predicting the number of cars that will have visited the drive-through by, say, noon. (This is under the usual but reasonable assumption that the arrivals to the drive-through follow a Poisson process.)
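Under the Poisson-process assumption mentioned above, inter-arrival times are exponentially distributed, so the drive-through example can be simulated by summing exponential gaps. The arrival rate and time window below are made-up figures:

```python
import random

random.seed(42)

# Sketch: Poisson arrivals to a drive-through at an assumed rate of
# 0.5 cars per minute, over a 120-minute window.
RATE = 0.5       # cars per minute (illustrative)
HORIZON = 120.0  # minutes

t = 0.0
arrivals = []
while True:
    t += random.expovariate(RATE)  # exponential inter-arrival time
    if t > HORIZON:
        break
    arrivals.append(t)

print(len(arrivals))  # cars in the window; the expected count is RATE * HORIZON = 60
```

Because the exponential distribution is memoryless, the count after 10:00 a.m. depends only on the count at 10:00 a.m., matching the continuous-time Markov property.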

A *Markov decision process* is just a Markov chain that includes an agent that makes decisions that affect the evolution of the system over time. (So I don’t think of it as a separate kind of Markov chain, since the usual Markov chain definition doesn’t include such an agent.) A (continuous-time) example would be the potato chip inventory for a local grocery store. If you know the inventory level at 10:00 a.m. and are trying to predict the inventory level at noon, the inventory levels before 10:00 a.m. don’t tell you anything beyond what you already know about the level at 10:00 a.m. The decision aspect arises because the manager can decide when to place orders so that bags arrive at certain times. Thus the inventory level at any time $t$ depends not just on (the probabilistic aspect of) customers arriving randomly and taking bags off the shelf but also on the manager’s (deterministic) decisions. (An example of a discrete-time Markov decision process is the board game Parcheesi. The board position at the beginning of the next player’s turn depends only on the current board position, the current player’s dice roll (the Markov chain aspect) and the current player’s decision as to which pieces to move based on the dice roll (the decision process aspect).)
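The inventory example can be sketched as a chain plus a decision rule: random demand drives the state, and the manager's (deterministic) reorder policy also acts on it. The policy and all numbers below are illustrative assumptions, and restocking is treated as instantaneous for simplicity:

```python
import random

random.seed(1)

# Toy potato-chip inventory with a reorder-up-to policy (numbers invented).
REORDER_POINT = 5   # the manager orders when stock falls to this level or below...
ORDER_UP_TO = 20    # ...and restocks up to this level (instantly, in this toy model)

def step(stock):
    """One day: random customer demand (the chain part), then the manager's decision."""
    demand = random.randint(0, 4)      # customers take 0-4 bags off the shelf
    stock = max(stock - demand, 0)
    if stock <= REORDER_POINT:         # the decision depends only on the current state
        stock = ORDER_UP_TO
    return stock

stock = ORDER_UP_TO
history = [stock]
for _ in range(30):
    stock = step(stock)
    history.append(stock)

print(min(history))  # under this toy policy the shelf never stays empty
```

The state evolution is still Markov: both the random demand and the decision rule look only at the current inventory level, never at the history.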

A Markov chain is a special type of stochastic process in which the outcome of an experiment depends only on the outcome of the previous experiment. Markov chains turn up in the natural and social sciences, e.g., a random walker, or the number of each species in an ecosystem in a given year.

There are some excellent sources on the web. IMO one of the best is the notes of David Anderson at the University of Wisconsin, although they may be more advanced than what you are looking for.

I’ll give you my heuristic view of continuous versus discrete Markov processes.

I assume from your post that you are familiar with the discrete case. To go to the continuous case, you superimpose a mechanism that dictates when the process changes. Think of each state as having its own alarm clock that goes off after an exponentially distributed amount of time. This distribution is necessary to preserve the Markov property. When the alarm goes off, the state must change. The state to which it changes is governed by a discrete Markov transition matrix with $P(i,i) = 0$ (so that the process must change states). Hope this helps.
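The alarm-clock picture above translates directly into a simulation: hold in each state for an exponential time, then jump according to a transition matrix with a zero diagonal. The rates and jump probabilities below are invented for illustration:

```python
import random

random.seed(7)

# "Alarm clock" view of a continuous-time Markov chain: state i is held for an
# Exponential(RATES[i]) time, then the next state is drawn from row i of P,
# where P[i][i] = 0 so the process must change state on each jump.
RATES = [1.0, 2.0, 0.5]       # illustrative exit rates per state
P = [[0.0, 0.7, 0.3],
     [0.4, 0.0, 0.6],
     [0.5, 0.5, 0.0]]

def simulate(state, horizon):
    """Return the (time, state) jump sequence on [0, horizon]."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += random.expovariate(RATES[state])            # the state's alarm clock rings
        if t > horizon:
            return path
        state = random.choices(range(3), weights=P[state])[0]  # jump; diagonal weight is 0
        path.append((t, state))

path = simulate(0, 10.0)
print(len(path))  # number of (time, state) records observed on [0, 10]
```

Between alarms the state is constant, and the exponential holding time is what keeps the process memoryless at every instant, not just at jump times.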
