What is the difference between all types of Markov Chains?

I have been looking for some good material covering Markov Chains but everything seems so difficult to me…
After reading about the subject, I figured out that there are basically three kinds of processes: discrete-time, continuous-time, and Markov decision processes.
I started reading about discrete-time processes in Introduction to Probability Models, Tenth Edition, by Sheldon M. Ross. Then, judging myself introduced to the subject, I tried to read about continuous-time processes and got confused.
Could someone give me an explanation of the subject (like the other Markov chains topic: What is a Markov Chain?), focusing on the differences and at least one simple application of each?

I would also be glad if you could suggest books that don't use very complicated language to explain the subject. The book I cited above seems good (although I have lots of doubts about the formulas/proofs and the math notation), but many people say it doesn't explain the subject properly.

Thank you very much!


A Markov chain is a discrete-valued Markov process. Discrete-valued means that the state space of possible values of the Markov chain is finite or countable. A Markov process is basically a stochastic process in which the past history of the process is irrelevant if you know the current system state. In other words, all information about the past and present that would be useful in saying something about the future is contained in the present state.

A discrete-time Markov chain is one in which the system evolves through discrete time steps. So changes to the system can only happen at one of those discrete time values. An example is a board game like Chutes and Ladders (apparently called “Snakes and Ladders” outside the U.S.) in which pieces move around on the board according to a die roll. If you are looking at the board at the beginning of someone’s turn and wondering what the board will look like at the beginning of the next person’s turn, it doesn’t matter how the pieces arrived at their current positions (the past history of the system). All that matters is that the pieces are where they currently are (the current system state) and the upcoming die roll (the probabilistic aspect). This is discrete because changes to the system state can only happen on someone’s turn.
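The board-game example above can be sketched in code. The three-state chain and its transition probabilities below are made-up illustrative numbers (not Chutes and Ladders); the point is that each step depends only on the current state:

```python
import random

# A small, made-up discrete-time Markov chain: each state maps to a list
# of (next_state, probability) pairs -- one row of the transition matrix.
P = {
    "A": [("A", 0.5), ("B", 0.3), ("C", 0.2)],
    "B": [("A", 0.1), ("B", 0.6), ("C", 0.3)],
    "C": [("A", 0.4), ("B", 0.4), ("C", 0.2)],
}

def step(state, rng):
    """One transition: the next state depends only on the current one."""
    targets, probs = zip(*P[state])
    return rng.choices(targets, weights=probs, k=1)[0]

def simulate(start, n_steps, seed=0):
    """Run the chain for n_steps discrete time steps."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

print(simulate("A", 10))
```

Note that `step` never looks at the path so far, only at the current state; that is the Markov property in code.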

A continuous-time Markov chain is one in which changes to the system can happen at any time along a continuous interval. An example is the number of cars that have visited a drive-through at a local fast-food restaurant during the day. A car can arrive at any time $t$ rather than at discrete time intervals. Since arrivals are basically independent, if you know the number of cars that have gone through by 10:00 a.m., what happened before 10:00 a.m. doesn’t give you any additional information that would be useful in predicting the number of cars that will have visited the drive-through by, say, noon. (This is under the usual but reasonable assumption that the arrivals to the drive-through follow a Poisson process.)
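Under the Poisson-process assumption mentioned above, arrivals can happen at any real time $t$, with independent, exponentially distributed gaps between cars. A minimal sketch (the arrival rate of 20 cars per hour is an assumed number for illustration):

```python
import random

def poisson_arrivals(rate, horizon, seed=0):
    """Arrival times of a Poisson process on (0, horizon]:
    inter-arrival gaps are exponential with mean 1/rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # memoryless gap to the next arrival
        if t > horizon:
            return times
        times.append(t)

# Hypothetical drive-through: 20 cars/hour over a two-hour window.
arrivals = poisson_arrivals(rate=20, horizon=2.0)
print(len(arrivals), "cars in the first two hours")
```

Because the exponential distribution is memoryless, the count of arrivals so far is all you need to predict future counts, which is exactly the continuous-time Markov property described above.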

A Markov decision process is just a Markov chain that includes an agent that makes decisions that affect the evolution of the system over time. (So I don’t think of it as a separate kind of Markov chain, since the usual Markov chain definition doesn’t include such an agent.) A (continuous-time) example would be the potato chip inventory for a local grocery store. If you know the inventory level at 10:00 a.m. and are trying to predict the inventory level at noon, the inventory levels before 10:00 a.m. don’t tell you anything beyond what you already know about the level at 10:00 a.m. The decision aspect arises because the manager can decide when to place orders so that bags arrive at certain times. Thus the inventory level at any time $t$ depends not just on (the probabilistic aspect of) customers arriving randomly and taking bags off the shelf but also on the manager’s (deterministic) decisions. (An example of a discrete-time Markov decision process is the board game Parcheesi. The board position at the beginning of the next player’s turn depends only on the current board position, the current player’s dice roll (the Markov chain aspect) and the current player’s decision as to which pieces to move based on the dice roll (the decision process aspect).)
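The inventory example can be sketched as a simplified discrete-time decision process. Everything here is an assumption for illustration: daily time steps instead of continuous time, uniform demand, and a fixed (s, S) reorder policy standing in for the manager's decisions:

```python
import random

def simulate_inventory(days, s=5, S=20, seed=0):
    """Sketch of an inventory process under a fixed (s, S) policy:
    whenever stock falls below s, the manager reorders up to S.
    Demand distribution and thresholds are illustrative only."""
    rng = random.Random(seed)
    stock, history = S, []
    for _ in range(days):
        demand = rng.randint(0, 6)      # random customers taking bags (stochastic part)
        stock = max(stock - demand, 0)  # state transition
        if stock < s:                   # the manager's (deterministic) decision rule
            stock = S
        history.append(stock)
    return history

print(simulate_inventory(7))
```

The transition still depends only on the current stock level, but the decision rule shapes which transitions occur; that combination is what makes it a decision process rather than a plain chain.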

A Markov chain is a special type of stochastic process in which the outcome of an experiment depends only on the outcome of the previous experiment. Markov chains appear in the natural and social sciences, e.g., a random walker, or the number of individuals of each species in an ecosystem in a given year.
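The random walker mentioned above is perhaps the simplest Markov chain of all. A minimal sketch of a symmetric walk on the integers (the step distribution is an assumed example):

```python
import random

def random_walk(n_steps, seed=0):
    """Symmetric random walk on the integers: each step is +1 or -1
    with equal probability, and depends only on the current position."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

print(random_walk(10))
```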

There are some excellent sources on the web. IMO one of the best is the notes of David Anderson at the University of Wisconsin, although they may be more advanced than what you are looking for.

I’ll give you my heuristic view of continuous versus discrete Markov processes.
I assume from your post that you are familiar with the discrete case. To go to the continuous case, you superimpose a mechanism that dictates when the process changes. Think of each state as having its own alarm clock that goes off after an exponentially distributed amount of time. The exponential distribution is necessary to preserve the Markov property, since it is memoryless. When the alarm goes off, the state must change. The state to which it changes is governed by a discrete Markov transition matrix with $P(i,i) = 0$ (so that the process must change states). Hope this helps.
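The alarm-clock construction above can be sketched directly: hold in the current state for an exponential time, then jump according to an embedded discrete chain with $P(i,i)=0$. The two states and their rates below are made-up illustrative numbers:

```python
import random

# Per-state alarm-clock rates and the embedded jump chain (P(i,i) = 0).
# The states and rates are made-up numbers for illustration.
RATES = {"on": 1.0, "off": 0.5}
JUMP = {"on": [("off", 1.0)], "off": [("on", 1.0)]}

def simulate_ctmc(start, horizon, seed=0):
    """Continuous-time Markov chain: wait an Exp(rate) holding time
    in the current state, then jump via the embedded discrete chain."""
    rng = random.Random(seed)
    t, state, trajectory = 0.0, start, [(0.0, start)]
    while True:
        t += rng.expovariate(RATES[state])  # the exponential alarm clock
        if t > horizon:
            return trajectory
        targets, probs = zip(*JUMP[state])
        state = rng.choices(targets, weights=probs, k=1)[0]
        trajectory.append((t, state))

print(simulate_ctmc("on", 5.0))
```

With only two states the embedded chain is deterministic (each alarm forces a switch), but the same structure works for any number of states and any jump matrix with zero diagonal.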