# So you're given a Transition Matrix to a Stationary Discrete Time Markov Chain, you're stoked!
### What does this mean, and how do you know that you have a Stationary Discrete Time Markov Chain on your hands in the first place? (we'll refer to it as an SDTMC from now on) Well, there are three conditions needed to have an SDTMC, the first being that your matrix is stationary. This means the elements of your matrix do not change whether you are on step 1, step 2, step 3, etc.
##### A great example of "stationary probabilities" would be a game like Chutes and Ladders: when you get sent back to a space you were already on previously, the probabilities for which space you will hit next are the same as the last time you were at that space (and if you get sent back there again and again, they are still the same).
###### [image source](http://www.datagenetics.com/blog/november12011/)
### But maybe now you're thinking: what are steps? That is where **discrete time** comes in; the second condition that has to be met. For a chain to have discrete time steps, each step of your chain needs to be its own event: something that either happens or doesn't happen. No in between.
##### The number of dice you roll? Those are discrete time steps. The number of turns in a game of Monopoly? Those are discrete time steps too. You can't have half a turn in Monopoly: you either take your turn or you don't, and likewise with rolling dice. Examples of non-discrete-time steps (i.e. continuous-time steps) would be measurements, intervals, or approximations. With continuous-time steps, things might not wholly have happened. For instance, say 0.654321... inches of rain fell yesterday and 0.365432... inches fell today, and each step of our process is 1 full inch of rainfall. That means that at some point in time between yesterday and today we transitioned from one step to the next, but at no point in time did 1 inch of rain just happen; our steps were continuous.
### Finally, the last condition needed to satisfy an SDTMC is the Markov Property: knowing the most recent state that you've occupied is all you need in order to calculate the probability of where you'll end up next.
##### Say you're keeping track of the sum of your dice rolls. If your last roll brought the sum total of your rolls to 10, then knowing what you rolled prior to the last roll won't change the probability of each of your potential sums after your next roll.
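A quick sketch of that idea in R (this example is my own illustration, not part of the original scratch work; the function name `next_sum_probs` is hypothetical):

```r
# Markov property sketch: from a current dice-roll sum of 10, the next sum
# is 10 + one die roll, so its distribution is uniform over 11..16
# regardless of which sequence of rolls got us to 10.
next_sum_probs <- function(current) {
  setNames(rep(1 / 6, 6), current + 1:6)
}
next_sum_probs(10)  # each of the sums 11..16 has probability 1/6
```

Whether we reached 10 via (4, 6) or (1, 3, 6), the output is identical: only the current state matters.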
# So now that we know what an SDTMC is what can we do with it?
### We can do lots of things with an SDTMC: we can figure out the probability that you end up in a given state having started from another state.
###### sidenote: I really should have learned LaTeX by now; uploading pictures of my scratch work will have to suffice for now though!
## Take a look at the following Jump Diagram
## We can represent the probabilities of the transitions between states in the form of a Matrix like this
## If we are currently in state 1 and want to know the probability of where we will end up next we simply look at the elements in row 1.
## In this case, let's say we are looking for the probability that our next step leads us to state 2. This is simply the element at the intersection of the __1st__ row (our current state) and the __2nd__ column (where we want to go next).
### As you can see, the probability of going from state 1 to state 2 is 1/6. That wasn't too hard, but when we want to find the probability that after 2 steps you will end up in state 2, given that you are in state 1 right now, things become a little trickier.
### To calculate this 2-step transition probability we need to square our transition matrix
##### after some calculations (*cough* googling a matrix calculator *cough*) we end up with this transition probability matrix
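If you'd rather skip the matrix calculator, the squaring is a one-liner in base R. Here's a sketch using the transition matrix from our example (the name `P` is mine):

```r
# 2-step transition probabilities: square the transition matrix.
P <- matrix(c(1/6, 1/6, 2/3,
              1/6, 1/3, 1/2,
              2/3, 1/3, 0),
            nrow = 3, byrow = TRUE)  # byrow = TRUE: each line above is one row

P2 <- P %*% P  # %*% is R's matrix multiplication operator
P2[1, 2]       # probability of going from state 1 to state 2 in exactly 2 steps
```

The `[1, 2]` entry picks out the same row/column intersection as before, just in the squared matrix.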
### Now that we've got our matrix, it is once again the same case of finding the intersection of the row of our current state with the column of the state we want to go to next
### Voila, our probability of being at state 1 and ending up at state 2 two moves from now is 11/36, or about 0.306
## But what if we want to know the 10-step transition probability or the 100-step transition probability?
## Or what if our jump diagram was more complicated like, say this, making even a two-step transition matrix hard to find?
### This wouldn't be possible to do by hand (in a reasonable amount of time).
# So that's where R comes in handy!
##### [If you don't have R and RStudio installed then you should get on that first](https://www.r-project.org/)
#### Now that you've opened RStudio, all you have to do is type in your matrix (we'll keep using our first example for now):
```r
# byrow = TRUE fills the matrix row by row, so each line below is one
# row of the jump diagram (R's default would fill column by column)
matrix_name = matrix(c(1/6, 1/6, 2/3,
                       1/6, 1/3, 1/2,
                       2/3, 1/3, 0),
                     nrow = 3, ncol = 3, byrow = TRUE)
```
Now a simple `matrix_name` + 'Enter' should print out your matrix like this.
Next we use a package called `expm` (install it first with `install.packages("expm")` if you don't have it) by running `library(expm)` and hitting 'Enter'.
Now if we want the transition probability matrix for any number of steps `i`, all we have to do is type in the following:
```r
matrix_name %^% i
```
For example here is the 10-step transition probability matrix for our example:
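If you'd rather not install anything, repeated multiplication in base R gives the same answer. A sketch (the helper name `mat_pow` is my own, not a built-in):

```r
# Base-R stand-in for expm's %^%: multiply the matrix by itself n times.
mat_pow <- function(P, n) Reduce(`%*%`, replicate(n, P, simplify = FALSE))

P <- matrix(c(1/6, 1/6, 2/3,
              1/6, 1/3, 1/2,
              2/3, 1/3, 0),
            nrow = 3, byrow = TRUE)

round(mat_pow(P, 10), 4)  # the 10-step transition probability matrix
```

This is fine for a quick check; for very large `n`, `expm`'s `%^%` is the more efficient route.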
## But wait there's more!
#### You can also use R to easily find the invariant distribution of an SDTMC
##### The invariant distribution tells us what fraction of the time, on average, each state is occupied over the long run of our process (i.e. if we played the game forever, we could find what percentage of the time the process spent in each state). If we want to find the invariant distribution (or even check whether it exists in the first place), we can compute the transition probability matrix for a very large number of steps (say 100), then add 1 to the number of steps and see if we get the exact same result. Here's our example
##### Since our example wasn't too complicated, this worked out on the first try, but if your results aren't the same then multiply the number of steps by 10 and repeat the process. Eventually you will start getting the exact same transition probability matrix after incrementing by 1 (and if this never happens, the invariant distribution most likely doesn't exist for your process!)
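You can also get the invariant distribution without raising the matrix to a huge power: π is the left eigenvector of the transition matrix for eigenvalue 1, rescaled to sum to 1. A sketch in base R (the variable names are mine):

```r
# Solve pi %*% P = pi directly: pi is an eigenvector of t(P) for eigenvalue 1.
P <- matrix(c(1/6, 1/6, 2/3,
              1/6, 1/3, 1/2,
              2/3, 1/3, 0),
            nrow = 3, byrow = TRUE)

ev <- eigen(t(P))
k <- which.min(Mod(ev$values - 1))  # pick the eigenvalue closest to 1
pi_vec <- Re(ev$vectors[, k])
pi_vec <- pi_vec / sum(pi_vec)      # rescale so the probabilities sum to 1
round(pi_vec, 3)                    # the same long-run fractions as above
```

Both routes agree because, when the invariant distribution exists, every row of the large-power matrix converges to π.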
### So we know that our process spends 35.3% of the time in state 1, 27.5% of the time in state 2, and 37.3% of the time in state 3 over the long run, but what else can it tell us?
##### The invariant distribution is awesome because it can also answer questions like "What is the average number of steps it takes for a state to be hit?" or "What is the probability that the process will land on a certain state after a long period of time has passed, without knowing any of its previous steps?" For the first question: the average number of steps it takes for a process to leave a given state 'i' and then return to state 'i' is 1/π(i), where π(i) is the invariant distribution probability of state 'i'. In our example this means the average number of steps between hits of state 1 is about 2.83, the average between hits of state 2 is about 3.64, and the average between hits of state 3 is about 2.68.
To answer the second question: despite not knowing any of the prior states we've entered, the probability that we will hit a given state in the long run is simply the invariant distribution π for that state. Essentially this means that the probability that we land in state 1, state 2, or state 3 after an infinite (or just very large) number of steps is unchanged whether we started in state 1, state 2, or state 3. Pretty cool!
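To make the return-time answer concrete, here's the 1/π calculation for our example (the invariant distribution works out exactly to 18/51, 14/51, 19/51, matching the percentages above):

```r
# Mean return times: on average it takes 1/pi_i steps to come back to state i.
pi_vec <- c(18, 14, 19) / 51   # invariant distribution of our example chain
round(1 / pi_vec, 2)           # average steps between hits of states 1, 2, 3
```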