A Markov chain is a random process describing a sequence of events in which the future depends only on the present and not on the past. This property, sometimes known as the Markov condition, can be formulated in several equivalent ways (see image above). The set of values that a Markov chain can take at any given time t is referred to as its state space. One strength of Markov chains is that they are easy to describe analytically, particularly when both time and the state space are discrete and the state space is finite. In this setting, a Markov chain is fully described numerically by its transition matrix, an N×N matrix P whose entry P_ij specifies the probability of making a transition from state s_i at time t to state s_j at time t+1 (see left image below). Two key properties of such matrices are that their entries are non-negative and that each row sums to 1. An equivalent way to represent a Markov chain is as a graph (see right image below), in which each vertex corresponds to a state and each edge carries the probability of transitioning between the states it connects. The graph and matrix representations of Markov chains are unified by the mathematical framework of spectral graph theory, which relates various properties of the transition matrix to the underlying transition structure of the chain. A prime example is the eigenvalues and eigenvectors of transition matrices, which are the main focus of this tutorial.
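As a concrete sketch of these ideas, the snippet below builds a small hypothetical 3-state transition matrix (the specific probabilities are made up for illustration), checks the row-stochastic properties just described, and inspects the eigenvalues, whose largest magnitude is always 1 for such a matrix:

```python
import numpy as np

# A hypothetical 3-state transition matrix (states s_0, s_1, s_2).
# Entry P[i, j] is the probability of moving from state s_i to s_j.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.3, 0.7],
])

# Row-stochastic check: non-negative entries, each row sums to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)

# Eigenvalues of P (computed via P^T so the eigenvectors are the
# left eigenvectors of P); the largest in magnitude is always 1.
eigvals, eigvecs = np.linalg.eig(P.T)
print(np.sort(np.abs(eigvals))[::-1])

# The left eigenvector for eigenvalue 1, normalized to sum to 1,
# is the stationary distribution pi, satisfying pi P = pi.
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()
print(pi)
```

The same matrix can be read as the weighted adjacency matrix of the graph representation: each nonzero entry P[i, j] is a directed edge from vertex i to vertex j with that weight.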