DiscreteMarkovChain

class pymc_experimental.distributions.DiscreteMarkovChain(*args, steps=None, n_lags=1, **kwargs)

A Discrete Markov Chain is a sequence of random variables

\[\{x_t\}_{t=0}^T\]

where the transition probability \(P(x_t \mid x_{t-1})\) depends only on the state of the system at the previous step, \(x_{t-1}\).
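For illustration, a three-state chain could use the transition matrix (values arbitrary)

\[P = \begin{pmatrix} 0.9 & 0.05 & 0.05 \\ 0.1 & 0.8 & 0.1 \\ 0.2 & 0.2 & 0.6 \end{pmatrix},\]

where \(P_{ij} = P(x_t = j \mid x_{t-1} = i)\), so each row sums to 1.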

Parameters:
  • P (tensor) – Matrix of transition probabilities between states. Rows must sum to 1. One of P or logit_P must be provided.

  • logit_P (tensor, optional) – Matrix of transition logits. Converted to probabilities via a softmax (a sketch of this conversion follows the parameter list). One of P or logit_P must be provided.

  • steps (tensor, optional) – Length of the Markov chain. Only needed if shape is not provided.

  • init_dist (unnamed distribution, optional) –

    Vector distribution for initial values. Unnamed refers to distributions created with the .dist() API. The distribution should have shape (n_states,); if not, it will be automatically resized.

    Warning

    init_dist will be cloned, rendering it independent of the one passed as input.
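A minimal sketch of how logit_P relates to P, assuming the softmax is applied along the last axis (row-wise, so that each row of the result sums to 1); the logit values here are arbitrary:

>>> import numpy as np
>>> from scipy.special import softmax
>>> logit_P = np.array([[2.0, 0.0, -1.0], [0.0, 1.0, 0.0], [-1.0, -1.0, 3.0]])
>>> P = softmax(logit_P, axis=-1)  # each row now sums to 1
>>> np.allclose(P.sum(axis=-1), 1.0)
True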


Examples

Create a Markov chain of length 100 with 3 states. The number of states is inferred from the shape of P (3 in this case).

>>> import numpy as np
>>> import pymc as pm
>>> from pymc_experimental.distributions import DiscreteMarkovChain
>>> with pm.Model() as model:
...     P = pm.Dirichlet("P", a=[1, 1, 1], size=(3,))
...     init_dist = pm.Categorical.dist(p=np.full(3, 1 / 3))
...     markov_chain = DiscreteMarkovChain("markov_chain", P=P, init_dist=init_dist, shape=(100,))
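
To inspect draws implied by this prior, one can sample from it; a minimal sketch using standard PyMC prior predictive sampling (variable names follow the example above):

>>> with model:
...     idata = pm.sample_prior_predictive()
>>> states = idata.prior["markov_chain"].values  # array of shape (chains, draws, 100)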

Methods

  • __init__()

  • dist([P, logit_P, steps, init_dist, n_lags]) – Creates a tensor variable corresponding to the cls distribution.

  • rv_op(P, steps, init_dist, n_lags[, size])
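
As a sketch of the .dist() API with a fixed transition matrix (the matrix values are arbitrary; pm.draw is standard PyMC):

>>> import numpy as np
>>> import pymc as pm
>>> from pymc_experimental.distributions import DiscreteMarkovChain
>>> P = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])
>>> init_dist = pm.Categorical.dist(p=np.full(3, 1 / 3))
>>> chain = DiscreteMarkovChain.dist(P=P, init_dist=init_dist, shape=(100,))
>>> draws = pm.draw(chain, draws=5)  # integer array of shape (5, 100)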