A Hierarchical model for Rugby prediction#

In this example, we're going to reproduce the first model described in Baio and Blangiardo [2010] using PyMC. We then show how to sample from the posterior predictive distribution to simulate championship outcomes from the scored points, which are the modeled quantities.

We apply the results of the paper to the Six Nations Championship, which is a competition between Italy, Ireland, Scotland, England, France and Wales.

Motivation#

Your estimate of the strength of a team depends on your estimates of the other teams' strengths.

Ireland are a stronger team than Italy, for example - but by how much?

The results for 2014 are sourced from Wikipedia. I have manually added the subsequent years (2015, 2016, 2017), also pulled from Wikipedia.

  • We want to infer a latent parameter - the 'strength' of a team - based only on their scoring intensity. All we have are their scores and results; we can't measure the 'strength' of a team directly.

  • Probabilistic programming is a brilliant paradigm for modeling these latent parameters.

  • The aim is to build a model for the upcoming Six Nations in 2018.

Attention

This notebook uses libraries that are not PyMC dependencies and therefore need to be installed specifically to run this notebook. Open the dropdown below for extra guidance.

Extra dependencies install instructions

In order to run this notebook (either locally or on Binder) you will need a working PyMC installation with all optional dependencies, plus some extra dependencies. For advice on installing PyMC itself, please refer to Installation.

You can install these dependencies with your preferred package manager; as an example, we provide the pip and conda commands below.

$ pip install seaborn numba

Note that if you want (or need) to install the packages from inside the notebook instead of the command line, you can install the packages by running a variation of the pip command:

import sys

!{sys.executable} -m pip install seaborn numba

You should not run !pip install, as it might install the package in a different environment, so the package would not be available from the Jupyter notebook even though it is installed.

Another alternative is using conda instead:

$ conda install seaborn numba

When installing scientific Python packages with conda, we recommend using conda-forge.

!date
Sat 02 Apr 2022 03:24:55 EEST

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc as pm
import pytensor.tensor as pt
import seaborn as sns

from matplotlib.ticker import StrMethodFormatter

%matplotlib inline
az.style.use("arviz-darkgrid")
plt.rcParams["figure.constrained_layout.use"] = False

This is a rugby prediction exercise, so we'll load some data. We've taken this from Wikipedia and BBC Sports.

try:
    # use the local copy of the data if it is available
    df_all = pd.read_csv("../data/rugby.csv", index_col=0)
except FileNotFoundError:
    # otherwise download it with the PyMC data utility
    df_all = pd.read_csv(pm.get_data("rugby.csv"), index_col=0)

What do we want to infer?#

  • We want to infer the latent parameters (every team’s strength) that are generating the data we observe (the scorelines).

  • Moreover, we know that the scorelines are a noisy measurement of team strength, so ideally, we want a model that makes it easy to quantify our uncertainty about the underlying strengths.

  • Often we can't solve the Bayesian model analytically, so we have to estimate it.

  • If we can't solve something, approximate it.

  • Markov chain Monte Carlo (MCMC) does exactly that: it draws samples from the posterior (see the toy sketch after this list).

  • Fortunately, this algorithm can be applied to almost any model.
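As a minimal sketch of what this looks like in practice (a toy one-parameter model, not the rugby model):

# A toy model, unrelated to the rugby data: infer the mean of a few
# observations. pm.sample draws posterior samples using MCMC (NUTS).
with pm.Model():
    theta = pm.Normal("theta", mu=0, sigma=1)  # prior
    pm.Normal("obs", mu=theta, sigma=1, observed=[0.2, -0.1, 0.4])  # likelihood
    toy_idata = pm.sample(500, tune=500, chains=2)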

What do we want?#

  • We want to quantify our uncertainty

  • We also want to use this to build a generative model

  • We want the answers as distributions, not as point estimates

Visualization/EDA#

We should do some exploratory data analysis of this dataset.

The plots should be fairly self-explanatory; we'll look at things like the difference between teams in terms of their scores.

df_all.describe()
       home_score  away_score         year
count   60.000000   60.000000    60.000000
mean    23.500000   19.983333  2015.500000
std     14.019962   12.911028     1.127469
min      0.000000    0.000000  2014.000000
25%     16.000000   10.000000  2014.750000
50%     20.500000   18.000000  2015.500000
75%     27.250000   23.250000  2016.250000
max     67.000000   63.000000  2017.000000
# Let's look at the tail end of this dataframe
df_all.tail()
   home_team away_team  home_score  away_score  year
55     Italy    France          18          40  2017
56   England  Scotland          61          21  2017
57  Scotland     Italy          29           0  2017
58    France     Wales          20          18  2017
59   Ireland   England          13           9  2017

There are a few things here that we don't need, such as the year; our model ignores it, but it is something that could improve a future model.

Firstly, let us look at differences in scores by year.

df_all["difference"] = np.abs(df_all["home_score"] - df_all["away_score"])
(
    df_all.groupby("year")["difference"]
    .mean()
    .plot(
        kind="bar",
        title="Average magnitude of scores difference Six Nations",
        yerr=df_all.groupby("year")["difference"].std(),
    )
    .set_ylabel("Average (abs) point difference")
);
[Figure: average magnitude of score difference in the Six Nations, by year]

We can see that the standard deviation is large, so we can't say anything conclusive about these year-to-year differences. Let's look country by country.
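We can put numbers on that claim using the difference column computed above:

# mean and standard deviation of the absolute score difference, per year
df_all.groupby("year")["difference"].agg(["mean", "std"])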

df_all["difference_non_abs"] = df_all["home_score"] - df_all["away_score"]

Let us first look at a pivot table of the mean of this difference, broken down by year.

df_all.pivot_table("difference_non_abs", "home_team", "year")
year            2014       2015       2016       2017
home_team
England     7.000000  20.666667   7.500000  21.333333
France      6.666667   0.000000  -2.333333   4.000000
Ireland    28.000000   8.500000  17.666667   7.000000
Italy     -21.000000 -31.000000 -23.500000 -33.666667
Scotland  -11.000000 -12.000000   2.500000  16.666667
Wales      25.666667   1.000000  22.000000   4.000000

Now let's plot this by home team, aggregated over the years.

(
    df_all.pivot_table("difference_non_abs", "home_team")
    .rename_axis("Home_Team")
    .plot(kind="bar", rot=0, legend=False)
    .set_ylabel("Score difference Home team and away team")
);
[Figure: average score difference by home team]

You can see that Italy and Scotland have negative score differences on average at home. You can also see that England, Ireland and Wales have been the strongest teams at home lately.

(
    df_all.pivot_table("difference_non_abs", "away_team")
    .rename_axis("Away_Team")
    .plot(kind="bar", rot=0, legend=False)
    .set_ylabel("Score difference Home team and away team")
);
[Figure: average score difference by away team]

This indicates that Italy, Scotland and France all have poor form away from home. England suffers the least when playing away from home. This aggregate view doesn't take into account the strength of the teams.

Let us look a bit more at a time series plot of the average score difference over the years.

We see some changes in team behaviour, and we also see that Italy is a poor team.

g = sns.FacetGrid(df_all, col="home_team", col_wrap=2, height=5)
g.map(sns.scatterplot, "year", "difference_non_abs")
g.fig.autofmt_xdate()
[Figure: score difference over the years, faceted by home team]
g = sns.FacetGrid(df_all, col="away_team", col_wrap=2, height=5)
g = g.map(plt.scatter, "year", "difference_non_abs").set_axis_labels("Year", "Score Difference")
g.fig.autofmt_xdate()
[Figure: score difference over the years, faceted by away team]

You can see some interesting things here, like Wales being good away from home in 2015. In that year they won three away games, including a win over Italy by 40 points or so.

Now that we've got a feel for the data, we can proceed to describing the model.

What assumptions do we know for our ‘generative story’?#

  • We know that the Six Nations in Rugby only has 6 teams - they each play each other once

  • We have data from the last few years

  • We also know that scoring in sports is often modelled as a Poisson process (a quick check of this assumption follows this list)

  • We consider home advantage to be a strong effect in sports
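Before relying on the Poisson assumption, a quick sanity check is to compare the mean and variance of the observed scores:

# A Poisson variable has variance equal to its mean. Marginally the scores
# are overdispersed (each match has its own intensity theta), so the
# variance exceeding the mean here does not by itself invalidate the model.
df_all[["home_score", "away_score"]].agg(["mean", "var"])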

The model.#

The league is made up of a total of T = 6 teams, playing each other once in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as \(y_{g1}\) and \(y_{g2}\) respectively.

The vector of observed counts \(\mathbf{y} = (y_{g1}, y_{g2})\) is modelled as independent Poisson: \(y_{gj} \mid \theta_{gj} \sim \text{Poisson}(\theta_{gj})\), where the \(\theta\) parameters represent the scoring intensity in the g-th game for the team playing at home (j=1) and away (j=2), respectively.

We model these parameters according to a formulation that has been used widely in the statistical literature, assuming a log-linear random effect model:

\(\log \theta_{g1} = home + att_{h(g)} + def_{a(g)}\)

\(\log \theta_{g2} = att_{a(g)} + def_{h(g)}\)

  • The parameter home represents the advantage for the team hosting the game and we assume that this effect is constant for all the teams and throughout the season

  • The scoring intensity is determined jointly by the attack and defense ability of the two teams involved, represented by the parameters att and def, respectively

  • Conversely, for each t = 1, …, T, the team-specific effects are modelled as exchangeable from a common distribution:

  • \(att_{t} \sim \text{Normal}(\mu_{att}, \tau_{att})\) and \(def_{t} \sim \text{Normal}(\mu_{def}, \tau_{def})\)

  • We did some munging above and adjustments of the data to make it tidier for our model.

  • Applying the log link to the home and away scoring intensities is a standard trick in the sports analytics literature (a worked numeric example follows this list)
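To make the link function concrete, here is a worked example with hypothetical effect values (loosely based on the posterior medians found later in this notebook, and including the shared intercept used in the PyMC model below):

# Hypothetical effects for a single match, Ireland hosting Italy.
# These numbers are for illustration only.
intercept, home = 2.9, 0.25
att = {"Ireland": 0.11, "Italy": -0.33}      # attack effects (zero-centered)
defense = {"Ireland": -0.39, "Italy": 0.58}  # defense effects (zero-centered)

theta_home = np.exp(intercept + home + att["Ireland"] + defense["Italy"])
theta_away = np.exp(intercept + att["Italy"] + defense["Ireland"])
print(round(theta_home), round(theta_away))  # roughly 47 and 9 expected points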

Building of the model#

We now build the model in PyMC, specifying the global parameters, the team-specific parameters and the likelihood function

plt.rcParams["figure.constrained_layout.use"] = True
home_idx, teams = pd.factorize(df_all["home_team"], sort=True)
away_idx, _ = pd.factorize(df_all["away_team"], sort=True)
coords = {"team": teams}
with pm.Model(coords=coords) as model:
    # constant data
    home_team = pm.ConstantData("home_team", home_idx, dims="match")
    away_team = pm.ConstantData("away_team", away_idx, dims="match")

    # global model parameters
    home = pm.Normal("home", mu=0, sigma=1)
    sd_att = pm.HalfNormal("sd_att", sigma=2)
    sd_def = pm.HalfNormal("sd_def", sigma=2)
    intercept = pm.Normal("intercept", mu=3, sigma=1)

    # team-specific model parameters
    atts_star = pm.Normal("atts_star", mu=0, sigma=sd_att, dims="team")
    defs_star = pm.Normal("defs_star", mu=0, sigma=sd_def, dims="team")

    atts = pm.Deterministic("atts", atts_star - pt.mean(atts_star), dims="team")
    defs = pm.Deterministic("defs", defs_star - pt.mean(defs_star), dims="team")
    home_theta = pt.exp(intercept + home + atts[home_idx] + defs[away_idx])
    away_theta = pt.exp(intercept + atts[away_idx] + defs[home_idx])

    # likelihood of observed data
    home_points = pm.Poisson(
        "home_points",
        mu=home_theta,
        observed=df_all["home_score"],
        dims=("match"),
    )
    away_points = pm.Poisson(
        "away_points",
        mu=away_theta,
        observed=df_all["away_score"],
        dims=("match"),
    )
    trace = pm.sample(1000, tune=1500, cores=4)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [home, sd_att, sd_def, intercept, atts_star, defs_star]
100.00% [10000/10000 00:24<00:00 Sampling 4 chains, 0 divergences]
Sampling 4 chains for 1_500 tune and 1_000 draw iterations (6_000 + 4_000 draws total) took 25 seconds.
  • We specified the model and the likelihood function

  • All this runs on a PyTensor graph under the hood

az.plot_trace(trace, var_names=["intercept", "home", "sd_att", "sd_def"], compact=False);
[Figure: trace plots of intercept, home, sd_att and sd_def]

Let us apply good statistical workflow practices and look at the various evaluation metrics to see if our NUTS sampler converged.

az.plot_energy(trace, figsize=(6, 4));
[Figure: energy plot of the trace]
az.summary(trace, kind="diagnostics")
                     mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
home                     0.001    0.001    2694.0    2250.0    1.0
intercept                0.001    0.000    2743.0    2455.0    1.0
atts_star[England]       0.004    0.003    1243.0    1121.0    1.0
atts_star[France]        0.004    0.003    1247.0    1111.0    1.0
atts_star[Ireland]       0.004    0.003    1208.0    1160.0    1.0
atts_star[Italy]         0.004    0.003    1352.0    1375.0    1.0
atts_star[Scotland]      0.004    0.003    1318.0    1178.0    1.0
atts_star[Wales]         0.004    0.003    1252.0    1122.0    1.0
defs_star[England]       0.007    0.005    1110.0     899.0    1.0
defs_star[France]        0.007    0.006    1091.0     876.0    1.0
defs_star[Ireland]       0.007    0.005    1113.0     881.0    1.0
defs_star[Italy]         0.007    0.006    1075.0     846.0    1.0
defs_star[Scotland]      0.007    0.006    1081.0     854.0    1.0
defs_star[Wales]         0.007    0.006    1098.0     889.0    1.0
sd_att                   0.004    0.003    1767.0    1709.0    1.0
sd_def                   0.006    0.004    1762.0    1662.0    1.0
atts[England]            0.001    0.000    5489.0    3204.0    1.0
atts[France]             0.001    0.000    5441.0    3286.0    1.0
atts[Ireland]            0.001    0.000    5257.0    3260.0    1.0
atts[Italy]              0.001    0.001    5157.0    2957.0    1.0
atts[Scotland]           0.001    0.001    4926.0    3003.0    1.0
atts[Wales]              0.001    0.000    4537.0    3306.0    1.0
defs[England]            0.001    0.001    4779.0    3293.0    1.0
defs[France]             0.001    0.001    4183.0    2917.0    1.0
defs[Ireland]            0.001    0.001    4766.0    3227.0    1.0
defs[Italy]              0.001    0.000    4419.0    3561.0    1.0
defs[Scotland]           0.001    0.000    4068.0    3290.0    1.0
defs[Wales]              0.001    0.001    4177.0    3164.0    1.0

Our model has converged well and \(\hat{R}\) looks good.

Let us look at some of the stats, just to verify that our model has returned sensible estimates. We can see that some teams are stronger than others; this is what we would expect for the attack parameters.

trace_hdi = az.hdi(trace)
trace_hdi["atts"]
<xarray.DataArray 'atts' (team: 6, hdi: 2)>
array([[ 0.18302232,  0.33451681],
       [-0.16672247,  0.00056686],
       [ 0.02480029,  0.18070957],
       [-0.44275554, -0.2336128 ],
       [-0.20865349, -0.03104828],
       [ 0.09821099,  0.25031702]])
Coordinates:
  * team     (team) <U8 'England' 'France' 'Ireland' 'Italy' 'Scotland' 'Wales'
  * hdi      (hdi) <U6 'lower' 'higher'
trace.posterior["atts"].median(("chain", "draw"))
<xarray.DataArray 'atts' (team: 6)>
array([ 0.25676819, -0.08392998,  0.10809197, -0.33498374, -0.11633869,
        0.17182582])
Coordinates:
  * team     (team) <U8 'England' 'France' 'Ireland' 'Italy' 'Scotland' 'Wales'

Results#

From the above we can start to understand the different distributions of attacking strength and defensive strength. These are probabilistic estimates and help us better understand the uncertainty in sports analytics.

_, ax = plt.subplots(figsize=(12, 6))

ax.scatter(teams, trace.posterior["atts"].median(dim=("chain", "draw")), color="C0", alpha=1, s=100)
ax.vlines(
    teams,
    trace_hdi["atts"].sel({"hdi": "lower"}),
    trace_hdi["atts"].sel({"hdi": "higher"}),
    alpha=0.6,
    lw=5,
    color="C0",
)
ax.set_xlabel("Teams")
ax.set_ylabel("Posterior Attack Strength")
ax.set_title("HDI of Team-wise Attack Strength");
[Figure: HDI of team-wise attack strength]

This is one of the powerful things about Bayesian modelling: we get uncertainty quantification for our estimates. We have a Bayesian credible interval for the attack strength of each country.

We can see an overlap between Ireland, Wales and England, which is what you'd expect since these teams have won in recent years.

Italy is well behind everyone else, which is what we'd expect, and there's an overlap between Scotland and France, which seems about right.

There are probably some effects we'd like to add in here, like weighting more recent results more strongly. However, that would be a much more complicated model; a hypothetical sketch of one such weighting follows.
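For illustration, here is one hypothetical way such a recency weighting could be sketched in PyMC, scaling each match's log-likelihood with pm.Potential. The decay rate and the simplified (non-hierarchical) priors are arbitrary assumptions, not fitted choices:

# Hypothetical sketch: down-weight older seasons with an exponential decay.
decay = 0.8  # arbitrary assumption: each extra year of age shrinks a match's weight by 20%
weights = decay ** (df_all["year"].max() - df_all["year"]).to_numpy()

with pm.Model(coords=coords) as weighted_model:
    home = pm.Normal("home", mu=0, sigma=1)
    intercept = pm.Normal("intercept", mu=3, sigma=1)
    atts_star = pm.Normal("atts_star", mu=0, sigma=1, dims="team")
    defs_star = pm.Normal("defs_star", mu=0, sigma=1, dims="team")
    atts = atts_star - atts_star.mean()
    defs = defs_star - defs_star.mean()
    home_theta = pt.exp(intercept + home + atts[home_idx] + defs[away_idx])
    away_theta = pt.exp(intercept + atts[away_idx] + defs[home_idx])
    # scale each match's Poisson log-likelihood by its recency weight
    home_logp = pm.logp(pm.Poisson.dist(mu=home_theta), df_all["home_score"].to_numpy())
    away_logp = pm.logp(pm.Poisson.dist(mu=away_theta), df_all["away_score"].to_numpy())
    pm.Potential("weighted_loglik", (weights * (home_logp + away_logp)).sum())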

# subclass arviz labeller to omit the variable name
class TeamLabeller(az.labels.BaseLabeller):
    def make_label_flat(self, var_name, sel, isel):
        sel_str = self.sel_to_str(sel, isel)
        return sel_str
ax = az.plot_forest(trace, var_names=["atts"], labeller=TeamLabeller())
ax[0].set_title("Team Offense");
[Figure: forest plot, Team Offense]
ax = az.plot_forest(trace, var_names=["defs"], labeller=TeamLabeller())
ax[0].set_title("Team Defense");
[Figure: forest plot, Team Defense]

Good teams like Ireland and England have strongly negative defense effects, which is what we expect: strong teams should have strongly positive effects in attack and strongly negative effects in defense.

This approach of looking at parameters and examining them is part of a good statistical workflow. We also think that perhaps our priors could be better specified; however, this is beyond the scope of this article. For a good discussion of statistical workflow, we recommend Robust Statistical Workflow with RStan.

Let's do some other plots, so we can see the range of our defensive effects.

az.plot_posterior(trace, var_names=["defs"]);
[Figure: posterior plots of the defs parameters]

We can see that Ireland's mean is -0.39, which means we expect Ireland to have a strong defense. This matches intuition: even in games Ireland loses, they don't lose by, say, 50 points. We can also see that the 94% HDI is between -0.491 and -0.28.

In comparison, Italy shows a strong positive effect, with a mean of 0.58 and an HDI from 0.51 to 0.65. This means we'd expect Italy to concede a lot of points compared to what it scores. Given that Italy often loses by 30 to 60 points, this seems correct.
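These numbers can be read directly off the trace; for example, a quick summary of just these two teams' defense effects:

# posterior means and HDIs for the defense effects of Ireland and Italy
az.summary(trace, var_names=["defs"], coords={"team": ["Ireland", "Italy"]}, kind="stats")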

This also informs what other priors we could bring into the model. We could bring in some sort of world ranking as a prior.

As of December 2017, the rugby rankings indicated that England was 2nd in the world, Ireland 3rd, Scotland 5th, Wales 7th, France 9th and Italy 14th. We could bring that into a model, and it could help explain why Italy sits apart from the other teams.
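As a purely hypothetical sketch of that idea, the rankings could be mapped to prior means for the attack effects; the mapping and its 0.05 scale below are arbitrary assumptions:

# December 2017 world rankings, mapped to zero-centered prior means
world_rank = {"England": 2, "Ireland": 3, "Scotland": 5, "Wales": 7, "France": 9, "Italy": 14}
rank_arr = np.array([world_rank[t] for t in teams])
prior_mu = -0.05 * (rank_arr - rank_arr.mean())  # better rank -> higher prior mean

with pm.Model(coords=coords) as ranked_model:
    atts_star = pm.Normal("atts_star", mu=prior_mu, sigma=1, dims="team")
    # ... the rest of the model would be unchanged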

Now let’s simulate who wins over a total of 4000 simulations, one per sample in the posterior.

with model:
    pm.sample_posterior_predictive(trace, extend_inferencedata=True)
pp = trace.posterior_predictive
const = trace.constant_data
team_da = trace.posterior.team
100.00% [4000/4000 00:00<00:00]

The posterior predictive samples contain the points scored by each team in each match. We modeled, and therefore simulated, the scores according to each team's attacking and defensive strengths, using the points scored as the observed variables.

Our goal now is to see who wins the competition, so we can estimate the probability each team has of winning the whole championship. For that we need to convert the match scores into championship points:

# fmt: off
pp["home_win"] = (
    (pp["home_points"] > pp["away_points"]) * 3     # home team wins and gets 3 points
    + (pp["home_points"] == pp["away_points"]) * 2  # tie -> home team gets 2 points
)
pp["away_win"] = (
    (pp["home_points"] < pp["away_points"]) * 3
    + (pp["home_points"] == pp["away_points"]) * 2
)
# fmt: on

Then add the points each team has collected throughout all matches:

groupby_sum_home = pp.home_win.groupby(team_da[const.home_team]).sum()
groupby_sum_away = pp.away_win.groupby(team_da[const.away_team]).sum()

pp["teamscores"] = groupby_sum_home + groupby_sum_away

And finally generate the ranks of all teams for each of the 4000 simulations. As our data is stored in xarray objects inside the InferenceData object, we will use xarray-einstats:

from xarray_einstats.stats import rankdata

pp["rank"] = rankdata(-pp["teamscores"], dims="team", method="min")
pp[["rank"]].sel(team="England")
<xarray.Dataset>
Dimensions:  (chain: 4, draw: 1000)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
    team     <U7 'England'
Data variables:
    rank     (chain, draw) int64 2 1 2 2 2 1 1 1 2 1 1 ... 1 1 1 1 2 1 3 1 2 2 1
Attributes:
    created_at:                 2022-04-02T00:25:59.622442
    arviz_version:              0.12.0
    inference_library:          pymc
    inference_library_version:  4.0.0b6

As you can see, we now have a collection of 4000 integers between 1 and 6 for each team, 1 meaning they win the competition. We can use a histogram with bin edges at half integers to count and normalize how many times each team finishes in each position:

from xarray_einstats.numba import histogram

bin_edges = np.arange(7) + 0.5
data_sim = (
    histogram(pp["rank"], dims=("chain", "draw"), bins=bin_edges, density=True)
    .rename({"bin": "rank"})
    .assign_coords(rank=np.arange(6) + 1)
)

Now that we have reduced the data to a two-dimensional array, we will convert it to a pandas DataFrame, which is a more convenient format for this kind of plot:

idx_dim, col_dim = data_sim.dims
sim_table = pd.DataFrame(data_sim, index=data_sim[idx_dim], columns=data_sim[col_dim])
fig, ax = plt.subplots(figsize=(8, 4))
ax = sim_table.T.plot(kind="barh", ax=ax)
ax.xaxis.set_major_formatter(StrMethodFormatter("{x:.1%}"))
ax.set_xlabel("Rank-wise Probability of results for all six teams")
ax.set_yticklabels(np.arange(1, 7))
ax.set_ylabel("Ranks")
ax.invert_yaxis()
ax.legend(loc="best", fontsize="medium");
[Figure: rank-wise probability of results for all six teams]

We see that, according to this model, Ireland finishes with the most points about 60% of the time, England about 45% of the time, and Wales about 10% of the time. (Note that these probabilities do not sum to 100%, since there is a non-zero chance of a tie atop the table.)
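The same probabilities can be computed directly from the rank array, without the histogram, by asking how often each team finishes first:

# fraction of simulations in which each team tops the table (rank 1);
# ties mean these fractions can sum to slightly more than 1
p_win = (pp["rank"] == 1).mean(dim=("chain", "draw"))
p_win.to_series().sort_values(ascending=False)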

As an Irish rugby fan, I like this model. However, it indicates some problems with shrinkage and bias, since recent form suggests England will win.

Nevertheless, the point of this model was to illustrate how a hierarchical model can be applied to a sports analytics problem, and to illustrate the power of PyMC.

Covariates#

We should do some exploration of the variables

az.plot_pair(
    trace,
    var_names=["atts"],
    kind="scatter",
    divergences=True,
    textsize=25,
    marginals=True,
    figsize=(10, 10),  # figsize belongs inside the call, not as a loose statement
);
[Figure: pair plot of the atts parameters]

We observe that there isn't a lot of correlation between these covariates, other than that weaker teams like Italy have more negative distributions of these variables. Nevertheless, this is a good method to get some insight into how the variables are behaving.
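To back that observation with numbers, we can compute the posterior correlation matrix of the attack effects (a small sketch using numpy and pandas):

# posterior correlation between the team attack effects
atts_flat = trace.posterior["atts"].stack(sample=("chain", "draw"))
corr = np.corrcoef(atts_flat)  # rows are teams after stacking
pd.DataFrame(corr, index=teams, columns=teams).round(2)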

Authors#

References#

[1]

Gianluca Baio and Marta Blangiardo. Bayesian hierarchical model for the prediction of football results. Journal of Applied Statistics, 37(2):253–264, 2010.

Watermark#

%load_ext watermark
%watermark -n -u -v -iv -w -p xarray,aeppl,numba,xarray_einstats
Last updated: Sat Apr 02 2022

Python implementation: CPython
Python version       : 3.9.10
IPython version      : 8.0.1

xarray         : 2022.3.0
aeppl          : 0.0.27
numba          : 0.55.1
xarray_einstats: 0.2.0

matplotlib: 3.5.1
numpy     : 1.21.5
pandas    : 1.4.1
seaborn   : 0.11.2
pymc      : 4.0.0b6
pytensor  : 2.5.1
arviz     : 0.12.0

Watermark: 2.3.0

License notice#

All the notebooks in this example gallery are provided under the MIT License which allows modification, and redistribution for any use provided the copyright and license notices are preserved.

Citing PyMC examples#

To cite this notebook, use the DOI provided by Zenodo for the pymc-examples repository.

Important

Many notebooks are adapted from other sources: blogs, books… In such cases you should cite the original source as well.

Also remember to cite the relevant libraries used by your code.

Here is a citation template in BibTeX:

@incollection{citekey,
  author    = "<notebook authors, see above>",
  title     = "<notebook title>",
  editor    = "PyMC Team",
  booktitle = "PyMC examples",
  doi       = "10.5281/zenodo.5654871"
}

which once rendered could look like: