{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "(DEMetropolis_comparisons)=\n", "# DEMetropolis and DEMetropolis(Z) Algorithm Comparisons\n", ":::{post} January 18, 2023\n", ":tags: DEMetropolis, gradient-free inference\n", ":category: intermediate, how-to\n", ":author: Michael Osthege, Greg Brunkhorst\n", ":::" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Running on PyMC v0+untagged.9358.g8ea092d\n" ] } ], "source": [ "import time\n", "\n", "import arviz as az\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "import pymc as pm\n", "import scipy.stats as st\n", "\n", "print(f\"Running on PyMC v{pm.__version__}\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "az.style.use(\"arviz-darkgrid\")\n", "rng = np.random.default_rng(1234)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Background\n", "For continuous variables, the default PyMC sampler (NUTS) requires that gradients are computed, which PyMC does through autodifferentiation. However, in some cases, a PyMC model may not be supplied with gradients (for example, by evaluating a numerical model outside of PyMC) and an alternative sampler is necessary. Differential evolution (DE) Metropolis samplers are an efficient choice for gradient-free inference. This notebook compares the DEMetropolis and the DEMetropolisZ samplers in PyMC to help determine which is a better option for a given problem. \n", "\n", "The samplers are based on {cite:t}terBraak2006markov and {cite:t}terBraak2008differential and are described in the notebook [DEMetropolis(Z) Sampler Tuning](DEMetropolisZ_sampler_tuning). 
The idea behind differential evolution is to use randomly selected draws from other chains (DEMetropolis), or from past draws of the current chain (DEMetropolis(Z)), to make more educated proposals, thus improving sampling efficiency over the standard Metropolis implementation. Note that the PyMC implementation of DEMetropolisZ is slightly different from that in {cite:t}`terBraak2008differential`: each DEMetropolisZ chain only looks into its own history, whereas the {cite:t}`terBraak2008differential` algorithm includes some mixing across chains.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, 10- and 50-dimensional multivariate normal target densities will be sampled with both the DEMetropolis and DEMetropolisZ samplers. The samplers will be evaluated based on effective sample size, sampling time, and MCMC chain convergence $(\\hat{R})$. They will also be compared to NUTS for benchmarking. Finally, MCMC traces will be compared to the analytically calculated target probability densities to assess potential bias in high dimensions. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Key Take-Aways (TL;DR)\n", "Based on the results in this notebook, use DEMetropolisZ for lower dimensional problems ($\\approx10D$) and DEMetropolis for higher dimensional problems ($\\approx50D$).\n", "* The DEMetropolisZ sampler was more efficient (ESS per second of sampling) than DEMetropolis.\n", "* The DEMetropolisZ sampler had better chain convergence $(\\hat{R})$ than DEMetropolis.\n", "* Bias was evident in the DEMetropolisZ sampler at 50 dimensions, resulting in reduced variance compared to the target distribution. DEMetropolis sampled the high dimensional target distribution more accurately, using $2D$ chains (twice the number of model parameters). \n", "* As expected, NUTS was more efficient and accurate than either Metropolis-based algorithm. 
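" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the proposal mechanism from the Background concrete, here is a minimal NumPy sketch of the differential evolution proposal step — an illustration only, not PyMC's implementation; the helper name `de_proposal` and the `epsilon` jitter scale are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)


def de_proposal(current, other_chains, gamma=None, epsilon=1e-4):
    # DE proposal (ter Braak 2006): x* = x + gamma * (x_R1 - x_R2) + noise,
    # where x_R1 and x_R2 are the states of two randomly chosen other chains.
    d = current.size
    if gamma is None:
        # default scaling factor recommended by ter Braak (2006)
        gamma = 2.38 / np.sqrt(2 * d)
    r1, r2 = rng.choice(len(other_chains), size=2, replace=False)
    diff = other_chains[r1] - other_chains[r2]
    return current + gamma * diff + epsilon * rng.normal(size=d)


# toy usage: propose a new point for chain 0 out of 10 chains in 3 dimensions
chains = rng.normal(size=(10, 3))
proposal = de_proposal(chains[0], chains[1:])
print(proposal.shape)  # (3,)
```

DEMetropolis(Z) works the same way, except the difference vector is built from two randomly chosen points in the chain's own past. In PyMC, this bookkeeping is handled internally by the DEMetropolis and DEMetropolisZ step methods." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "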
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Helper Functions\n", "This section defines helper functions that will be used throughout the notebook. " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### D-dimensional MvNormal Target Distribution and PyMC Model\n", "gen_mvnormal_params generates the parameters for the target distribution, which is a multivariate normal distribution with $\\sigma^2$ = [1, 2, 3, 4, 5] in the first five dimensions and some correlation thrown in." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def gen_mvnormal_params(D):\n", " # means=zero\n", " mu = np.zeros(D)\n", " # sigma**2 = 1 to start\n", " cov = np.eye(D)\n", " # manually adjust the first 5 dimensions\n", " # sigma**2 in the first 5 dimensions = 1, 2, 3, 4, 5\n", " # with a little covariance added\n", " cov[:5, :5] = np.array(\n", " [\n", " [1, 0.5, 0, 0, 0],\n", " [0.5, 2, 2, 0, 0],\n", " [0, 2, 3, 0, 0],\n", " [0, 0, 0, 4, 4],\n", " [0, 0, 0, 4, 5],\n", " ]\n", " )\n", " return mu, cov" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "make_model accepts the multivariate normal parameters mu and cov and outputs a PyMC model. " ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def make_model(mu, cov):\n", " with pm.Model() as model:\n", " x = pm.MvNormal(\"x\", mu=mu, cov=cov, shape=(len(mu),))\n", " return model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Sampling\n", "sample_model performs MCMC, returns the trace and the sampling duration. 
" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def sample_model(\n", " model, D, run=0, step_class=pm.DEMetropolis, cores=1, chains=1, step_kwargs={}, sample_kwargs={}\n", "):\n", " # sampler name\n", " sampler = step_class.name\n", " # sample model\n", "\n", " # if nuts then do not provide step method\n", " if sampler == \"nuts\":\n", " with model:\n", " step = step_class(**step_kwargs)\n", " t_start = time.time()\n", " idata = pm.sample(\n", " # step=step,\n", " chains=chains,\n", " cores=cores,\n", " initvals={\"x\": [0] * D},\n", " discard_tuned_samples=False,\n", " progressbar=False,\n", " random_seed=2020 + run,\n", " **sample_kwargs\n", " )\n", " t = time.time() - t_start\n", "\n", " # signature for DEMetropolis samplers\n", " else:\n", " with model:\n", " step = step_class(**step_kwargs)\n", " t_start = time.time()\n", " idata = pm.sample(\n", " step=step,\n", " chains=chains,\n", " cores=cores,\n", " initvals={\"x\": [0] * D},\n", " discard_tuned_samples=False,\n", " progressbar=False,\n", " random_seed=2020 + run,\n", " **sample_kwargs\n", " )\n", " t = time.time() - t_start\n", "\n", " return idata, t" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "calc_mean_ess calculates the mean ess for the dimensions of the distribution." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def calc_mean_ess(idata):\n", " return az.ess(idata).x.values.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "calc_mean_rhat calculates the mean $\\hat{R}$ for the dimensions of the distribution." 
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def calc_mean_rhat(idata):\n", " return az.rhat(idata).x.values.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "sample_model_calc_metrics wraps the previously defined functions: samples the model, calculates the metrics and packages the results in a Pandas DataFrame" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def sample_model_calc_metrics(\n", " sampler,\n", " D,\n", " tune,\n", " draws,\n", " cores=1,\n", " chains=1,\n", " run=0,\n", " step_kwargs=dict(proposal_dist=pm.NormalProposal, tune=\"scaling\"),\n", " sample_kwargs={},\n", "):\n", " mu, cov = gen_mvnormal_params(D)\n", " model = make_model(mu, cov)\n", " idata, t = sample_model(\n", " model,\n", " D,\n", " step_class=sampler,\n", " cores=cores,\n", " chains=chains,\n", " run=run,\n", " step_kwargs=step_kwargs,\n", " sample_kwargs=dict(sample_kwargs, **dict(tune=tune, draws=draws)),\n", " )\n", " ess = calc_mean_ess(idata)\n", " rhat = calc_mean_rhat(idata)\n", " results = dict(\n", " Sampler=sampler.__name__,\n", " D=D,\n", " Chains=chains,\n", " Cores=cores,\n", " tune=tune,\n", " draws=draws,\n", " ESS=ess,\n", " Time_sec=t,\n", " ESSperSec=ess / t,\n", " rhat=rhat,\n", " Trace=[idata],\n", " )\n", " return pd.DataFrame(results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "concat_results concatenates the results and does a some data wrangling and calculating. 
" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def concat_results(results):\n", " results_df = pd.concat(results)\n", "\n", " results_df[\"Run\"] = results_df.Sampler + \"\\nChains=\" + results_df.Chains.astype(str)\n", "\n", " results_df[\"ESS_pct\"] = results_df.ESS * 100 / (results_df.Chains * results_df.draws)\n", " return results_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plotting" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "plot_comparison_bars plots the ESS and $\\hat{R}$ results for comparison. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def plot_comparison_bars(results_df):\n", " fig, axes = plt.subplots(1, 3, figsize=(10, 5))\n", " ax = axes[0]\n", " results_df.plot.bar(y=\"ESSperSec\", x=\"Run\", ax=ax, legend=False)\n", " ax.set_title(\"ESS per Second\")\n", " ax.set_xlabel(\"\")\n", " labels = ax.get_xticklabels()\n", "\n", " ax = axes[1]\n", " results_df.plot.bar(y=\"ESS_pct\", x=\"Run\", ax=ax, legend=False)\n", " ax.set_title(\"ESS Percentage\")\n", " ax.set_xlabel(\"\")\n", " labels = ax.get_xticklabels()\n", "\n", " ax = axes[2]\n", " results_df.plot.bar(y=\"rhat\", x=\"Run\", ax=ax, legend=False)\n", " ax.set_title(r\"$\\hat{R}$\")\n", " ax.set_xlabel(\"\")\n", " ax.set_ylim(1)\n", " labels = ax.get_xticklabels()\n", "\n", " plt.suptitle(f\"Comparison of Runs for {D} Dimensional Target Distribution\", fontsize=16)\n", " plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "plot_forest_compare_analytical plots the MCMC results for the first 5 dimensions and compares to the analytically calculated probability density. 
" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def plot_forest_compare_analytical(results_df):\n", " # extract the first 5 dimensions\n", " summaries = []\n", " truncated_traces = []\n", " dimensions = 5\n", " for row in results_df.index:\n", " truncated_trace = results_df.Trace.loc[row].posterior.x[:, :, :dimensions]\n", " truncated_traces.append(truncated_trace)\n", " summary = az.summary(truncated_trace)\n", " summary[\"Run\"] = results_df.at[row, \"Run\"]\n", " summaries.append(summary)\n", " summaries = pd.concat(summaries)\n", "\n", " # plot forest\n", " axes = az.plot_forest(\n", " truncated_traces, combined=True, figsize=(8, 3), model_names=results_df.Run\n", " )\n", " ax = axes[0]\n", "\n", " # plot analytical solution\n", " yticklabels = ax.get_yticklabels()\n", " yticklocs = [tick.__dict__[\"_y\"] for tick in yticklabels]\n", " min, max = axes[0].get_ylim()\n", " width = (max - min) / 6\n", " mins = [ytickloc - (width / 2) for ytickloc in yticklocs]\n", " maxes = [ytickloc + (width / 2) for ytickloc in yticklocs]\n", " sigmas = [np.sqrt(sigma2) for sigma2 in range(1, 6)]\n", " for i, (sigma, min, max) in enumerate(zip(sigmas, mins[::-1], maxes[::-1])):\n", " # scipy.stats.norm to calculate analytical marginal distribution\n", " dist = st.norm(0, sigma)\n", " ax.vlines(dist.ppf(0.03), min, max, color=\"black\", linestyle=\":\")\n", " ax.vlines(dist.ppf(0.97), min, max, color=\"black\", linestyle=\":\")\n", " ax.vlines(dist.ppf(0.25), min, max, color=\"black\", linestyle=\":\")\n", " ax.vlines(dist.ppf(0.75), min, max, color=\"black\", linestyle=\":\")\n", " if i == 0:\n", " ax.text(dist.ppf(0.97) + 0.2, min, \"Analytical Solutions\\n(Dotted)\", fontsize=8)\n", "\n", " # legend\n", " labels = ax.get_legend().__dict__[\"texts\"]\n", " labels = [label.__dict__[\"_text\"] for label in labels]\n", " handles = ax.get_legend().__dict__[\"legendHandles\"]\n", " ax.legend(\n", " 
handles[::-1],\n", "        labels[::-1],\n", "        loc=\"center left\",\n", "        bbox_to_anchor=(1, 0.5),\n", "        fontsize=\"medium\",\n", "        fancybox=True,\n", "        title=\"94% and 50% HDI\",\n", "    )\n", "    ax.set_title(\n", "        f\"Comparison of MCMC Samples and Analytical Solutions\\nFirst 5 Dimensions of {D} Dimensional Target Distribution\"\n", "    )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`plot_forest_compare_analytical_dim5` plots the MCMC results for the 5th dimension and compares them to the analytically calculated probability density for repeated runs in the bias check. " ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "tags": [ "hide-input" ] }, "outputs": [], "source": [ "def plot_forest_compare_analytical_dim5(results_df):\n", "    # extract the 5th dimension\n", "    summaries = []\n", "    truncated_traces = []\n", "    dimension_idx = 4\n", "    for row in results_df.index:\n", "        truncated_trace = results_df.Trace.loc[row].posterior.x[:, :, dimension_idx]\n", "        truncated_traces.append(truncated_trace)\n", "        summary = az.summary(truncated_trace)\n", "        summary[\"Sampler\"] = results_df.at[row, \"Sampler\"]\n", "        summaries.append(summary)\n", "    summaries = pd.concat(summaries)\n", "    cols = [\"Sampler\", \"mean\", \"sd\", \"hdi_3%\", \"hdi_97%\", \"ess_bulk\", \"ess_tail\", \"r_hat\"]\n", "    summary_means = summaries[cols].groupby(\"Sampler\").mean()\n", "\n", "    # scipy.stats.norm to calculate the analytical marginal distribution\n", "    dist = st.norm(0, np.sqrt(5))\n", "    summary_means.at[\"Analytical\", \"mean\"] = 0\n", "    summary_means.at[\"Analytical\", \"sd\"] = np.sqrt(5)\n", "    summary_means.at[\"Analytical\", \"hdi_3%\"] = dist.ppf(0.03)\n", "    summary_means.at[\"Analytical\", \"hdi_97%\"] = dist.ppf(0.97)\n", "\n", "    # plot forest\n", "    colors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"]\n", "    axes = az.plot_forest(\n", "        truncated_traces,\n", "        combined=True,\n", "        figsize=(8, 3),\n", "        colors=[colors[0]] * reps + [colors[1]] * reps + 
[colors[2]] * reps,\n", "        model_names=results_df.Sampler,\n", "    )\n", "    ax = axes[0]\n", "\n", "    # legend\n", "    labels = ax.get_legend().__dict__[\"texts\"]\n", "    labels = [label.__dict__[\"_text\"] for label in labels]\n", "    handles = ax.get_legend().__dict__[\"legendHandles\"]\n", "    labels = [labels[reps - 1]] + [labels[reps * 2 - 1]] + [labels[reps * 3 - 1]]\n", "    handles = [handles[reps - 1]] + [handles[reps * 2 - 1]] + [handles[reps * 3 - 1]]\n", "    ax.legend(\n", "        handles[::-1],\n", "        labels[::-1],\n", "        loc=\"center left\",\n", "        bbox_to_anchor=(1, 0.5),\n", "        fontsize=\"medium\",\n", "        fancybox=True,\n", "        title=\"94% and 50% HDI\",\n", "    )\n", "    ax.set_title(\n", "        f\"Comparison of MCMC Samples and Analytical Solutions\\n5th Dimension of {D} Dimensional Target Distribution\"\n", "    )\n", "\n", "    # plot analytical solution as vlines\n", "    ax.axvline(dist.ppf(0.03), color=\"black\", linestyle=\":\")\n", "    ax.axvline(dist.ppf(0.97), color=\"black\", linestyle=\":\")\n", "    ax.text(dist.ppf(0.97) + 0.1, 0, \"Analytical Solution\\n(Dotted)\", fontsize=8)\n", "    return summaries, summary_means" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiment #1. 10-Dimensional Target Distribution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All traces are sampled with `cores=1`. Surprisingly, sampling was slower using multiple cores than one core for both samplers for the same number of total samples.\n", "\n", "DEMetropolisZ and NUTS are sampled with four chains, and DEMetropolis is sampled with more, based on {cite:t}`terBraak2008differential`. DEMetropolis requires that, at a minimum, the number of chains $N$ be larger than the number of dimensions $D$. 
However, {cite:t}`terBraak2008differential` recommends $2D$ chains, so DEMetropolis is run here with $1D$, $2D$ and $3D$ chains for comparison." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<table>\n", "<caption>MCMC Runs for 10-Dimensional Experiment</caption>\n", "<thead><tr><th></th><th>sampler</th><th>tune</th><th>draws</th><th>chains</th><th>cores</th></tr></thead>\n", "<tbody>\n", "<tr><td>0</td><td>DEMetropolisZ</td><td>50000</td><td>50000</td><td>4</td><td>1</td></tr>\n", "<tr><td>1</td><td>DEMetropolis</td><td>20000</td><td>20000</td><td>10</td><td>1</td></tr>\n", "<tr><td>2</td><td>DEMetropolis</td><td>10000</td><td>10000</td><td>20</td><td>1</td></tr>\n", "<tr><td>3</td><td>DEMetropolis</td><td>6666</td><td>6666</td><td>30</td><td>1</td></tr>\n", "<tr><td>4</td><td>nuts</td><td>2000</td><td>2000</td><td>4</td><td>1</td></tr>\n", "</tbody>\n", "</table>
\n" ], "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# dimensions\n", "D = 10\n", "# total samples are constant for Metropolis algorithms\n", "total_samples = 200000\n", "samplers = [pm.DEMetropolisZ] + [pm.DEMetropolis] * 3 + [pm.NUTS]\n", "coreses = [1] * 5\n", "chainses = [4, 1 * D, 2 * D, 3 * D, 4]\n", "# calculate the number of tunes and draws for each run\n", "tunes = drawses = [int(total_samples / chains) for chains in chainses]\n", "# manually adjust NUTs, which needs fewer samples\n", "tunes[-1] = drawses[-1] = 2000\n", "# put it in a dataframe for display and QA/QC\n", "pd.DataFrame(\n", " dict(\n", " sampler=[s.name for s in samplers],\n", " tune=tunes,\n", " draws=drawses,\n", " chains=chainses,\n", " cores=coreses,\n", " )\n", ").style.set_caption(\"MCMC Runs for 10-Dimensional Experiment\")" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Sequential sampling (4 chains in 1 job)\n", "DEMetropolisZ: [x]\n", "Sampling 4 chains for 50_000 tune and 50_000 draw iterations (200_000 + 200_000 draws total) took 123 seconds.\n", "Population sampling (10 chains)\n", "DEMetropolis: [x]\n", "C:\\Users\\greg\\Documents\\CodingProjects_ongoing\\pymc\\pymc\\pymc\\sampling\\population.py:84: UserWarning: DEMetropolis should be used with more chains than dimensions! (The model has 10 dimensions.)\n", " warn_population_size(\n", "Chains are not parallelized. You can enable this by passing pm.sample(cores=n), where n > 1.\n", "Sampling 10 chains for 20_000 tune and 20_000 draw iterations (200_000 + 200_000 draws total) took 142 seconds.\n", "The rhat statistic is larger than 1.01 for some parameters. This indicates problems during sampling. See https://arxiv.org/abs/1903.08008 for details\n", "The effective sample size per chain is smaller than 100 for some parameters. 
A higher number is needed for reliable rhat and ess computation. See https://arxiv.org/abs/1903.08008 for details\n", "Population sampling (20 chains)\n", "DEMetropolis: [x]\n", "Chains are not parallelized. You can enable this by passing pm.sample(cores=n), where n > 1.\n", "Sampling 20 chains for 10_000 tune and 10_000 draw iterations (200_000 + 200_000 draws total) took 147 seconds.\n", "Population sampling (30 chains)\n", "DEMetropolis: [x]\n", "Chains are not parallelized. You can enable this by passing pm.sample(cores=n), where n > 1.\n", "Sampling 30 chains for 6_666 tune and 6_666 draw iterations (199_980 + 199_980 draws total) took 153 seconds.\n", "Auto-assigning NUTS sampler...\n", "Initializing NUTS using jitter+adapt_diag...\n", "Sequential sampling (4 chains in 1 job)\n", "NUTS: [x]\n", "Sampling 4 chains for 2_000 tune and 2_000 draw iterations (8_000 + 8_000 draws total) took 59 seconds.\n", "Chain \n", "array(0)\n", "Coordinates:\n", " chain int32 0 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n", "Chain \n", "array(1)\n", "Coordinates:\n", " chain int32 1 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n", "Chain \n", "array(2)\n", "Coordinates:\n", " chain int32 2 reached the maximum tree depth. Increase max_treedepth, increase target_accept or reparameterize.\n", "Chain \n", "array(3)\n", "Coordinates:\n", " chain int32 3 reached the maximum tree depth. 
Increase max_treedepth, increase target_accept or reparameterize.\n" ] } ], "source": [ "results = []\n", "run = 0\n", "for sampler, tune, draws, cores, chains in zip(samplers, tunes, drawses, coreses, chainses):\n", "    if sampler.name == \"nuts\":\n", "        results.append(\n", "            sample_model_calc_metrics(\n", "                sampler, D, tune, draws, cores=cores, chains=chains, run=run, step_kwargs={}\n", "            )\n", "        )\n", "    else:\n", "        results.append(\n", "            sample_model_calc_metrics(sampler, D, tune, draws, cores=cores, chains=chains, run=run)\n", "        )\n", "    run += 1" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
<table>\n", "<caption>Results of MCMC Sampling of 10-Dimensional Target Distribution</caption>\n", "<thead><tr><th></th><th>Sampler</th><th>D</th><th>Chains</th><th>Cores</th><th>tune</th><th>draws</th><th>ESS</th><th>Time_sec</th><th>ESSperSec</th><th>rhat</th><th>ESS_pct</th></tr></thead>\n", "<tbody>\n", "<tr><td>0</td><td>DEMetropolisZ</td><td>10</td><td>4</td><td>1</td><td>50000</td><td>50000</td><td>6296.48</td><td>127.65</td><td>49.33</td><td>1.00</td><td>3.15</td></tr>\n", "<tr><td>1</td><td>DEMetropolis</td><td>10</td><td>10</td><td>1</td><td>20000</td><td>20000</td><td>3492.28</td><td>147.46</td><td>23.68</td><td>1.00</td><td>1.75</td></tr>\n", "<tr><td>2</td><td>DEMetropolis</td><td>10</td><td>20</td><td>1</td><td>10000</td><td>10000</td><td>5537.93</td><td>156.31</td><td>35.43</td><td>1.00</td><td>2.77</td></tr>\n", "<tr><td>3</td><td>DEMetropolis</td><td>10</td><td>30</td><td>1</td><td>6666</td><td>6666</td><td>5657.90</td><td>166.25</td><td>34.03</td><td>1.01</td><td>2.83</td></tr>\n", "<tr><td>4</td><td>NUTS</td><td>10</td><td>4</td><td>1</td><td>2000</td><td>2000</td><td>7731.26</td><td>72.36</td><td>106.85</td><td>1.00</td><td>96.64</td></tr>\n", "</tbody>\n", "</table>