pymc.pytensorf.join_nonshared_inputs

pymc.pytensorf.join_nonshared_inputs(point, outputs, inputs, shared_inputs=None, make_inputs_shared=False)

Create new outputs and input TensorVariables where the non-shared inputs are joined into a single raveled vector input.

Parameters:
point : dict of {str : array_like}

Dictionary that maps each input variable name to a numerical value. The values are only used to extract the shape of each input variable, so that a correct mapping between the joined and original inputs can be established. The shape of each variable is assumed to be fixed.

outputs : list of TensorVariable

List of output TensorVariables whose non-shared inputs will be replaced by a joined vector input.

inputs : list of TensorVariable

List of input TensorVariables which will be replaced by a joined vector input.

shared_inputs : dict of {TensorVariable : TensorSharedVariable}, optional

Dict mapping TensorVariables to the TensorSharedVariables that replace them in the subgraph.

make_inputs_shared : bool, default False

Whether to make the joined vector input a shared variable; a sketch of this option follows the first example below.

Returns:
new_outputs : list of TensorVariable

List of new output TensorVariables that depend on joined_inputs and the new shared variables as inputs.

joined_inputs : TensorVariable

Joined input vector TensorVariable for the new_outputs.

Examples

Join the inputs of a simple PyTensor graph.

import pytensor.tensor as pt
import numpy as np

from pymc.pytensorf import join_nonshared_inputs

# Original non-shared inputs
x = pt.scalar("x")
y = pt.vector("y")
# Original output
out = x + y
print(out.eval({x: np.array(1), y: np.array([1, 2, 3])})) # [2. 3. 4.]

# New output and inputs
[new_out], joined_inputs = join_nonshared_inputs(
    point={ # Only shapes matter
        "x": np.zeros(()),
        "y": np.zeros(3),
    },
    outputs=[out],
    inputs=[x, y],
)
print(new_out.eval({
    joined_inputs: np.array([1, 1, 2, 3]),
})) # [2. 3. 4.]
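
As referenced in the make_inputs_shared parameter above, here is a minimal sketch of that option, reusing x, y, and out from the previous example. It assumes only the documented behavior: the joined input becomes a shared variable, so its value is set in place with set_value rather than passed to eval.

[new_out_shared], joined_shared = join_nonshared_inputs(
    point={ # Only shapes matter
        "x": np.zeros(()),
        "y": np.zeros(3),
    },
    outputs=[out],
    inputs=[x, y],
    make_inputs_shared=True, # joined input is now a shared variable
)
# Update the shared joined input in place, then evaluate without substitutions
joined_shared.set_value(np.array([1.0, 1.0, 2.0, 3.0]))
print(new_out_shared.eval()) # [2. 3. 4.]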

Join the input value variables of a model logp. Note that the value variable of sigma_pop is sigma_pop_log__, because PyMC maps bounded variables to an unconstrained space via automatic transforms.

import pymc as pm

with pm.Model() as model:
    mu_pop = pm.Normal("mu_pop")
    sigma_pop = pm.HalfNormal("sigma_pop")
    mu = pm.Normal("mu", mu_pop, sigma_pop, shape=(3, ))

    y = pm.Normal("y", mu, 1.0, observed=[0, 1, 2])

print(model.compile_logp()({
    "mu_pop": 0,
    "sigma_pop_log__": 1,
    "mu": [0, 1, 2],
})) # -12.691227342634292

initial_point = model.initial_point()
inputs = model.value_vars

[logp], joined_inputs = join_nonshared_inputs(
    point=initial_point,
    outputs=[model.logp()],
    inputs=inputs,
)

print(logp.eval({
    joined_inputs: [0, 1, 0, 1, 2],
})) # -12.691227342634292
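
A common reason to join the value variables is to obtain the model logp, and its gradient, as functions of a single flat vector, e.g. for custom samplers or optimizers. Below is a minimal sketch using pytensor.grad and pytensor.function; the name logp_dlogp_fn is illustrative.

import pytensor

# Gradient of the scalar logp with respect to the single joined vector
dlogp = pytensor.grad(logp, wrt=joined_inputs)
logp_dlogp_fn = pytensor.function([joined_inputs], [logp, dlogp])
# First output matches the logp value above: -12.691227342634292
print(logp_dlogp_fn(np.array([0.0, 1.0, 0.0, 1.0, 2.0])))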

Same as above, but with the mu_pop value variable being shared.

from pytensor import shared

mu_pop_input, *other_inputs = inputs
shared_mu_pop_input = shared(0.0)

[logp], other_joined_inputs = join_nonshared_inputs(
    point=initial_point,
    outputs=[model.logp()],
    inputs=other_inputs,
    shared_inputs={
        mu_pop_input: shared_mu_pop_input
    },
)

print(logp.eval({
    other_joined_inputs: [1, 0, 1, 2],
})) # -12.691227342634292
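
Because mu_pop now enters through a shared variable, its value can be updated in place without rebuilding the graph; a brief usage sketch:

# Change the shared value and re-evaluate the same graph
shared_mu_pop_input.set_value(1.0)
print(logp.eval({
    other_joined_inputs: [1, 0, 1, 2],
})) # logp now reflects mu_pop = 1.0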