Marginalizing a factor in TensorFlow Probability


I have a joint probability distribution that is defined like this:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def model():
    s1 = yield tfd.JointDistributionCoroutine.Root(
        tfd.Normal(3, 1, name='s1'))
    s2 = yield tfd.JointDistributionCoroutine.Root(
        tfd.Normal(0, 10, name='s2'))
    c1 = yield tfd.Normal(s1 + s2, 1, name='c1')
    c2 = yield tfd.Normal(s1 - s2, 2, name='c2')
    f = yield tfd.Deterministic(tf.math.maximum(c1, c2), name='f')
joint = tfd.JointDistributionCoroutine(model)

Now I want to marginalize it over the factor s2, but I'm not finding a good way to do it. I found this in the documentation, but I didn't understand how I would go about using that function. Any idea how I could do such a thing?


In short, there is no automatic solution in TFP. Marginalization is hard in general (sometimes intractable), and we have not invested much effort into automating it in the cases where it is possible in principle. For this example you can probably do it by hand, which is likely the best approach. In cases where a closed form isn't available, some sort of Monte Carlo approach is the next best thing. IIRC, the module you linked to is specifically about marginalizing discrete variables, which can in some cases be done cleverly while avoiding combinatorial explosions.

Answered by Chris Suter

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
