how to calculate log posterior of a GP over a trace in pymc3
Question
Suppose I have an observation y_0 at X_0 which I'd like to model with a Gaussian process with hyperparameters theta. Suppose I then determine a distribution in the hyperparameters theta by hierarchically sampling the posterior.
Now, I'd like to evaluate the log posterior probability of another observation, say y_1 at X_1, averaged over the hyperparameter distribution:

E_theta [ log P(y_1 | y_0, X_0, X_1, theta) ]

Ideally, I'd draw from the posterior in theta, calculate log P(y_1 | y_0, X_0, X_1, theta), and then take the geometric mean.
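As a side note on that last step: averaging the per-draw log-probabilities and then exponentiating gives exactly the geometric mean of the probabilities, so the two views are equivalent. A minimal numpy sketch (the log-probability values are made up for illustration):

```python
import numpy as np

# Hypothetical log P(y_1 | y_0, X_0, X_1, theta_i), one value per
# posterior draw theta_i (made-up numbers, for illustration only)
log_p = np.array([-3.2, -2.9, -3.5, -3.1])

# Monte Carlo estimate of E_theta[ log P(y_1 | y_0, X_0, X_1, theta) ]
avg_log_p = log_p.mean()

# exp of the average log equals the geometric mean of the probabilities
geometric_mean = np.exp(avg_log_p)
assert np.isclose(geometric_mean, np.exp(log_p).prod() ** (1.0 / log_p.size))
```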
Suppose I have the result of a sampling, i.e. a trace, for example:
```python
with pm.Model() as model:
    ...
    trace = pm.sample(1000)
```
How do I evaluate another tensor over these samples (or a subset of them)? I can only find a partial solution using pm.Deterministic defined as part of the model:
```python
with model:
    h = pm.Deterministic('h', thing_to_calculate_over_the_posterior)
    # Make sure nothing depends on h, so it doesn't affect the sampling
    trace = pm.sample(1000)

h_sampled = trace['h']
```
This doesn't feel quite right. I feel like you should be able to evaluate anything over a subset of the trace after the samples have been drawn.
In pymc3, is there a method in pm.gp that will create the tensor representing log P(y_1 | y_0, X_0, X_1, theta), or must I create this myself? The latter entails writing out the posterior mean and covariance (something already done inside pm.gp) and then calling the Cholesky decomposition, etc.
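For context on what the do-it-yourself route involves: for fixed hyperparameters, the conditional density is a multivariate normal whose mean and covariance follow from the standard GP posterior-predictive formulas. A rough numpy/scipy sketch, assuming an ExpQuad kernel and i.i.d. Gaussian noise (the function names and parameters here are illustrative, not part of pm.gp):

```python
import numpy as np
from scipy.stats import multivariate_normal

def expquad(A, B, ls=1.0):
    """Squared-exponential kernel on 1-D inputs (analogue of pm.gp.cov.ExpQuad)."""
    d = A[:, None, 0] - B[None, :, 0]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_conditional_logp(X0, y0, X1, y1, ls=1.0, noise=0.1):
    """log P(y1 | y0, X0, X1) for fixed hyperparameters (illustrative helper)."""
    K00 = expquad(X0, X0, ls) + noise**2 * np.eye(len(X0))
    K01 = expquad(X0, X1, ls)
    K11 = expquad(X1, X1, ls) + noise**2 * np.eye(len(X1))
    # Posterior predictive mean and covariance at X1
    sol = np.linalg.solve(K00, K01)          # K00^{-1} K01
    mu = sol.T @ y0
    cov = K11 - K01.T @ sol
    return multivariate_normal(mu, cov, allow_singular=True).logpdf(y1)

# Tiny usage example with made-up data
X0 = np.linspace(0, 1, 5)[:, None]
y0 = np.sin(3 * X0).squeeze()
X1 = np.linspace(0, 1, 3)[:, None]
y1 = np.sin(3 * X1).squeeze()
print(gp_conditional_logp(X0, y0, X1, y1))  # a finite scalar log-density
```

Averaging this quantity over posterior draws of the hyperparameters would then give the expectation from the question; the accepted answer below shows how to get pymc3 to do this bookkeeping for you.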
Answer
I'll answer with an example for this use case. Essentially, separate the sampling from the conditional definition, then use method 1 or method 2 below.
```python
import numpy as np
import pymc3 as pm

# Data generation
X0 = np.linspace(0, 10, 100)[:, None]
y0 = X0**(0.5) + np.exp(-X0/5)*np.sin(10*X0)
y0 += 0.1*np.random.normal(size=y0.shape)
y0 = np.squeeze(y0)

# y1
X1 = np.linspace(0, 15, 200)[:, None]
y1 = X1**(0.5) + np.exp(-X1/6)*np.sin(8*X1)
y1 = np.squeeze(y1)

# 'Solve' the inference problem
with pm.Model() as model:
    l = pm.HalfNormal('l', 5.)
    cov_func = pm.gp.cov.ExpQuad(1, ls=l)
    gp = pm.gp.Marginal(cov_func=cov_func)
    y0_ = gp.marginal_likelihood('y0', X0, y0, 0.1)
    trace = pm.sample(100)

# Define the object P(y1 | X1, y0, X0)
with model:
    y1_ = gp.conditional('y1', X1, given={'X': X0, 'y': y0, 'noise': 0.1})
    # Note: given=... is not strictly required, as it is cached from above

###
# Method 1
logp = y1_.logp
logp_vals1 = []
for point in trace:
    point['y1'] = y1
    logp_vals1.append(logp(point))
    # Note: this is approximately 100x faster than
    # logp_vals1.append(y1_.logp(point)), because logp is a
    # property with a lot of overhead

###
# Method 2
import theano
y1_shr = theano.shared(y1)
with model:
    logp = pm.Deterministic('logp', y1_.distribution.logp(y1_shr))
logp_val2 = [pm.distributions.draw_values([logp], point) for point in trace]
```
Method 1 appears to be 2-3 times faster on my machine.