How do I correctly implement a custom activity regularizer in Keras?
Question
I am trying to implement sparse autoencoders according to Andrew Ng's lecture notes, as shown here. It requires that a sparsity constraint be applied on an autoencoder layer by introducing a penalty term (K-L divergence). I tried to implement this following the directions provided here, with some minor changes. This is the K-L divergence and the sparsity penalty term, implemented by the SparseActivityRegularizer class shown below.
def kl_divergence(p, p_hat):
    return (p * K.log(p / p_hat)) + ((1 - p) * K.log((1 - p) / (1 - p_hat)))

class SparseActivityRegularizer(Regularizer):
    sparsityBeta = None

    def __init__(self, l1=0., l2=0., p=-0.9, sparsityBeta=0.1):
        self.p = p
        self.sparsityBeta = sparsityBeta

    def set_layer(self, layer):
        self.layer = layer

    def __call__(self, loss):
        # p_hat needs to be the average activation of the units in the hidden layer.
        p_hat = T.sum(T.mean(self.layer.get_output(True), axis=0))
        loss += self.sparsityBeta * kl_divergence(self.p, p_hat)
        return loss

    def get_config(self):
        return {"name": self.__class__.__name__,
                "p": self.l1}
The model is built as follows:
X_train = np.load('X_train.npy')
X_test = np.load('X_test.npy')

autoencoder = Sequential()
encoder = containers.Sequential([Dense(250, input_dim=576, init='glorot_uniform', activation='tanh',
    activity_regularizer=SparseActivityRegularizer(p=-0.9, sparsityBeta=0.1))])
decoder = containers.Sequential([Dense(576, input_dim=250)])
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True))
autoencoder.layers[0].build()
autoencoder.compile(loss='mse', optimizer=SGD(lr=0.001, momentum=0.9, nesterov=True))

loss = autoencoder.fit(X_train, X_train, nb_epoch=200, batch_size=800, verbose=True,
                       show_accuracy=True, validation_split=0.3)

autoencoder.save_weights('SparseAutoEncoder.h5', overwrite=True)
result = autoencoder.predict(X_test)
When I call the fit() function I get negative loss values and the output does not resemble the input at all. I want to know where I am going wrong. What is the correct way to calculate the average activation of a layer and use this custom sparsity regularizer? Any sort of help will be greatly appreciated. Thanks!
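For intuition, here is how the per-unit average activation and the resulting K-L penalty could be computed in plain NumPy (the activation values below are made up for illustration). The key point is that the batch mean is taken per hidden unit, so each unit gets its own average activation, rather than collapsing everything into one scalar:

```python
import numpy as np

# Hypothetical activations for a batch of 4 samples and 3 hidden units,
# already squashed into (0, 1) (e.g. by a sigmoid).
activations = np.array([
    [0.1, 0.9, 0.5],
    [0.2, 0.8, 0.5],
    [0.1, 0.7, 0.5],
    [0.2, 0.6, 0.5],
])

rho = 0.05  # desired average activation (a small positive probability)

# Average each unit's activation over the batch (axis=0);
# rho_hat has one entry per hidden unit.
rho_hat = activations.mean(axis=0)  # array([0.15, 0.75, 0.5])

# Element-wise K-L divergence between Bernoulli(rho) and Bernoulli(rho_hat)
kl = rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

# A single penalty value: sum over the hidden units
penalty = kl.sum()
```

Each term of `kl` is non-negative and is zero only when a unit's average activation exactly matches `rho`, so the penalty pushes the average activations toward the sparsity target.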
I am using Keras 0.3.1 with Python 2.7 as the latest Keras (1.0.1) build does not have the Autoencoder layer.
Answer
I corrected a few bugs: rho must be a small positive target activation (the original p=-0.9 is not a valid Bernoulli probability, which is why the K-L term, and hence the loss, can go negative), and the average activation must be computed per hidden unit before taking the K-L divergence, not summed into a single scalar first:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K

class SparseRegularizer(keras.regularizers.Regularizer):

    def __init__(self, rho=0.01, beta=1):
        """
        rho  : desired average activation of the hidden units
        beta : weight of the sparsity penalty term
        """
        self.rho = rho
        self.beta = beta

    def __call__(self, activation):
        rho = self.rho
        beta = self.beta
        # sigmoid because we need the probability distributions
        activation = tf.nn.sigmoid(activation)
        # average each unit's activation over the batch samples
        rho_bar = K.mean(activation, axis=0)
        # avoid division by 0
        rho_bar = K.maximum(rho_bar, 1e-10)
        KLs = rho * K.log(rho / rho_bar) + (1 - rho) * K.log((1 - rho) / (1 - rho_bar))
        return beta * K.sum(KLs)  # sum over the layer's units

    def get_config(self):
        return {
            'rho': self.rho,
            'beta': self.beta
        }