How do I correctly implement a custom activity regularizer in Keras?

Problem description

I am trying to implement sparse autoencoders according to Andrew Ng's lecture notes, as shown here. It requires that a sparsity constraint be applied to an autoencoder layer by introducing a penalty term (the K-L divergence). I tried to implement this following the directions provided here, with some minor changes. Here are the K-L divergence and the sparsity penalty term, implemented by the SparseActivityRegularizer class shown below.

def kl_divergence(p, p_hat):
    return (p * K.log(p / p_hat)) + ((1 - p) * K.log((1 - p) / (1 - p_hat)))

class SparseActivityRegularizer(Regularizer):
    sparsityBeta = None

    def __init__(self, l1=0., l2=0., p=-0.9, sparsityBeta=0.1):
        self.p = p
        self.sparsityBeta = sparsityBeta

    def set_layer(self, layer):
        self.layer = layer

    def __call__(self, loss):
        # p_hat needs to be the average activation of the units in the hidden layer.
        p_hat = T.sum(T.mean(self.layer.get_output(True), axis=0))

        loss += self.sparsityBeta * kl_divergence(self.p, p_hat)
        return loss

    def get_config(self):
        return {"name": self.__class__.__name__,
                "p": self.p}
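For context, here is a small NumPy sketch (with made-up toy activations, not taken from the question) of what the `T.sum(T.mean(..., axis=0))` expression above computes: the per-unit average activation over the batch, collapsed into a single scalar. Note that Ng's lecture notes instead keep one average activation per hidden unit and sum the per-unit K-L terms.

```python
import numpy as np

# Toy batch of hidden-layer activations: 4 samples x 3 hidden units
activations = np.array([[0.1, 0.9, 0.0],
                        [0.3, 0.7, 0.2],
                        [0.2, 0.8, 0.1],
                        [0.0, 1.0, 0.3]])

# Average activation of each hidden unit over the batch (axis=0)
p_hat_per_unit = activations.mean(axis=0)  # array([0.15, 0.85, 0.15])

# The regularizer above then sums these into one scalar
p_hat_scalar = p_hat_per_unit.sum()        # 1.15
```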

The model is built like this:

X_train = np.load('X_train.npy')
X_test = np.load('X_test.npy')

autoencoder = Sequential()
encoder = containers.Sequential([Dense(250, input_dim=576, init='glorot_uniform', activation='tanh', 
    activity_regularizer=SparseActivityRegularizer(p=-0.9, sparsityBeta=0.1))])

decoder = containers.Sequential([Dense(576, input_dim=250)])
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=True))
autoencoder.layers[0].build()
autoencoder.compile(loss='mse', optimizer=SGD(lr=0.001, momentum=0.9, nesterov=True))
loss = autoencoder.fit(X_train_tmp, X_train_tmp, nb_epoch=200, batch_size=800, verbose=True, show_accuracy=True, validation_split = 0.3)
autoencoder.save_weights('SparseAutoEncoder.h5',overwrite = True)
result = autoencoder.predict(X_test)

When I call the fit() function I get negative loss values and the output does not resemble the input at all. I want to know where I am going wrong. What is the correct way to calculate the average activation of a layer and use this custom sparsity regularizer? Any sort of help will be greatly appreciated. Thanks!

I am using Keras 0.3.1 with Python 2.7 as the latest Keras (1.0.1) build does not have the Autoencoder layer.

Answer

You have defined self.p = -0.9 instead of the 0.05 value that both the original poster and the lecture notes you referred to are using.
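A quick way to see why the value of p matters: with a valid target such as p = 0.05, the K-L penalty is zero when the average activation matches the target and positive otherwise, whereas a negative p puts a negative number inside the logarithm. A minimal NumPy check (a standalone re-implementation of the kl_divergence formula from the question):

```python
import numpy as np

def kl_divergence(p, p_hat):
    # Same formula as in the question, using numpy instead of Keras backend ops
    return p * np.log(p / p_hat) + (1 - p) * np.log((1 - p) / (1 - p_hat))

p = 0.05
print(kl_divergence(p, 0.05))  # 0.0  (penalty vanishes when p_hat == p)
print(kl_divergence(p, 0.20))  # ~0.094, positive for any mismatch

# With p = -0.9, log(p / p_hat) takes the log of a negative number,
# which evaluates to NaN and corrupts the loss.
print(kl_divergence(-0.9, 0.20))
```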
