Tensorflow LSTM Regularization


Problem description


I was wondering how one can implement L1 or L2 regularization within an LSTM in TensorFlow? TF doesn't give you access to the internal weights of the LSTM, so I'm not certain how one can calculate the norms and add them to the loss. My loss function is just RMS for now.

The answers here do not seem sufficient.

Recommended answer


TL;DR: Keep all the parameters in a list, and add their L^n norm to the objective function before computing the gradients for optimisation.


1) In the function where you define the inference

net = [v for v in tf.trainable_variables()]  # collects every trainable variable, including the LSTM's internal weights
return *, net  # '*' stands in for your model's original return values


2) Add the L^n norm to the cost and compute the gradients from that cost

weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net])  # L2 penalty; note tf.nn.l2_loss(t) = sum(t ** 2) / 2

cost = <your original objective, without the regulariser> + weight_reg

param_gradients = tf.gradients(cost, net)

optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(zip(param_gradients, net))
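One detail worth noting: `tf.nn.l2_loss(t)` computes `sum(t ** 2) / 2`, not the plain squared norm (the factor of ½ makes its gradient simply `t`), so the effective coefficient on `sum(t ** 2)` above is half of 0.001. A quick numpy re-implementation checks that convention (the array `w` is just example data):

```python
import numpy as np

def l2_loss(t):
    # numpy equivalent of tf.nn.l2_loss: sum(t ** 2) / 2
    return np.sum(np.square(t)) / 2.0

w = np.array([3.0, 4.0])
penalty = 0.001 * l2_loss(w)  # 0.001 * (9 + 16) / 2 = 0.0125
```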

3) Run the training op whenever needed via

_ = sess.run(optimiser, feed_dict={input_var: data})
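The mechanism behind the three steps can be sketched without TensorFlow: adding λ·‖w‖²/2 to the cost adds λ·w to every gradient, which pulls the weights toward zero (weight decay). The toy model and data below are purely illustrative, not part of the answer:

```python
import numpy as np

def fit(x, y, l2_coef, lr=0.1, steps=200):
    """Fit y = w * x by gradient descent on MSE + l2_coef * (w ** 2) / 2."""
    w = 0.0
    for _ in range(steps):
        pred = w * x
        # gradient of the MSE term, plus l2_coef * w from the L2 penalty
        grad = np.mean(2 * (pred - y) * x) + l2_coef * w
        w -= lr * grad
    return w

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x
w_plain = fit(x, y, l2_coef=0.0)  # converges near the true weight 2.0
w_reg = fit(x, y, l2_coef=1.0)    # the penalty shrinks the learned weight below 2.0
```

The same shrinkage happens to the LSTM weights gathered in `net` above, just with many parameters instead of one scalar.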

