What is the best way to implement weight constraints in TensorFlow?

Problem description

Suppose we have the weights:

import numpy as np
import tensorflow as tf
x = tf.Variable(np.random.random((5, 10)))
cost = ...

And we use the GD optimizer:

upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)

How can we implement for example non-negativity on weights?

I tried clipping them:

upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
session.run(upds)
session.run(tf.assign(x, tf.clip_by_value(x, 0, np.infty)))

But this slows down my training by a factor of 50.
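The slowdown is most likely because, in TF 1.x graph mode, each call to tf.clip_by_value and tf.assign adds new ops to the graph, so building them inside the training loop makes the graph grow on every step. A minimal sketch of the usual fix — build the clip op once and reuse it (assuming x, cost, lr, and session as above; num_steps is a placeholder):

upds = tf.train.GradientDescentOptimizer(lr).minimize(cost)
# Build the projection op once, outside the training loop
clip_op = tf.assign(x, tf.clip_by_value(x, 0, np.infty))

for step in range(num_steps):  # num_steps is a placeholder
    session.run(upds)     # gradient step
    session.run(clip_op)  # project x back onto the non-negative orthant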

Does anybody know a good way to implement such constraints on the weights in TensorFlow?

P.S.: in the equivalent Theano algorithm, I had

T.clip(x, 0, np.infty)

and it ran smoothly.

Recommended answer

You can take the Lagrangian approach and simply add a penalty for features of the variable you don't want.

For example, to encourage theta to be non-negative, you could add the following to the optimizer's objective function:

    # Positive penalty when any element of theta is negative, zero otherwise
    added_loss = -tf.minimum(tf.reduce_min(theta), 0)

If any theta are negative, then added_loss will be positive; otherwise it is zero. Scaling that to a meaningful value is left as an exercise to the reader: scaling too little will not exert enough pressure, while too much may make things unstable.
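For concreteness, a minimal sketch of wiring the scaled penalty into the objective (penalty_scale is a hypothetical hyperparameter, not from the original answer; tune it for your problem):

penalty_scale = 10.0  # hypothetical value; tune for your problem
# Penalized objective: original cost plus the non-negativity penalty
total_cost = cost + penalty_scale * added_loss
upds = tf.train.GradientDescentOptimizer(lr).minimize(total_cost)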
