How to regularize the loss function?


Question


I'm learning TensorFlow and I'm having some trouble understanding how to regularize the cost function. I've looked around and found a lot of different answers. Could someone please tell me how to regularize the cost function?


I took Andrew Ng's machine learning course on Coursera, and one thing seems to be different when I look on forums. It seems like most people regularize each weight as well as regularizing the final cost function, whereas the course makes no mention of that. Which one is correct?

Answer


In TensorFlow, L2 (Tikhonov) regularization with regularization parameter `lambda_` could be written like this:

# Assuming you have defined a graph, placeholders `y`, and a `logits` layer.
# Using cross-entropy loss:
lambda_ = 0.1  # regularization strength
xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
data_loss = tf.reduce_mean(xentropy)
# One L2 penalty per trainable variable, summed into a single scalar penalty:
l2_norms = [tf.nn.l2_loss(v) for v in tf.trainable_variables()]
l2_norm = tf.reduce_sum(l2_norms)
cost = data_loss + lambda_ * l2_norm
# From here, define the optimizer, train operation, and train ... :-)
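Regarding the question of "regularizing each weight" versus "regularizing the final cost function": the two are the same thing, because the L2 penalty decomposes into one term per variable. A minimal NumPy sketch (not TensorFlow; the variable values here are made up for illustration) showing the equivalence:

```python
import numpy as np

lambda_ = 0.1

def l2_loss(v):
    # Same convention as tf.nn.l2_loss: sum(v ** 2) / 2
    return np.sum(v ** 2) / 2.0

# Hypothetical trainable variables and an unregularized data loss.
weights = [np.array([1.0, -2.0]), np.array([[0.5, 0.5]])]
data_loss = 0.8

# Option A: one penalty per weight tensor, summed into the final cost.
per_weight = [l2_loss(w) for w in weights]
cost_a = data_loss + lambda_ * sum(per_weight)

# Option B: a single penalty on all weights concatenated together.
flat = np.concatenate([w.ravel() for w in weights])
cost_b = data_loss + lambda_ * l2_loss(flat)

print(np.isclose(cost_a, cost_b))  # True: the two formulations agree
```

So the course's single penalty term and the forum posts' per-weight penalties are the same cost function, just written at different granularities.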

