In which cases is the cross-entropy preferred over the mean squared error?


Question

Although both of the above methods reward closer predictions with a better score, cross-entropy is still preferred. Is that true in every case, or are there particular scenarios where we prefer cross-entropy over MSE?

Solution

Cross-entropy is preferred for classification, while mean squared error is one of the best choices for regression. This follows directly from the statement of the problems themselves: in classification you work with a very particular set of possible output values, so MSE is badly defined (it does not encode this kind of knowledge and therefore penalizes errors in an incompatible way; the numerical sketch below illustrates this). To understand the phenomenon better, it helps to follow and understand the relations between:

  1. cross-entropy
  2. logistic regression (binary cross-entropy)
  3. linear regression (MSE)

You will notice that both can be seen as maximum likelihood estimators, simply with different assumptions about the dependent variable; the derivation below makes this explicit.
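To make the maximum-likelihood connection concrete, here is a short derivation (standard textbook material, added for illustration rather than taken from the original answer). Assuming a Bernoulli-distributed target gives binary cross-entropy as the negative log-likelihood, while assuming a Gaussian target with fixed variance gives MSE:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Classification: $y_i \in \{0, 1\}$ and the model outputs
$p_i = P(y_i = 1 \mid x_i)$. Under a Bernoulli likelihood,
\begin{align*}
\mathcal{L}(\theta) &= \prod_i p_i^{\,y_i} (1 - p_i)^{1 - y_i}, \\
-\log \mathcal{L}(\theta) &= -\sum_i \bigl[\, y_i \log p_i + (1 - y_i) \log(1 - p_i) \,\bigr],
\end{align*}
which is exactly the binary cross-entropy.

Regression: $y_i \in \mathbb{R}$ and the model outputs $\hat{y}_i$,
with $y_i \sim \mathcal{N}(\hat{y}_i, \sigma^2)$ for a fixed $\sigma^2$. Then
\begin{equation*}
-\log \mathcal{L}(\theta) = \frac{1}{2\sigma^2} \sum_i (y_i - \hat{y}_i)^2 + \text{const},
\end{equation*}
so minimizing the negative log-likelihood is equivalent to minimizing MSE.

\end{document}
```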
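And here is a minimal numerical sketch (plain NumPy; my own illustration, not code from the answer) of what "penalizes errors in an incompatible way" means for a sigmoid classifier: on a confidently wrong prediction, the MSE gradient with respect to the logit all but vanishes, while the cross-entropy gradient stays near 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# True label is 1, but the model is confidently wrong (large negative logit).
y = 1.0
z = -8.0           # pre-activation (logit)
p = sigmoid(z)     # predicted probability, close to 0

# Gradients of each loss with respect to the logit z:
#   cross-entropy: dL/dz = p - y                  (stays large while wrong)
#   MSE:           dL/dz = 2*(p - y)*p*(1 - p)    (vanishes as sigmoid saturates)
grad_ce = p - y
grad_mse = 2.0 * (p - y) * p * (1.0 - p)

print(f"p = {p:.6f}")                          # ~0.000335
print(f"cross-entropy grad: {grad_ce:.6f}")    # ~-0.999665
print(f"MSE grad:           {grad_mse:.6f}")   # ~-0.000670, nearly zero
```

So a model trained with MSE can stall on badly misclassified points, whereas cross-entropy keeps pushing them toward the correct class.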

