Training Loss and Validation Loss in Deep Learning

This article looks at training loss and validation loss in deep learning and how to interpret the relationship between them; it should be a useful reference for anyone facing the same question.

Problem Description

Would you please guide me on how to interpret the following results?

1) loss < validation_loss
2) loss > validation_loss

It seems that the training loss should always be less than the validation loss. However, both of these cases occur when training a model.

Solution

This is really a fundamental question in machine learning.

If validation loss >> training loss, you can call it overfitting.
If validation loss  > training loss, you can call it some overfitting.
If validation loss  < training loss, you can call it some underfitting.
If validation loss << training loss, you can call it underfitting.

Your aim is to make the validation loss as low as possible. Some overfitting is nearly always a good thing. All that matters in the end is whether the validation loss is as low as you can get it.

This often occurs when the training loss is quite a bit lower than the validation loss.
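
To make the rule of thumb above concrete, here is a toy Python sketch that labels the gap between training and validation loss for each epoch. The 2x cutoff standing in for ">>" and "<<", and the loss values in the example, are arbitrary assumptions made for illustration, not something from the original answer.

    def diagnose(train_loss, val_loss, big_gap=2.0):
        # big_gap is an arbitrary ratio chosen to mean ">>" / "<<".
        if val_loss > big_gap * train_loss:
            return "overfitting (validation loss >> training loss)"
        if val_loss > train_loss:
            return "some overfitting (validation loss > training loss)"
        if train_loss > big_gap * val_loss:
            return "underfitting (validation loss << training loss)"
        if val_loss < train_loss:
            return "some underfitting (validation loss < training loss)"
        return "balanced (validation loss ~= training loss)"

    # Made-up per-epoch losses from a hypothetical training run.
    for train_loss, val_loss in [(0.90, 0.60), (0.40, 0.55), (0.15, 0.50)]:
        print(f"train={train_loss:.2f} val={val_loss:.2f} -> "
              f"{diagnose(train_loss, val_loss)}")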

Also check how to prevent overfitting.
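
One common way to prevent overfitting, for instance, is early stopping: stop training once the validation loss has not improved for a set number of epochs. Below is a minimal, framework-agnostic Python sketch; the per-epoch loss values are made up for illustration, and in a real run they would come from your own training and validation steps.

    def early_stopping_fit(losses_per_epoch, patience=3):
        # losses_per_epoch: iterable of (train_loss, val_loss) pairs,
        # one per epoch. Stop after `patience` epochs without improvement.
        best_val = float("inf")
        bad_epochs = 0
        for epoch, (train_loss, val_loss) in enumerate(losses_per_epoch, start=1):
            print(f"epoch {epoch}: train={train_loss:.2f} val={val_loss:.2f}")
            if val_loss < best_val:
                best_val = val_loss
                bad_epochs = 0  # in practice, also checkpoint the weights here
            else:
                bad_epochs += 1
                if bad_epochs >= patience:
                    print(f"no improvement for {patience} epochs; stopping early")
                    break
        return best_val

    # Made-up run in which the validation loss bottoms out and then rises
    # while the training loss keeps falling (the "some overfitting" regime).
    history = [(0.90, 0.95), (0.50, 0.60), (0.30, 0.45),
               (0.20, 0.47), (0.10, 0.55), (0.05, 0.70)]
    print(f"best validation loss: {early_stopping_fit(history):.2f}")

Most deep learning frameworks ship this as a ready-made utility, e.g. the EarlyStopping callback in Keras.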

This concludes this article on training loss and validation loss in deep learning. We hope the answer above is helpful, and we hope you will continue to support IT屋!
