How do you estimate the performance of a classifier on test data?


Question

I'm using scikit to build a supervised classifier, and I'm currently tuning it to get good accuracy on the labeled data. But how do I estimate how well it does on the test data (which is unlabeled)?

Also, how do I find out if I'm starting to overfit the classifier?

Answer

You can't score your method on unlabeled data, because you need to know the right answers. To evaluate a method, you should split your training set into a (new) train split and a test split (via sklearn.cross_validation.train_test_split, for example). Then fit the model on the train split and score it on the test split. If you don't have much data and holding some of it out may negatively impact the algorithm's performance, use cross validation.
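A minimal sketch of that workflow, assuming a LogisticRegression classifier and synthetic placeholder data (neither is part of the original answer). Note that the answer predates scikit-learn 0.18, where sklearn.cross_validation was renamed to sklearn.model_selection; the modern import is used here:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.linear_model import LogisticRegression

    # Placeholder labeled data; substitute your own X (features) and y (labels).
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # Hold out 25% of the labeled data as a test split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )

    clf = LogisticRegression()
    clf.fit(X_train, y_train)                  # fit on the (new) train split
    print("test accuracy:", clf.score(X_test, y_test))

    # With little data, cross validation lets every sample serve in both
    # training and evaluation across the folds:
    scores = cross_val_score(LogisticRegression(), X, y, cv=5)
    print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))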

Since overfitting is the inability to generalize, a low test score is a good indicator of it.
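One way to see this in practice is to compare the score on the training split against the score on the held-out split: a training score far above the test score suggests the model is memorizing rather than generalizing. A hedged sketch, where DecisionTreeClassifier (a model that overfits easily when unpruned) and the synthetic data are illustrative choices, not part of the original answer:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An unpruned decision tree can memorize the training data,
    # which makes the train/test gap easy to see.
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = clf.score(X_train, y_train)
    test_acc = clf.score(X_test, y_test)
    print("train=%.2f test=%.2f gap=%.2f"
          % (train_acc, test_acc, train_acc - test_acc))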

For more theory and some other approaches, take a look at this article.

