Using a support vector classifier with polynomial kernel in scikit-learn

Problem description

I'm experimenting with different classifiers implemented in the scikit-learn package, to do some NLP task. The code I use to perform the classification is the following

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC, SVC

def train_classifier(self, argcands):
    # Extract the necessary features from the argument candidates
    train_argcands_feats = []
    train_argcands_target = []

    for argcand in argcands:
        train_argcands_feats.append(self.extract_features(argcand))
        train_argcands_target.append(argcand["info"]["label"])

    # Transform the features to the format required by the classifier
    self.feat_vectorizer = DictVectorizer()
    train_argcands_feats = self.feat_vectorizer.fit_transform(train_argcands_feats)

    # Transform the target labels to the format required by the classifier
    self.target_names = list(set(train_argcands_target))
    train_argcands_target = [self.target_names.index(target) for target in train_argcands_target]

    # Train the appropriate supervised model
    self.classifier = LinearSVC()
    #self.classifier = SVC(kernel="poly", degree=2)

    self.classifier.fit(train_argcands_feats, train_argcands_target)

def execute(self, argcands_test):
    # Extract features
    test_argcands_feats = [self.extract_features(argcand) for argcand in argcands_test]

    # Transform the features to the format required by the classifier
    test_argcands_feats = self.feat_vectorizer.transform(test_argcands_feats)

    # Classify the candidate arguments
    test_argcands_targets = self.classifier.predict(test_argcands_feats)

    # Get the correct label names
    test_argcands_labels = [self.target_names[int(label_index)] for label_index in test_argcands_targets]

    return zip(argcands_test, test_argcands_labels)
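
For reference, here is a self-contained toy version of the same pipeline (feature dicts -> DictVectorizer -> SVM). The feature names and labels below are invented for illustration; note also that scikit-learn accepts string labels directly, which makes the manual target_names index mapping above optional.

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Toy stand-ins for extract_features() output: one dict of
# feature-name -> value per argument candidate (contents invented).
train_feats = [
    {"head": "dog", "pos": "NN", "dist": 1},
    {"head": "runs", "pos": "VB", "dist": 2},
    {"head": "cat", "pos": "NN", "dist": 1},
    {"head": "eats", "pos": "VB", "dist": 3},
]
train_labels = ["ARG0", "V", "ARG0", "V"]

vec = DictVectorizer()
X_train = vec.fit_transform(train_feats)  # sparse feature matrix

clf = LinearSVC()
clf.fit(X_train, train_labels)  # string labels are handled directly

# The fitted vectorizer must be reused at prediction time.
X_test = vec.transform([{"head": "bird", "pos": "NN", "dist": 1}])
print(clf.predict(X_test))  # e.g. ['ARG0']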

As can be seen from the code, I'm testing two implementations of a Support Vector Machine classifier: LinearSVC and SVC with a polynomial kernel. Now, for my "problem": when using LinearSVC, the classification works with no problems and the test instances are tagged with a variety of labels. However, if I use the polynomial SVC, ALL test instances are tagged with the SAME label. I know that one possible explanation is simply that the polynomial SVC is not the appropriate classifier for my task, and that's fine. I just want to make sure that I'm using the polynomial SVC correctly.

Thanks for all the help/advice you could give me.

UPDATE: Following the recommendation given in the answers, I've changed the code that trains the classifier to the following:

# Train the appropriate supervised model
parameters = [{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['poly'], 'degree': [2]}]
self.classifier = GridSearchCV(SVC(C=1), parameters, score_func=f1_score)

Now I get the following error:

ValueError: The least populated class in y has only 1 members, which is too few. The minimum number of labels for any class cannot be less than k=3.

This has something to do with the uneven distribution of class instances in my training data, right? Or am I calling the procedure incorrectly?

Answer

In both cases you should tune the value of the regularization parameter C using grid search. You cannot compare the results otherwise, as a good value of C for one model might yield crappy results for the other.

For the polynomial kernel you can also grid search the optimal value for the degree (e.g. 2 or 3 or more): in that case you should grid search both C and degree at the same time.
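
As an illustration, here is a minimal sketch of such a joint grid search using the current scikit-learn API (GridSearchCV now lives in sklearn.model_selection, and the score_func argument used in the question was later replaced by scoring); the parameter ranges below are arbitrary examples, not recommended values:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Search C and degree (and gamma) jointly for the polynomial kernel.
param_grid = {
    "C": [0.1, 1, 10, 100, 1000],
    "degree": [2, 3],
    "gamma": [0.001, 0.0001],
}
search = GridSearchCV(SVC(kernel="poly"), param_grid,
                      scoring="f1_macro", cv=3)
# search.fit(X_train, y_train)  # X_train, y_train: your vectorized data
# print(search.best_params_)    # best combination found by CV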

EDIT:

This has something to do with the uneven distribution of class instances in my training data, right? Or am I calling the procedure incorrectly?

Check that you have at least 3 samples per class, so that StratifiedKFold cross-validation with k == 3 is possible (I think this is the default CV used by GridSearchCV for classification). If you have fewer, don't expect the model to predict anything useful. I would recommend at least 100 samples per class (as a somewhat arbitrary rule-of-thumb minimum, unless you are working on toy problems with fewer than 10 features and a lot of regularity in the decision boundaries between classes).
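
A quick pre-flight check for this (a sketch using collections.Counter; y stands for the label list built in the question):

from collections import Counter

def check_class_counts(y, min_per_class=3):
    """Report classes too rare for stratified k-fold CV with k == min_per_class."""
    counts = Counter(y)
    rare = {label: n for label, n in counts.items() if n < min_per_class}
    if rare:
        print("Classes with fewer than %d samples: %s" % (min_per_class, rare))
    return rare

# Example with made-up labels:
y = ["ARG0", "ARG0", "ARG1", "ARG1", "ARG1", "ARGM"]
check_class_counts(y)  # -> {'ARG0': 2, 'ARGM': 1}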

BTW, please always paste the complete traceback in questions / bug reports. Otherwise one might not have the necessary info to diagnose the right cause.
