How to perform undersampling (the right way) with python scikit-learn?

Problem description

I am attempting to undersample the majority class using python scikit-learn. Currently my code looks up the N of the minority class and then undersamples exactly N rows from the majority class. As a result, both the test and the training data have this 1:1 distribution. But what I really want is the 1:1 distribution in the training data ONLY, while testing on the original distribution in the test data.

I am not quite sure how to do the latter as there is some dict vectorization in between, which makes it confusing to me.
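(For context on the dict vectorization step: the usual pattern is to fit the vectorizer on the training dicts only and then reuse it on the test dicts, so both sides share the same feature columns. A minimal sketch with made-up feature dicts; names are illustrative only:)

```python
# Minimal sketch of DictVectorizer across a train/test boundary:
# fit_transform on the training dicts, transform (only) on the test dicts.
from sklearn.feature_extraction import DictVectorizer

train_dicts = [{'location': 'NY', 'f2': 1.0}, {'location': 'LA', 'f2': 2.0}]
test_dicts = [{'location': 'NY', 'f2': 3.0}]

dv = DictVectorizer(sparse=False)
X_train = dv.fit_transform(train_dicts)  # learns the feature mapping
X_test = dv.transform(test_dicts)        # reuses it, no refitting

print(X_train.shape, X_test.shape)       # same number of columns
```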

# Perform undersampling majority group
minorityN = len(df[df.ethnicity_scan == 1]) # get the total count of low-frequency group
minority_indices = df[df.ethnicity_scan == 1].index
minority_sample = df.loc[minority_indices]

majority_indices = df[df.ethnicity_scan == 0].index
random_indices = np.random.choice(majority_indices, minorityN, replace=False) # use the low-frequency group count to randomly sample from high-frequency group
majority_sample = df.loc[random_indices] # was data.loc, but the DataFrame is named df

merged_sample = pd.concat([minority_sample, majority_sample], ignore_index=True) # merging all the low-frequency group sample and the new (randomly selected) high-frequency sample together
df = merged_sample
print 'Total N after undersampling:', len(df)

# Declaring variables
X = df.raw_f1.values
X2 = df.f2.values
X3 = df.f3.values
X4 = df.f4.values
y = df.outcome.values

# Codes skipped ....
def feature_noNeighborLoc(locString):
    pass
my_dict16 = [{'location': feature_noNeighborLoc(feature_full_name(i))} for i in X4]
# Codes skipped ....

# Dict vectorization
all_dict = []
for i in range(0, len(my_dict)):
    temp_dict = dict(
        my_dict[i].items() + my_dict2[i].items() + my_dict3[i].items() + my_dict4[i].items()
        + my_dict5[i].items() + my_dict6[i].items() + my_dict7[i].items() + my_dict8[i].items()
        + my_dict9[i].items() + my_dict10[i].items()
        + my_dict11[i].items() + my_dict12[i].items() + my_dict13[i].items() + my_dict14[i].items()
        + my_dict19[i].items()
        + my_dict16[i].items() # location feature
        )
    all_dict.append(temp_dict) # must be indented inside the loop, once per sample

newX = dv.fit_transform(all_dict)

X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)

# Fitting X and y into model, using training data
classifierUsed2.fit(X_train, y_train)

# Making predictions using trained data
y_train_predictions = classifierUsed2.predict(X_train)
y_test_predictions = classifierUsed2.predict(X_test)
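(One way to get what the question asks for is to split first and undersample only the training rows afterwards, so the test set keeps the original distribution. A minimal sketch on synthetic data; the variable names and class proportions are made up, and `train_test_split` lives in `sklearn.model_selection` in current scikit-learn versions:)

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                  # stand-in feature matrix
y = np.array([0] * 80 + [1] * 20)     # imbalanced labels, 4:1

# 1. Split BEFORE balancing; stratify keeps the original class ratio in the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# 2. Undersample the majority class inside the training set only.
minority_idx = np.where(y_train == 1)[0]
majority_idx = np.where(y_train == 0)[0]
keep_majority = rng.choice(majority_idx, len(minority_idx), replace=False)
train_idx = np.concatenate([minority_idx, keep_majority])
X_train_bal, y_train_bal = X_train[train_idx], y_train[train_idx]

print(np.bincount(y_train_bal))       # balanced 1:1
print(np.bincount(y_test))            # original imbalanced distribution
```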

Answer

You want to subsample the training samples of one of your categories because you want a classifier that treats all the labels the same.

If you want that effect without subsampling, you can instead set the classifier's 'class_weight' parameter to 'balanced' ('auto' for some older classifiers), which achieves the same thing.

You can read the documentation of the LogisticRegression classifier as an example; notice the description of the 'class_weight' parameter there.

By changing that parameter to 'balanced' you won't need to do the subsampling anymore.
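(A minimal sketch of this approach with LogisticRegression on synthetic imbalanced data; the data and shapes here are made up for illustration:)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
# 90 majority samples around 0, 10 minority samples around 2 -> 9:1 imbalance
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)

# 'balanced' reweights each class inversely to its frequency, so the
# minority class counts as much as the majority during fitting;
# no manual undersampling of the training data is needed.
clf = LogisticRegression(class_weight='balanced')
clf.fit(X, y)
print(clf.predict(X[:5]))
```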
