Best model for variable selection with big data?


Problem description

I posted a question earlier about some code, but now I realize I should be broader about the general idea. Basically, I'm trying to build a statistical model with about 1000 observations and 2000 variables. I would like to determine which variables most influence my dependent variable, with high significance. I don't plan to use the model for prediction, just for variable selection. My independent variables are binary and my dependent variable is continuous. I've tried multiple linear regression and fixed models with tools such as statsmodels and scikit-learn. However, I have encountered issues such as having more variables than observations. I would prefer to solve the problem in Python, since I have basic knowledge of it. However, statistics is very new to me, so I don't know the best direction. Any help is appreciated.

Tree method

import pandas as pd
from sklearn import tree
from sklearn import preprocessing

data = pd.read_excel('data_file.xlsx')

# Last column is the dependent variable; the rest are the features.
y = data.iloc[:, -1]
X = data.iloc[:, :-1]

# Encode the continuous target as class labels (note: this treats
# every distinct value of y as its own class).
le = preprocessing.LabelEncoder()
y = le.fit_transform(y)

clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)

tree.export_graphviz(clf, out_file='tree.dot')

Or, if I output to a text file, the first few lines are:

digraph Tree {
node [shape=box] ;
0 [label="X[685] <= 0.5\ngini = 0.995\nsamples = 1097\nvalue = [2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1\n1, 1, 1, 8, 1, 1, 3, 1, 2, 1, 1, 1, 2, 1\n1, 1, 1, 2, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 4, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\n1, 3, 2, 2, 1, 2, 1, 1, 2, 1, 1, 1, 2, 2\n1, 1, 1, 1, 1, 1, 30, 3, 1, 3, 1, 1, 2, 1\n1, 5, 1, 2, 1, 4, 2, 1, 1, 1, 1, 1, 1, 1\n1, 1, 2, 1, 1, 1, 3, 1, 1, 3, 1, 2, 1, 1\n1, 7, 3, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1\n6, 2, 1, 2, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 3, 7, 6, 1, 1, 1\n1, 1, 3, 4, 1, 1, 1, 1, 1, 4, 1, 2, 1, 1\n1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1\n1, 4, 1, 1, 4, 2, 1, 1, 1, 2, 1, 1, 2, 2\n11, 1, 1, 2, 1, 3, 1, 1, 1, 1, 1, 1, 12, 1\n1, 1, 3, 1, 1, 3, 1, 1, 2, 1, 1, 1, 1, 1\n6, 1, 1, 1, 1, 1, 4, 2, 1, 2, 1, 1, 1, 1\n1, 1, 1, 1, 3, 1, 1, 3, 1, 1, 1, 1, 1, 1\n1, 1, 1, 1, 1, 11, 1, 2, 1, 2, 1, 1, 1, 1\n4, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1\n1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2\n1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3\n1, 7, 1, 1, 2, 1, 2, 7, 1, 1, 1, 3, 1, 11\n1, 1, 2, 2, 2, 1, 1, 10, 1, 1, 5, 21, 1, 1\n11, 1, 2, 1, 1, 1, 1, 1, 5, 15, 3, 1, 1, 1\n1, 1, 1, 3, 1, 1, 2, 1, 3, 1, 1, 1, 1, 1\n1, 1, 6, 1, 1, 1, 1, 1, 1, 14, 1, 1, 1, 1\n17, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1, 4\n1, 1, 1, 6, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1\n1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1, 1, 14, 1\n3, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1, 3, 1\n1, 2, 1, 12, 1, 1, 1, 1, 8, 2, 1, 1, 1, 2\n1, 1, 3, 1, 1, 6, 1, 1, 1, 3, 1, 1, 2, 1\n1, 1, 1, 1, 4, 1, 1, 2, 1, 3, 2, 4, 1, 3\n1, 1, 1, 1, 1, 7, 1, 1, 2, 1, 1, 2, 13, 2\n1, 1, 1, 1, 9, 1, 1, 1, 1, 1, 1, 1, 1, 1\n9, 1, 2, 5, 7, 1, 1, 1, 2, 9, 2, 2, 13, 1\n1, 1, 1, 2, 1, 3, 1, 1, 6, 1, 3, 1, 1, 3\n1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 4, 1, 5, 1\n4, 1, 2, 3, 3]"] ;
1 [label="X[990] <= 0.5\ngini = 0.995\nsamples = 1040\nvalue = [2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1\n1, 1, 1, 8, 1, 1, 3, 1, 2, 1, 1, 1, 2, 1\n1, 1, 1, 2, 1, 1, 1, 1, 1, 4, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 4, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1\n1, 3, 2, 2, 1, 2, 1, 1, 2, 1, 1, 1, 2, 2\n1, 1, 1, 1, 1, 1, 30, 3, 1, 3, 1, 1, 2, 1\n1, 5, 1, 2, 1, 4, 2, 1, 1, 1, 1, 1, 1, 1\n1, 1, 2, 1, 1, 1, 3, 1, 1, 3, 1, 2, 1, 1\n1, 7, 3, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1\n6, 2, 1, 2, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1\n1, 1, 1, 1, 1, 1, 1, 1, 3, 7, 6, 1, 1, 1\n1, 1, 3, 4, 1, 1, 1, 1, 1, 4, 1, 2, 1, 1\n1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1\n1, 4, 1, 1, 4, 2, 1, 1, 1, 2, 1, 1, 2, 2\n11, 1, 1, 2, 1, 3, 1, 1, 1, 1, 1, 1, 12, 1\n1, 1, 3, 1, 1, 3, 1, 1, 2, 1, 1, 1, 1, 1\n6, 1, 0, 1, 1, 1, 4, 2, 1, 2, 1, 1, 1, 1\n1, 1, 1, 1, 3, 1, 1, 3, 1, 1, 1, 0, 1, 1\n1, 1, 1, 1, 1, 9, 1, 2, 1, 2, 1, 1, 1, 1\n4, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1\n1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2\n1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3\n1, 7, 1, 1, 2, 1, 2, 7, 1, 1, 1, 1, 1, 11\n1, 1, 2, 2, 2, 1, 1, 10, 1, 1, 5, 21, 1, 1\n1, 1, 2, 1, 1, 1, 1, 1, 5, 15, 3, 1, 1, 1\n1, 1, 1, 3, 1, 1, 2, 1, 3, 1, 1, 0, 1, 1\n1, 1, 6, 1, 1, 1, 1, 1, 1, 14, 1, 1, 1, 1\n16, 1, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1, 4\n1, 1, 1, 6, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1\n1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1, 1, 0, 1\n3, 1, 1, 1, 2, 1, 2, 2, 1, 1, 1, 1, 3, 1\n1, 2, 1, 12, 1, 1, 1, 1, 8, 2, 0, 1, 1, 2\n1, 1, 3, 1, 1, 6, 1, 1, 1, 3, 1, 1, 2, 0\n1, 1, 1, 1, 4, 1, 1, 2, 1, 3, 2, 4, 1, 3\n1, 1, 1, 1, 1, 7, 1, 1, 2, 1, 0, 1, 3, 2\n1, 1, 1, 0, 9, 1, 1, 1, 1, 1, 1, 1, 1, 1\n9, 1, 2, 5, 6, 1, 1, 1, 2, 9, 2, 2, 13, 1\n1, 1, 1, 2, 1, 3, 1, 1, 6, 1, 3, 1, 0, 3\n1, 0, 1, 1, 2, 0, 1, 2, 1, 1, 0, 1, 5, 1\n4, 1, 0, 3, 3]"] ;

Recommended answer

1. Variant A: Feature selection with scikit-learn

For feature selection, scikit-learn offers a lot of different approaches: https://scikit-learn.org/stable/modules/feature_selection.html

That page sums up the comments from above.
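As a minimal sketch of that approach, a univariate filter such as SelectKBest with f_regression fits the question's setup (binary features, continuous target). The random data and the choice k=50 below are placeholder assumptions, not from the original post:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

# Placeholder data shaped like the question: 1000 observations,
# 2000 binary features, one continuous target.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2000))
y = rng.normal(size=1000)

# Score each feature with a univariate F-test against the continuous
# target and keep the 50 highest-scoring ones (k is a free choice).
selector = SelectKBest(score_func=f_regression, k=50)
X_selected = selector.fit_transform(X, y)

# Column indices of the selected features in the original matrix.
print(selector.get_support(indices=True))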

2. Variant B: Feature selection with linear regression

You can also read off feature importance if you run a linear regression on the data: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html. The reg.coef_ attribute gives you the coefficients for your features; the higher the absolute value, the more important the feature. For example, 0.8 would be a really important feature, whereas 0.00001 would not be important.
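A minimal sketch of reading coefficients this way is below; the data is a placeholder. Note that with more variables (2000) than observations (1000) the ordinary least-squares solution is not unique, so a penalized model such as Lasso is often preferred in practice:

import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data with the question's dimensions.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2000))
y = rng.normal(size=1000)

reg = LinearRegression().fit(X, y)

# reg.coef_ holds one coefficient per feature; rank features by
# absolute coefficient size, largest first.
ranking = np.argsort(np.abs(reg.coef_))[::-1]
print(ranking[:10])             # indices of the ten largest coefficients
print(reg.coef_[ranking[:10]])  # their values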

3. Variant C: PCA (not applicable in the binary case)

Why do you want to kill your variables? I would recommend you use PCA, principal component analysis: https://en.wikipedia.org/wiki/Principal_component_analysis.

The basic concept is to transform your 2000 features into a smaller space (maybe 1000, or whatever) while still being mathematically useful.

Scikit-learn has a nice package for it: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
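A minimal sketch of that package in use, again with placeholder data and an arbitrary choice of 100 components:

import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: 1000 observations, 2000 binary features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2000)).astype(float)

# Project the 2000 features onto 100 principal components.
pca = PCA(n_components=100)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (1000, 100)
print(pca.explained_variance_ratio_.sum())  # fraction of variance kept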
