Encoding text in ML classifier


Problem Description


I am trying to build an ML model. However, I am having difficulty understanding where to apply the encoding. Please see below the steps and functions used to replicate the process I have been following.

First I split the dataset into train and test:

# Import the resampling package
import pandas as pd
from sklearn.naive_bayes import MultinomialNB
import string
from nltk.corpus import stopwords
import re
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import RegexpTokenizer
from sklearn.utils import resample
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
# Split into training and test sets

# Testing Count Vectorizer

X = df[['Text']] 
y = df['Label']


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=40)

# Returning to one dataframe
training_set = pd.concat([X_train, y_train], axis=1)

Now I apply the (under) sampling:

# Separating classes
spam = training_set[training_set.Label == 1]
not_spam = training_set[training_set.Label == 0]

# Undersampling the majority
undersample = resample(not_spam, 
                       replace=True, 
                       n_samples=len(spam), #set the number of samples to equal the number of the minority class
                       random_state=40)
# Returning to new training set
undersample_train = pd.concat([spam, undersample])

Then I apply the chosen algorithm:

full_result = pd.DataFrame(columns = ['Preprocessing', 'Model', 'Precision', 'Recall', 'F1-score', 'Accuracy'])

X, y = BOW(undersample_train)
full_result = full_result.append(training_naive(X_train, X_test, y_train, y_test, 'Count Vectorize'), ignore_index = True)

where BOW is defined as follows:

def BOW(data):
    
    df_temp = data.copy(deep = True)
    df_temp = basic_preprocessing(df_temp)

    count_vectorizer = CountVectorizer(analyzer=fun)
    count_vectorizer.fit(df_temp['Text'])

    list_corpus = df_temp["Text"].tolist()
    list_labels = df_temp["Label"].tolist()
    
    X = count_vectorizer.transform(list_corpus)
    
    return X, list_labels

basic_preprocessing is defined as follows:

def basic_preprocessing(df):
    
    df_temp = df.copy(deep = True)
    df_temp = df_temp.rename(index = str, columns = {'Clean_Titles_2': 'Text'})
    df_temp.loc[:, 'Text'] = [text_prepare(x) for x in df_temp['Text'].values]
    
    #le = LabelEncoder()
    #le.fit(df_temp['medical_specialty'])
    #df_temp.loc[:, 'class_label'] = le.transform(df_temp['medical_specialty'])
    
    tokenizer = RegexpTokenizer(r'\w+')
    df_temp["Tokens"] = df_temp["Text"].apply(tokenizer.tokenize)
    
    return df_temp

where text_prepare is:

def text_prepare(text):

    REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]')
    BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')
    STOPWORDS = set(stopwords.words('english'))
    
    text = text.lower()
    text = REPLACE_BY_SPACE_RE.sub('', text) # replace REPLACE_BY_SPACE_RE symbols by space in text
    text = BAD_SYMBOLS_RE.sub('', text) # delete symbols which are in BAD_SYMBOLS_RE from text
    words = text.split()
    i = 0
    while i < len(words):
        if words[i] in STOPWORDS:
            words.pop(i)
        else:
            i += 1
    text = ' '.join(map(str, words))# delete stopwords from text
    
    return text

and training_naive is defined as:

def training_naive(X_train_naive, X_test_naive, y_train_naive, y_test_naive, preproc):
    
    clf = MultinomialNB() # Multinomial Naive Bayes
    clf.fit(X_train_naive, y_train_naive)

    res = pd.DataFrame(columns = ['Preprocessing', 'Model', 'Precision', 'Recall', 'F1-score', 'Accuracy'])
    
    y_pred = clf.predict(X_test_naive)
    
    f1 = f1_score(y_pred, y_test_naive, average = 'weighted')
    pres = precision_score(y_pred, y_test_naive, average = 'weighted')
    rec = recall_score(y_pred, y_test_naive, average = 'weighted')
    acc = accuracy_score(y_pred, y_test_naive)
    
    res = res.append({'Preprocessing': preproc, 'Model': 'Naive Bayes', 'Precision': pres, 
                     'Recall': rec, 'F1-score': f1, 'Accuracy': acc}, ignore_index = True)

    return res 

As you can see, the order is:

  • define text_prepare for text cleaning;
  • define basic_preprocessing;
  • define BOW;
  • split the dataset into train and test;
  • apply the sampling;
  • apply the algorithm.

What I am not understanding is how to encode the text correctly so that the algorithm works. My dataset is called df and its columns are:

Label      Text                                 Year
1         bla bla bla                           2000
0         add some words                        2012
1         this is just an example               1998
0         unfortunately the code does not work  2018
0         where should I apply the encoding?    2000
0         What am I missing here?               2005

The order in which I apply BOW must be wrong, because I get this error: ValueError: could not convert string to float: 'Expect a good results if ... '

I followed the steps (and code) from this link: kaggle.com/ruzarx/oversampling-smote-and-adasyn . However, the sampling part there is wrong, as it should be applied only to the training set, i.e. after the split. The principle should be: (1) split into training/test; (2) apply resampling on the training set, so that the model is trained with balanced data; (3) apply the model to the test set and evaluate on it.
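
In pseudocode, the principle I am after looks roughly like this (undersample_majority is a hypothetical placeholder for the resample() block above):

# (1) split first
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=40)
# (2) resample only the training split (undersample_majority is a hypothetical helper)
balanced_train = undersample_majority(pd.concat([X_train, y_train], axis=1))
# (3) fit the model on balanced_train, then evaluate on the untouched X_test / y_test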

I will be happy to provide further information, data and/or code, but I think I have provided all the most relevant steps.

Thanks a lot.

Solution

You need a separate test-time BOW function that reuses the count vectorizer fitted during the training phase: the test set must only be transformed, never used for fitting.

Consider using a scikit-learn Pipeline to reduce the code verbosity; a minimal sketch follows the full solution below.

import pandas as pd
from sklearn.naive_bayes import MultinomialNB
import string
from nltk.corpus import stopwords
import re
from sklearn.model_selection import train_test_split
from io import StringIO
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import RegexpTokenizer
from sklearn.utils import resample
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score

def fun(text):
    # Custom analyzer for CountVectorizer: strip punctuation characters,
    # then keep only the words that are not English stopwords.
    remove_punc = [c for c in text if c not in string.punctuation]
    remove_punc = ''.join(remove_punc)
    cleaned = [w for w in remove_punc.split() if w.lower()
               not in stopwords.words('english')]
    return cleaned
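# For example, the analyzer yields:
#   fun("Hello, world! This is spam.")  ->  ['Hello', 'world', 'spam']
# (punctuation stripped, stopwords dropped, original casing kept)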
# Testing Count Vectorizer

def BOW(data):
    # Fit the vectorizer on the (training) data and return it alongside X and y,
    # so that the same fitted vocabulary can be reused for the test set.
    df_temp = data.copy(deep=True)
    df_temp = basic_preprocessing(df_temp)

    count_vectorizer = CountVectorizer(analyzer=fun)
    count_vectorizer.fit(df_temp['Text'])

    list_corpus = df_temp["Text"].tolist()
    list_labels = df_temp["Label"].tolist()

    X = count_vectorizer.transform(list_corpus)

    return X, list_labels, count_vectorizer

def test_BOW(data, count_vectorizer):
    # Encode the test set with the vectorizer fitted on the training data:
    # transform only -- never fit -- so train and test share one vocabulary.
    df_temp = data.copy(deep=True)
    df_temp = basic_preprocessing(df_temp)

    list_corpus = df_temp["Text"].tolist()
    list_labels = df_temp["Label"].tolist()

    X = count_vectorizer.transform(list_corpus)

    return X, list_labels

def basic_preprocessing(df):

    df_temp = df.copy(deep=True)
    df_temp = df_temp.rename(index=str, columns={'Clean_Titles_2': 'Text'})
    df_temp.loc[:, 'Text'] = [text_prepare(x) for x in df_temp['Text'].values]


    tokenizer = RegexpTokenizer(r'\w+')
    df_temp["Tokens"] = df_temp["Text"].apply(tokenizer.tokenize)

    return df_temp


def text_prepare(text):

    REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]')
    BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')
    STOPWORDS = set(stopwords.words('english'))

    text = text.lower()
    # replace REPLACE_BY_SPACE_RE symbols by space in text
    text = REPLACE_BY_SPACE_RE.sub(' ', text)
    # delete symbols which are in BAD_SYMBOLS_RE from text
    text = BAD_SYMBOLS_RE.sub('', text)
    words = text.split()
    i = 0
    while i < len(words):
        if words[i] in STOPWORDS:
            words.pop(i)
        else:
            i += 1
    text = ' '.join(map(str, words))  # delete stopwords from text

    return text
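# For example, the cleaning yields:
#   text_prepare("Unfortunately, the code does not work!")  ->  'unfortunately code work'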

s = """Label      Text                                 Year
1         bla bla bla                           2000
0         add some words                        2012
1         this is just an example               1998
0         unfortunately the code does not work  2018
0         where should I apply the encoding?    2000
0         What am I missing here?               2005"""


df = pd.read_csv(StringIO(s), sep=r'\s{2,}', engine='python')  # regex separator requires the python engine


X = df[['Text']]
y = df['Label']


X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=40)

# Returning to one dataframe
training_set = pd.concat([X_train, y_train], axis=1)
# Separating classes
spam = training_set[training_set.Label == 1]
not_spam = training_set[training_set.Label == 0]

# Undersampling the majority
undersample = resample(not_spam,
                       replace=True,
                       # set the number of samples to equal the number of the minority class
                       n_samples=len(spam),
                       random_state=40)
# Returning to new training set
undersample_train = pd.concat([spam, undersample])
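# Note: replace=True draws with replacement; a strict undersample of the
# majority class would more typically use replace=False.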

full_result = pd.DataFrame(columns=['Preprocessing', 'Model', 'Precision',
                                    'Recall', 'F1-score', 'Accuracy'])
train_x, train_y, count_vectorizer  = BOW(undersample_train)
testing_set = pd.concat([X_test, y_test], axis=1)
test_x, test_y = test_BOW(testing_set, count_vectorizer)



def training_naive(X_train_naive, X_test_naive, y_train_naive, y_test_naive, preproc):
    
    clf = MultinomialNB() # Multinomial Naive Bayes
    clf.fit(X_train_naive, y_train_naive)

    res = pd.DataFrame(columns = ['Preprocessing', 'Model', 'Precision', 'Recall', 'F1-score', 'Accuracy'])
    
    y_pred = clf.predict(X_test_naive)
    
    # sklearn metrics expect (y_true, y_pred)
    f1 = f1_score(y_test_naive, y_pred, average = 'weighted')
    pres = precision_score(y_test_naive, y_pred, average = 'weighted')
    rec = recall_score(y_test_naive, y_pred, average = 'weighted')
    acc = accuracy_score(y_test_naive, y_pred)
    
    # DataFrame.append was removed in pandas 2.0; build the row and concat instead
    row = pd.DataFrame([{'Preprocessing': preproc, 'Model': 'Naive Bayes', 'Precision': pres,
                         'Recall': rec, 'F1-score': f1, 'Accuracy': acc}])
    res = pd.concat([res, row], ignore_index = True)

    return res 

full_result = pd.concat([full_result,
                         training_naive(train_x, test_x, train_y, test_y, 'Count Vectorize')],
                        ignore_index = True)
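
As suggested above, a scikit-learn Pipeline can bundle the vectorizer and the classifier, so the vectorizer is fitted on the training text once and automatically reused at predict time. A minimal sketch (not part of the original answer), assuming the fun analyzer and the frames built above:

from sklearn.pipeline import Pipeline

# Fit vectorizer + classifier together on the training split
pipe = Pipeline([
    ('bow', CountVectorizer(analyzer=fun)),
    ('clf', MultinomialNB()),
])

train_df = basic_preprocessing(undersample_train)
test_df = basic_preprocessing(pd.concat([X_test, y_test], axis=1))

pipe.fit(train_df['Text'], train_df['Label'])
y_pred = pipe.predict(test_df['Text'])
print(accuracy_score(test_df['Label'], y_pred))

With this, the separate test_BOW helper becomes unnecessary: Pipeline.predict applies the already-fitted CountVectorizer to the new text before calling the classifier.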
