Efficient way to create term density matrix from pandas DataFrame

Problem description

I am trying to create a term density matrix from a pandas dataframe, so I can rate terms appearing in the dataframe. I also want to be able to keep the 'spatial' aspect of my data (see comment at the end of post for an example of what I mean).

I am new to pandas and NLTK, so I expect my problem to be soluble with some existing tools.

I have a dataframe which contains two columns of interest: say 'title' and 'page'

    import pandas as pd
    import re

    df = pd.DataFrame({'title':['Delicious boiled egg','Fried egg ','Split orange','Something else'], 'page':[1, 2, 3, 4]})
    df.head()

       page                 title
    0     1  Delicious boiled egg
    1     2            Fried egg 
    2     3          Split orange
    3     4        Something else

My goal is to clean up the text, and pass terms of interest to a TDM dataframe. I use two functions to help me clean up the strings

    import nltk.classify
    from nltk.tokenize import wordpunct_tokenize
    from nltk.corpus import stopwords
    import string   

    def remove_punct(strin):
        '''
        returns a string with the punctuation marks removed, and all lower case letters
        input: strin, an ascii string. convert using strin.encode('ascii','ignore') if it is unicode 
        '''
        return strin.translate(string.maketrans("",""), string.punctuation).lower()

    sw = stopwords.words('english')

    def tok_cln(strin):
        '''
        tokenizes string and removes stopwords
        '''
        return set(nltk.wordpunct_tokenize(strin)).difference(sw)
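
As written, remove_punct relies on the Python 2 two-argument form of string.maketrans (and byte-string translate). On Python 3, a rough equivalent would be the following sketch:

    import string

    def remove_punct_py3(strin):
        '''
        Python 3 sketch of remove_punct: str.maketrans with three arguments
        builds a translation table whose third argument lists characters to delete
        '''
        return strin.translate(str.maketrans('', '', string.punctuation)).lower()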

And one function which does the dataframe manipulation

    def df2tdm(df,titleColumn,placementColumn,newPlacementColumn):
        '''
        takes in a DataFrame with at least two columns, and returns a dataframe with the term density matrix
        of the words appearing in the titleColumn
        Inputs: df, a DataFrame containing titleColumn, placementColumn among others
        Outputs: tdm_df, a DataFrame containing newPlacementColumn and columns with all the terms in df[titleColumn]
        '''
        tdm_df = pd.DataFrame(index=df.index, columns=[newPlacementColumn])
        tdm_df = tdm_df.fillna(0)
        for idx in df.index:
            for word in tok_cln( remove_punct(df[titleColumn][idx].encode('ascii','ignore')) ):
                if word not in tdm_df.columns:
                    newcol = pd.DataFrame(index = df.index, columns = [word])
                    tdm_df = tdm_df.join(newcol)
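                    # NB: join() copies the entire growing frame for every new
                    # word, which is likely what dominates the run time here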
                tdm_df[newPlacementColumn][idx] = df[placementColumn][idx]
                tdm_df[word][idx] = 1
        return tdm_df.fillna(0,inplace = False)

    tdm_df = df2tdm(df,'title','page','pub_page')
    tdm_df.head()

This returns

       pub_page  boiled  egg  delicious  fried  orange  split  something  else
    0         1       1    1          1      0       0      0          0     0
    1         2       0    1          0      1       0      0          0     0
    2         3       0    0          0      0       1      1          0     0
    3         4       0    0          0      0       0      0          1     1

But it is painfully slow when parsing large sets (an output of hundreds of thousands of rows and thousands of columns). My two questions:

Can I speed up this implementation?

Is there some other tool I could use to get this done?

I want to be able to keep the 'spatial' aspect of my data, for example if 'egg' appears very often in pages 1-10 and then reappears often in pages 500-520, I want to know that.

Recommended answer

You can use scikit-learn's CountVectorizer:

In [14]: from sklearn.feature_extraction.text import CountVectorizer

In [15]: countvec = CountVectorizer()

In [16]: countvec.fit_transform(df.title)
Out[16]: 
<4x8 sparse matrix of type '<type 'numpy.int64'>'
    with 9 stored elements in Compressed Sparse Column format>

It returns the term-document matrix in a sparse representation, because such a matrix is usually huge and, well, sparse.
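
For large corpora it can be worth staying in the sparse representation instead of densifying. As a sketch (not from the original answer), each term's total count can be read straight off the sparse matrix:

    import numpy as np

    X = countvec.fit_transform(df.title)
    # column sums give each term's total count without densifying the matrix
    totals = np.asarray(X.sum(axis=0)).ravel()
    term_counts = dict(zip(countvec.get_feature_names(), totals))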

For your particular example I guess converting it back to a DataFrame would still work:

In [17]: pd.DataFrame(countvec.fit_transform(df.title).toarray(), columns=countvec.get_feature_names())
Out[17]: 
   boiled  delicious  egg  else  fried  orange  something  split
0       1          1    1     0      0       0          0      0
1       0          0    1     0      1       0          0      0
2       0          0    0     0      0       1          0      1
3       0          0    0     1      0       0          1      0

[4 rows x 8 columns]
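
To keep the 'spatial' aspect asked about in the question, one option is to reattach the page column and aggregate term counts over page windows. A minimal sketch (the 10-page window and the names below are arbitrary choices, not part of the original answer):

    tdm_df = pd.DataFrame(countvec.fit_transform(df.title).toarray(),
                          columns=countvec.get_feature_names())
    tdm_df['pub_page'] = df['page'].values

    # sum each term's occurrences within 10-page blocks, so a term that
    # clusters in pages 1-10 and again around pages 500-520 shows two peaks
    window = 10
    blocks = tdm_df['pub_page'] // window
    per_block = tdm_df.drop('pub_page', axis=1).groupby(blocks).sum()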
