sklearn TfidfVectorizer: Generate Custom NGrams by not removing stopwords in them


Problem Description

Below is my code:

from sklearn.feature_extraction.text import TfidfVectorizer

sklearn_tfidf = TfidfVectorizer(ngram_range=(3, 3), stop_words=stopwordslist, norm='l2', min_df=0, use_idf=True, smooth_idf=False, sublinear_tf=True)
sklearn_representation = sklearn_tfidf.fit_transform(documents)

It generates trigrams after removing all the stopwords.

I want it to allow trigrams that have stopwords in the middle (but not at the beginning or end).

Does a custom processor need to be written for this? Any suggestions are welcome.
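For illustration, here is a minimal sketch of the current behaviour (the document and stop word list are made up; get_feature_names_out requires scikit-learn >= 1.0, older versions use get_feature_names):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the board of directors approved the new budget plan"]

# Stop words are removed before n-grams are built, so a trigram such as
# "board of directors" can never appear in the vocabulary.
v = TfidfVectorizer(ngram_range=(3, 3), stop_words=['the', 'of'])
v.fit(docs)
print(v.get_feature_names_out())
# ['approved new budget' 'board directors approved' 'directors approved new'
#  'new budget plan']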

Answer

Yes, you need to supply your own analyzer function which will convert the documents to features as per your requirements.

According to the documentation:

analyzer : string, {'word', 'char', 'char_wb'} or callable

....
....
If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input.

In that custom callable you need to take care of first splitting the sentence into different parts, removing special characters like commas, braces, and symbols, converting the tokens to lower case, and then converting them to n_grams.
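For example, a minimal sketch of such a callable (the regex mirrors sklearn's default token pattern; the name custom_analyzer and the stop word list ASSUMED_STOP_WORDS are illustrative assumptions, not part of sklearn):

import re

ASSUMED_STOP_WORDS = frozenset(['the', 'of', 'and'])

def custom_analyzer(doc, n=3):
    # Preprocess + tokenize: lower-case, keep tokens of 2+ alphanumeric chars
    tokens = re.findall(r'\b\w\w+\b', doc.lower())
    ngrams = []
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        # Keep the n-gram unless it starts or ends with a stop word;
        # stop words in the middle are allowed
        if gram[0] not in ASSUMED_STOP_WORDS and gram[-1] not in ASSUMED_STOP_WORDS:
            ngrams.append(' '.join(gram))
    return ngrams

It can then be passed directly, e.g. TfidfVectorizer(analyzer=custom_analyzer).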

The default implementation processes a single sentence in the following order (a quick way to inspect steps 1-5 is shown after the list):

  1. Decoding: decode the sentence according to the given encoding (default 'utf-8')
  2. Preprocessing: convert the sentence to lower case
  3. Tokenizing: extract single-word tokens from the sentence (the default regexp selects tokens of 2 or more alphanumeric characters)
  4. Stop word removal: remove the tokens from the previous step that appear in the stop word list
  5. N_gram creation: after stop word removal, the remaining tokens are arranged into the required n_grams
  6. Removing too rare or too common features: drop terms whose document frequency is greater than max_df or lower than min_df
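Steps 1-5 make up the analyzer itself; you can inspect them by calling build_analyzer(), which returns the default callable (step 6 only happens later, during fit):

from sklearn.feature_extraction.text import TfidfVectorizer

v = TfidfVectorizer(ngram_range=(3, 3), stop_words=['the', 'of'])
analyze = v.build_analyzer()  # decode + preprocess + tokenize + stop words + n-grams
print(analyze("the board of directors approved the plan"))
# ['board directors approved', 'directors approved plan']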

You need to handle all of this yourself if you want to pass a custom callable to the analyzer param of TfidfVectorizer.

Alternatively, you can extend the TfidfVectorizer class and only override the last two steps of the analyzer (stop word removal and n-gram creation). Something like this:

from sklearn.feature_extraction.text import TfidfVectorizer

class NewTfidfVectorizer(TfidfVectorizer):
    def _word_ngrams(self, tokens, stop_words=None):
        # First build the n-grams WITHOUT removing any stop words
        # (passing None skips the parent's stop word filtering)
        tokens = super(TfidfVectorizer, self)._word_ngrams(tokens, None)

        if stop_words is not None:
            new_tokens = []
            for token in tokens:
                split_words = token.split(' ')

                # Discard an n-gram only if it starts or ends with a stop word;
                # stop words in the middle are kept
                if split_words[0] not in stop_words and split_words[-1] not in stop_words:
                    new_tokens.append(token)
            return new_tokens

        return tokens

Then, use it like this:

vectorizer = NewTfidfVectorizer(stop_words='english', ngram_range=(3,3))
vectorizer.fit(data)
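As a rough check (the exact vocabulary depends on sklearn's built-in English stop word list, and the override relies on the private _word_ngrams method, which may change between versions), fitting on a toy sentence should now keep trigrams with a stop word in the middle:

vectorizer = NewTfidfVectorizer(stop_words='english', ngram_range=(3, 3))
vectorizer.fit(["the board of directors approved the plan"])
print(vectorizer.get_feature_names_out())
# ['approved the plan' 'board of directors']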
