Python TfidfVectorizer throwing: empty vocabulary; perhaps the documents only contain stop words
Question
I'm trying to use Python's TfidfVectorizer to transform a corpus of text. However, when I try to fit_transform it, I get a value error: ValueError: empty vocabulary; perhaps the documents only contain stop words.
In [69]: TfidfVectorizer().fit_transform(smallcorp)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-69-ac16344f3129> in <module>()
----> 1 TfidfVectorizer().fit_transform(smallcorp)
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
1217 vectors : array, [n_samples, n_features]
1218 """
-> 1219 X = super(TfidfVectorizer, self).fit_transform(raw_documents)
1220 self._tfidf.fit(X)
1221 # X is already a transformed view of raw_documents so
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in fit_transform(self, raw_documents, y)
778 max_features = self.max_features
779
--> 780 vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
781 X = X.tocsc()
782
/Users/maxsong/anaconda/lib/python2.7/site-packages/sklearn/feature_extraction/text.pyc in _count_vocab(self, raw_documents, fixed_vocab)
725 vocabulary = dict(vocabulary)
726 if not vocabulary:
--> 727 raise ValueError("empty vocabulary; perhaps the documents only"
728 " contain stop words")
729
ValueError: empty vocabulary; perhaps the documents only contain stop words
I read through the SO question here: Problems using a custom vocabulary for TfidfVectorizer scikit-learn and tried ogrisel's suggestion of using TfidfVectorizer(**params).build_analyzer()(dataset2) to check the results of the text analysis step, and that seems to be working as expected. Snippet below:
In [68]: TfidfVectorizer().build_analyzer()(smallcorp)
Out[68]:
[u'due',
u'to',
u'lack',
u'of',
u'personal',
u'biggest',
u'education',
u'and',
u'husband',
 u'to',
 ...]
Is there something else that I am doing wrong? The corpus I am feeding it is just one giant long string punctuated by newlines.
Thanks!
Answer
I guess it's because you have just one string. fit_transform expects an iterable of documents, and iterating over a single string yields individual characters, none of which survive tokenization. Try splitting it into a list of strings, e.g.:
In [51]: smallcorp
Out[51]: 'Ah! Now I have done Philosophy,\nI have finished Law and Medicine,\nAnd sadly even Theology:\nTaken fierce pains, from end to end.\nNow here I am, a fool for sure!\nNo wiser than I was before:'
In [52]: tf = TfidfVectorizer()
In [53]: tf.fit_transform(smallcorp.split('\n'))
Out[53]:
<6x28 sparse matrix of type '<type 'numpy.float64'>'
with 31 stored elements in Compressed Sparse Row format>
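The failure mode can be illustrated without scikit-learn at all. The sketch below assumes the vectorizer's default token pattern, r"(?u)\b\w\w+\b" (words of two or more word characters, as documented for CountVectorizer): when a bare string is passed, each "document" is a single character and therefore yields no tokens, so the vocabulary ends up empty; splitting on newlines produces real documents with real tokens.

```python
import re

# scikit-learn's default token_pattern: runs of 2+ word characters
TOKEN_PATTERN = re.compile(r"(?u)\b\w\w+\b")

corpus = ("Ah! Now I have done Philosophy,\n"
          "I have finished Law and Medicine")

# Passing the string directly: the vectorizer iterates over it character
# by character, so every "document" is one character and yields no tokens.
tokens_per_char = [TOKEN_PATTERN.findall(doc) for doc in corpus]
print(all(len(t) == 0 for t in tokens_per_char))  # True -> empty vocabulary

# Splitting on newlines gives one document per line, each with real tokens.
tokens_per_line = [TOKEN_PATTERN.findall(doc) for doc in corpus.split("\n")]
print(tokens_per_line[0])  # ['Ah', 'Now', 'have', 'done', 'Philosophy']
```

Note that single-character words like "I" are also dropped by this pattern, which is why build_analyzer() on the full string still looked fine: there the whole string was treated as one document.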