How can I get the highest frequency terms out of tf-idf vectors, for each file, in scikit-learn?


Question


I am trying to get the highest frequency terms out of vectors in scikit-learn. In the example below it is done for each category, but I want it for each file inside the categories.

https://github.com/scikit-learn/scikit-learn/blob/master/examples/document_classification_20newsgroups.py

    if opts.print_top10:
        print "top 10 keywords per class:"
        for i, category in enumerate(categories):
            top10 = np.argsort(clf.coef_[i])[-10:]
            print trim("%s: %s" % (category, " ".join(feature_names[top10])))


I want to do this for each file in the testing dataset instead of each category. Where should I be looking?

Thanks


s/discrimitive/highest frequency/g (sorry for the confusion)

Answer


You can use the result of transform together with get_feature_names to obtain the per-term tf-idf weights for a given document.

import numpy as np

X = vectorizer.transform(docs)                    # one tf-idf row per document
terms = np.array(vectorizer.get_feature_names())  # vocabulary in column order
terms_for_first_doc = zip(terms, X.toarray()[0])  # (term, weight) pairs for doc 0

