Topic Modeling tool for large data set (30GB)


Problem description

I'm looking for a topic modeling tool that can be applied to a large data set.

My current training data set is 30 GB. I tried MALLET topic modeling, but I always get an OutOfMemoryError.

If you have any tips, please let me know.

Recommended answer

There are many options available to you, and this response is agnostic as to how they compare.

I think that the important thing with such a large dataset is the method of approximate posterior inference used, not necessarily the software implementation. According to this paper, online variational Bayes inference is much more efficient, in terms of both time and space, than Gibbs sampling. Though I've never used it, the gensim package looks good. It's written in Python, and there are in-depth tutorials on the project's webpage.
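As a quick sketch of what that streaming workflow can look like (this code is illustrative, not from the original answer): assume a file corpus.txt with one whitespace-tokenized document per line; the file name, vocabulary filters, and model parameters below are all placeholder assumptions.

    # Minimal online-LDA sketch with gensim; the corpus is streamed from
    # disk one document at a time, so the full 30 GB never sits in memory.
    from gensim import corpora, models

    class StreamedCorpus:
        """Yields one bag-of-words vector per document instead of a full list."""
        def __init__(self, path, dictionary):
            self.path = path
            self.dictionary = dictionary

        def __iter__(self):
            with open(self.path) as f:
                for line in f:
                    yield self.dictionary.doc2bow(line.lower().split())

    # First streaming pass: build the vocabulary.
    dictionary = corpora.Dictionary(
        line.lower().split() for line in open('corpus.txt'))
    dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop rare/ubiquitous terms

    # Second streaming pass(es): fit LDA with online variational Bayes updates.
    corpus = StreamedCorpus('corpus.txt', dictionary)
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=100,
                          update_every=1, chunksize=10000, passes=1)

    for topic in lda.print_topics(10):
        print(topic)

Because LdaModel only needs an iterable of bag-of-words vectors, peak memory is governed by chunksize (the mini-batch size of the online updates) rather than by the size of the corpus.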

For code that comes straight from the source, see the webpage of David Blei, one of the authors of the LDA model, here. He links to more than a few implementations in a variety of languages (R, Java, C++).

