train Gensim word2vec using large txt file


Question

I have a large txt file (150MB) that looks like this:

'intrepid', 'bumbling', 'duo', 'deliver', 'good', 'one', 'better', 'offering', 'considerable', 'cv', 'freshly', 'qualified', 'private', ...

I want to train a word2vec model using that file, but it runs into RAM problems. I don't know how to feed the txt file to the word2vec model. This is my code; I know it has a problem, but I don't know where it is.

import gensim

f = open('your_file1.txt')
for line in f:
    b = line
    model = gensim.models.Word2Vec([b], min_count=1, size=32)

w1 = "bad"
model.wv.most_similar(positive=w1)

Answer

You can make an iterator that reads your file one line at a time instead of reading everything into memory at once. The following should work:

from gensim.models import Word2Vec

class SentenceIterator:
    def __init__(self, filepath):
        self.filepath = filepath

    def __iter__(self):
        # Re-opening the file on each pass lets gensim iterate over the
        # corpus several times (once to build the vocabulary, then once
        # per training epoch) while holding only one line in memory.
        for line in open(self.filepath):
            yield line.split()

sentences = SentenceIterator('datadir/textfile.txt')
model = Word2Vec(sentences)
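
One caveat: line.split() only splits on whitespace, so with the sample data above each token would keep its quotes and trailing comma (e.g. 'intrepid',). Below is a minimal sketch of a cleaning variant; the regex rule, class name, and file path are illustrative assumptions, not part of the original answer. It also assumes gensim 4.x, where the old size parameter is named vector_size.

import re
from gensim.models import Word2Vec

class CleanSentenceIterator:
    def __init__(self, filepath):
        self.filepath = filepath

    def __iter__(self):
        for line in open(self.filepath):
            # Illustrative assumption: tokens look like 'word', so keep
            # only runs of letters, dropping the quotes and commas.
            tokens = re.findall(r"[A-Za-z]+", line)
            if tokens:
                yield tokens

sentences = CleanSentenceIterator('datadir/textfile.txt')
model = Word2Vec(sentences, min_count=1, vector_size=32)  # use size=32 on gensim < 4.0
print(model.wv.most_similar(positive="bad"))

This keeps the same streaming design as the answer above, so memory use stays constant regardless of file size; only the tokenization step changes.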
