Pickle dump huge file without memory error


Problem Description

I have a program where I basically adjust the probability of certain things happening based on what is already known. My file of data is already saved as a pickled dictionary object in Dictionary.txt.

The problem is that every time I run the program it pulls in Dictionary.txt, turns it into a dictionary object, makes its edits, and overwrites Dictionary.txt. This is pretty memory intensive, as Dictionary.txt is 123 MB. I get a MemoryError when I dump it; everything seems fine when I pull it in.
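For reference, a minimal sketch of the load/edit/dump cycle described above, assuming Dictionary.txt was written by pickle in binary mode; the probabilities name is illustrative. Dumping straight to the file handle with a binary protocol avoids building the entire pickle as one in-memory string the way pickle.dumps() would.

import pickle

# load the existing dictionary (assumes Dictionary.txt was written by pickle in binary mode)
with open('Dictionary.txt', 'rb') as f:
    probabilities = pickle.load(f)

# ... adjust the probabilities here ...

# dump straight to the file handle; HIGHEST_PROTOCOL is a compact binary format,
# and writing to the file avoids building the whole pickle in memory as pickle.dumps() does
with open('Dictionary.txt', 'wb') as f:
    pickle.dump(probabilities, f, protocol=pickle.HIGHEST_PROTOCOL)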

  • Is there a better (more efficient) way of doing the edits? (Perhaps without having to overwrite the entire file every time.)

Is there a way to invoke garbage collection (through the gc module)? (I already have it auto-enabled via gc.enable().)
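As a point of reference, gc.enable() only turns automatic collection back on; an explicit pass can be forced with gc.collect(), though whether that helps around a single large dump is a separate question. A minimal sketch:

import gc

gc.enable()           # automatic collection is on by default; this just re-enables it
# ... build / edit the large dictionary ...
found = gc.collect()  # force a full collection pass; returns the number of unreachable objects found
print(found)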

I know that besides readlines() you can read a file line by line. Is there a way to edit the dictionary incrementally, line by line, when I already have a fully completed dictionary object file in the program?
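One standard-library option for incremental, key-by-key edits without rewriting the whole file is shelve, sketched here under the assumption that the data fits a string-keyed mapping; the probabilities.db file name is illustrative.

import shelve

# a persistent, dict-like store backed by dbm; keys must be strings
with shelve.open('probabilities.db') as db:
    db['some_event'] = 0.25             # only this entry is written back to disk
    value = db.get('other_event', 0.0)  # individual lookups read from disk as needed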

Are there any other solutions?

Thank you for your time.

Recommended Answer

I was having the same issue. I used joblib and it got the job done, in case someone wants to know about other possibilities.

Save the model to disk:

from sklearn.externals import joblib
filename = 'finalized_model.sav'
joblib.dump(model, filename)  

Some time later... load the model from disk:

loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test)
print(result)
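Applied to the dictionary from the question rather than a fitted model, the same idea looks like the sketch below. Note that sklearn.externals.joblib is deprecated in newer scikit-learn releases, where joblib is installed and imported on its own; the Dictionary.joblib file name, the probabilities variable, and the compress level are illustrative.

import joblib

# dump the large dictionary; joblib writes to disk and can compress on the fly
# (the file name and compress level are illustrative)
joblib.dump(probabilities, 'Dictionary.joblib', compress=3)

# later: load it back
probabilities = joblib.load('Dictionary.joblib')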
