How to cache in IPython Notebook?
Problem Description

Environment:
- Python 3
- IPython 3.2

Every time I shut down an IPython notebook and re-open it, I have to re-run all the cells. But some cells involve intensive computation.
By contrast, knitr in R saves the results in a cache directory by default, so only new code and new settings invoke computation.
I looked at ipycache, but it seems to cache a single cell rather than the whole notebook. Is there a counterpart of knitr's cache in IPython?
Recommended Answer
Can you give an example of what you are trying to do? When I run something expensive in an IPython Notebook, I almost always write it to disk afterward. For example, if my data is a list of JSON objects, I write it to disk as line-separated JSON-formatted strings:
import json

with open('path_to_file.json', 'a') as file:
    for item in data:
        line = json.dumps(item)
        file.write(line + '\n')
You can then read the data back in the same way:
data = []
with open('path_to_file.json', 'r') as file:
    for line in file:
        data_item = json.loads(line)
        data.append(data_item)
I think this is good practice generally speaking because it provides you with a backup. You can also use pickle for the same thing. If your data is really big, you can use gzip.open to write directly to a compressed file.
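As a sketch of the gzip.open approach mentioned above (the file name and sample data here are illustrative, not from the original answer):

```python
import gzip
import json

data = [{'id': 1, 'value': 'a'}, {'id': 2, 'value': 'b'}]  # example data

# Write line-separated JSON directly to a gzip-compressed file;
# 'wt' opens the compressed stream in text mode
with gzip.open('path_to_file.json.gz', 'wt', encoding='utf-8') as f:
    for item in data:
        f.write(json.dumps(item) + '\n')

# Read it back the same way, decompressing transparently
loaded = []
with gzip.open('path_to_file.json.gz', 'rt', encoding='utf-8') as f:
    for line in f:
        loaded.append(json.loads(line))
```

The only change from the uncompressed version is swapping `open` for `gzip.open` with a text-mode flag.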
Edit
To save a scikit-learn model to disk, use joblib (via its dump and load functions).
from sklearn.cluster import KMeans
km = KMeans(n_clusters=num_clusters)
km.fit(some_data)

# Note: in newer versions of scikit-learn, sklearn.externals.joblib
# has been removed; use `import joblib` instead
from sklearn.externals import joblib

# dump to pickle
joblib.dump(km, 'model.pkl')

# and reload from pickle
km = joblib.load('model.pkl')
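To get knitr-like behavior, where a value is recomputed only when no cached copy exists on disk, the write-then-reload pattern above can be wrapped in a small helper. This is a minimal sketch, assuming a pickle-serializable result; the helper name and file path are made up for illustration:

```python
import os
import pickle

def cached(path, compute):
    """Load the pickled result from `path` if it exists;
    otherwise call `compute()`, save the result, and return it."""
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    result = compute()
    with open(path, 'wb') as f:
        pickle.dump(result, f)
    return result

# The first run computes and saves; re-running the cell
# (or re-opening the notebook) just loads from disk
squares = cached('squares.pkl', lambda: [n * n for n in range(10)])
```

Placing an expensive computation behind such a helper means re-running all cells after re-opening the notebook only repeats the cheap disk read, not the computation itself.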