Storing tensorflow models in memory


Problem description

The program I'm writing involves switching between models at run time.

I am currently using Saver to save/load models from disk, as described here: https://www.tensorflow.org/api_docs/python/state_ops/saving_and_restoring_variables#Saver.
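
For reference, the disk-based workflow looks roughly like this (a minimal TF1-style sketch; the variable and the /tmp/model.ckpt path are placeholders, not from my actual code):

import tensorflow as tf

# Placeholder variable standing in for the real model parameters.
v = tf.Variable(tf.zeros([10]), name="v")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")     # write a checkpoint to disk
    saver.restore(sess, "/tmp/model.ckpt")  # load it back from disk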

The models are fairly small and fit comfortably in memory, so I was wondering if anyone knows of a way to store and restore these models in memory instead of saving them to disk.

I tried to modify the TensorFlow source to save the model to memory, but gen_io_ops appears to be generated at compile time. Another possible approach is to use memory-mapped files. Does anyone know of an easier way?

Recommended answer

I would just have two different sessions, each with its own computation graph. Alternatively, you could duplicate the computation graph (two copies of the variables, operations, etc.) in the same session. Then you would call sess.run(comp1 if useCompOne else comp2), however you'd like to set it up.
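
A rough sketch of the two-sessions idea (the build_model helper and its contents are hypothetical stand-ins for the real models, and useCompOne is renamed use_model_one here):

import tensorflow as tf

def build_model(scale):
    # Toy graph standing in for a real model: y = x * scale
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    y = tf.multiply(x, scale, name="y")
    return x, y

graph1, graph2 = tf.Graph(), tf.Graph()
with graph1.as_default():
    x1, y1 = build_model(2.0)
with graph2.as_default():
    x2, y2 = build_model(3.0)

sess1 = tf.Session(graph=graph1)  # each session keeps its own graph and variables in memory
sess2 = tf.Session(graph=graph2)

use_model_one = True
sess, x, y = (sess1, x1, y1) if use_model_one else (sess2, x2, y2)
print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # switch models at run time without touching disk

Since both graphs stay resident in their sessions, nothing needs to be written to or restored from disk when switching.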

