Eager load the entire model to estimate memory consumption of Tensorflow Serving


Question

Tensorflow Serving lazily initializes nodes in the model DAG as predictions get executed. This makes it hard to estimate the memory (RAM) required to hold the entire model. Is there a standard way to force Tensorflow Serving to fully initialize/load the model into memory?

Answer

You can use model warmup to force all the components to be loaded into memory. [1]

[1] https://www.tensorflow.org/tfx/serving/saved_model_warmup
