Estimate required resources to serve Keras model


Problem description

I have a Keras model (.hdf5) that I would like to deploy in the cloud for prediction. I now wish to estimate how much resources I need for this (CPU, GPU, RAM, ...).

Does anyone have a suggestion for functions / rules of thumb that could help with this? I was unable to find anything useful. Thanks in advance!

Answer

I think the most realistic estimate comes from running the model and seeing how many resources it actually takes. top or htop will show you the CPU and RAM load, but GPU memory is a bit more complicated, since TensorFlow (the most popular backend for Keras) reserves all available GPU memory by default, for performance reasons.
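If you want to capture peak RAM usage programmatically instead of eyeballing top, the standard-library resource module (Unix only) reports the process's peak resident set size. A minimal sketch; the list-building workload here is a stand-in for your actual model.predict call:

```python
import resource


def peak_rss_mib() -> float:
    """Peak resident set size of this process, in MiB.

    On Linux, ru_maxrss is reported in KiB (on macOS it is in bytes,
    so divide by 1024**2 there instead).
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


# Stand-in workload; replace with e.g. model.predict(batch)
data = [float(i) for i in range(1_000_000)]

print(f"Peak RSS: {peak_rss_mib():.1f} MiB")
```

Run the workload you expect in production (same batch size, same input shape) and record the peak; that is the RAM figure to provision for.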

You have to tell TensorFlow not to take all the available memory, but to allocate it on demand. Here is how to do this in Keras:

import tensorflow as tf
import keras.backend as K
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.2  # Initially allocate only 20% of memory
config.gpu_options.allow_growth = True  # dynamically grow the memory used on the GPU
config.log_device_placement = True  # to log device placement (on which device the operation ran)
                                    # (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
K.set_session(sess)  # set this TensorFlow session as the default session for Keras
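Note that the snippet above targets the TensorFlow 1.x API (tf.ConfigProto, tf.Session). If you are on TensorFlow 2.x, where Keras ships as tf.keras, on-demand allocation is configured per device instead; a sketch, assuming at least one visible GPU:

```python
import tensorflow as tf

# Grow GPU memory on demand instead of reserving it all up front.
# Must run before any GPU op executes, or it raises RuntimeError.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```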

https://github.com/keras-team/keras/issues/4161#issuecomment-366031228

Then run watch nvidia-smi and see how much GPU memory is actually taken.
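Before running anything, you can also get a rough lower bound from the parameter count: each float32 weight takes 4 bytes, so the weights alone need about params × 4 bytes (Keras's model.count_params() gives the count). This ignores activations, buffers, and framework overhead, so treat it strictly as a floor. A hypothetical sketch:

```python
def weight_memory_mib(param_count: int, bytes_per_param: int = 4) -> float:
    """Lower-bound memory for the model weights alone, in MiB.

    float32 weights take 4 bytes each; use 2 for float16.
    Ignores activations, buffers, and framework overhead.
    """
    return param_count * bytes_per_param / (1024 ** 2)


# e.g. a model with 25 million float32 parameters:
print(f"{weight_memory_mib(25_000_000):.1f} MiB")  # ~95.4 MiB
```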

