How to release the occupied GPU memory when calling a Keras model via Apache mod_wsgi and Django?


Problem description

My server configuration is as follows:

  1. Apache 2.4.23
  2. mod_wsgi 4.5.9

Using the Django framework with the Apache server, we call a Keras deep learning model. After the model has been called successfully, it keeps running in GPU memory, so the GPU memory cannot be released except by shutting down the Apache server.

So, is there any way to control the release of GPU memory when calling a Keras model under Apache + mod_wsgi + Django?

Thanks!

[Screenshot: GPU memory usage at runtime]

Recommended answer

from keras import backend as K
K.clear_session()

This will clear the current session (graph), so the stale model should be removed from the GPU. If that does not work, you may also need to 'del model' and reload it afterwards.
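As a minimal sketch of the pattern above (the view name and model are hypothetical, not from the original post): run the prediction, drop the Python reference to the model, then call K.clear_session() so the backing graph is released; the model is then reloaded on the next request.

```python
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense


def predict_and_release(x):
    """Predict with a freshly built model, then free the session.

    In the real application this would live inside a Django view and the
    model would come from keras.models.load_model('model.h5') instead of
    being built inline here.
    """
    # Build (or load) the model on each call.
    model = Sequential([Dense(1, input_shape=(4,))])
    result = model.predict(x)
    # Drop the Python reference, then clear the Keras session so the
    # graph (and its GPU allocations) can be reclaimed.
    del model
    K.clear_session()
    return result


out = predict_and_release(np.zeros((2, 4)))
print(out.shape)  # (2, 1)
```

The trade-off is that reloading the model on every request adds latency; whether that is acceptable depends on how often the endpoint is hit.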

