Mask RCNN Resource exhausted (OOM) on my own dataset

Problem Description

Help needed for Mask RCNN Resource Exhausted -

H/W - i7-8700, 32 GB RAM, single ASUS ROG STRIX 1080 Ti (11 GB)

Virtual env setup - tensorflow-gpu==1.5.0, python==3.6.6, Cuda==9.0.176, cudnn==7.2.1
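
A quick way to confirm the environment that is actually loaded at runtime (a minimal sketch; the expected values are simply the ones listed above):

    import sys
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(sys.version)       # expected: 3.6.6
    print(tf.__version__)    # expected: 1.5.0
    # the 1080 Ti should show up as /device:GPU:0 with roughly 11 GB listed
    print(device_lib.list_local_devices())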

image resolution - maximum width = 900 pixels, maximum height = 675 pixels, minimum width = 194 pixels, minimum height = 150 pixels; 11 images for training

S/W - IMAGES_PER_GPU = 1 (in class xxConfig(Config), xxx.py), BACKBONE = "resnet50", POST_NMS_ROIS_TRAINING = 1000, POST_NMS_ROIS_INFERENCE = 500, IMAGE_RESIZE_MODE = "square", IMAGE_MIN_DIM = 400, IMAGE_MAX_DIM = 512, TRAIN_ROIS_PER_IMAGE = 100
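
Those settings correspond to a config subclass roughly like the following (a minimal sketch, assuming the Matterport Mask_RCNN package layout; the class name and NAME value are placeholders taken from the question):

    from mrcnn.config import Config

    class XXConfig(Config):
        NAME = "xx"                      # placeholder dataset name
        IMAGES_PER_GPU = 1               # one image per GPU to keep memory use down
        BACKBONE = "resnet50"            # lighter than the default resnet101
        POST_NMS_ROIS_TRAINING = 1000
        POST_NMS_ROIS_INFERENCE = 500
        IMAGE_RESIZE_MODE = "square"
        IMAGE_MIN_DIM = 400
        IMAGE_MAX_DIM = 512
        TRAIN_ROIS_PER_IMAGE = 100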

What was strange to me was that nvidia-smi showed < 300 MB used for python, yet the terminal showed the following:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3,3,256,256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: fpn_p5/random_uniform/RandomUniform = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=5038409, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]]

(screenshot: nvidia-smi output)

(screenshot: error log when running the code)

Solution

After replacing cuDNN 7.2.1 with 7.0.5, I am now able to train Mask R-CNN on the 1080 Ti GPU without the resource exhausted (OOM) issue.
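
As a quick sanity check after swapping cuDNN versions, running a small convolution forces TensorFlow to initialize cuDNN on the GPU and fails fast if the install is broken (a minimal sketch, assuming the TF 1.x session API):

    import numpy as np
    import tensorflow as tf

    # a small convolution is enough to force cuDNN initialization on the GPU
    x = tf.constant(np.random.rand(1, 64, 64, 3), dtype=tf.float32)
    k = tf.constant(np.random.rand(3, 3, 3, 8), dtype=tf.float32)
    y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding="SAME")

    with tf.Session() as sess:
        print(sess.run(y).shape)   # expected: (1, 64, 64, 8)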
