Tensorflow Object Detection Killed before starting

Problem description

I am running the docker image tensorflow:1.1.0. I added the TensorFlow Object Detection API by cloning its GitHub repository locally and mounting the folder into my docker container. I am trying to recreate their pet example.

I believe I have all the code and data in the right places. However, when I try to retrain, the process is killed before training starts, and it does not report any issues or errors.

INFO:tensorflow:Starting Session.
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
Killed
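
For reference, I launch training the way the running-locally guide describes (a sketch: the script name and flags are what the guide uses for this version, and the paths here are placeholders rather than my literal ones):

# run from the cloned models repository; paths below are placeholders
python object_detection/train.py \
    --logtostderr \
    --pipeline_config_path=/path/to/pipeline.config \
    --train_dir=/path/to/train_dir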

I imagine I have something out of order, but without any errors or output I don't know where to look!

I am following the guide here to run things locally: link. The pet data was obtained from the same GitHub: link. I got my model configuration from the same GitHub as well: link.

I chose inception_v2.
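
For reference, the sample configs shipped with the Object Detection API contain PATH_TO_BE_CONFIGURED placeholders that must be filled in before training. Roughly (an illustrative snippet; the file names inside the quotes are examples for my setup, not exact values):

train_config: {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  # other training settings omitted
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
}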

Answer

I now realize that I was simply running out of memory as soon as I started. When I switched to ssd_mobilenet_v1_coco_2017_11_17 and set batch_size = 1, it all worked.
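
In the pipeline config that is the train_config block, roughly (illustrative; other fields left as they were):

train_config: {
  batch_size: 1
  # optimizer, fine_tune_checkpoint, data augmentation, etc. unchanged
}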

This still doesn't entirely fix my problem though. I'll have to figure out how to provide more memory for the call.
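
Since everything runs inside Docker, one thing I still need to check (my assumption, not something I have verified for this setup) is how much memory the container itself is allowed to use. A sketch with a hypothetical 8 GB limit and a placeholder mount path:

docker run -it --memory=8g \
    -v /path/to/models:/models \
    tensorflow/tensorflow:1.1.0 bash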
