How to fix "error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto" when running a neural network in OpenCV

Question

I am currently trying to use Nvidia DIGITS to train a CNN on a custom dataset for object detection, and eventually I want to run that network on an Nvidia Jetson TX2. I followed the recommended instructions to download the DIGITS image from Docker, and I am able to successfully train a network with reasonable accuracy. But when I try to run my network in Python using OpenCV, I get this error:

"error: (-215) pbBlob.raw_data_type() == caffe::FLOAT16 in function blobFromProto"

I have read in a few other threads that this is due to the fact that DIGITS stores its networks in a form that is incompatible with OpenCV's DNN functionality.
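Concretely, NVIDIA's Caffe fork can serialize weight blobs as raw half-precision (FP16) bytes, which stock OpenCV's Caffe parser did not understand. Conceptually, fixing this just means reinterpreting the raw bytes as IEEE half-precision values and upcasting them to 32-bit floats. A minimal numpy sketch of that idea (the function name here is illustrative, not an actual caffe.proto accessor):

```python
import numpy as np

def fp16_blob_to_fp32(raw_bytes):
    """Reinterpret a blob's raw byte payload as IEEE half-precision
    values and upcast them to 32-bit floats."""
    half = np.frombuffer(raw_bytes, dtype=np.float16)
    return half.astype(np.float32)

# Round-trip demo: encode some weights as fp16 bytes, then recover them.
weights = np.array([0.5, -1.25, 2.0], dtype=np.float32)
raw = weights.astype(np.float16).tobytes()
recovered = fp16_blob_to_fp32(raw)
# These values are exactly representable in fp16, so they survive intact.
```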

Before training my network, I tried selecting the option in DIGITS that is supposed to make the network compatible with other software; however, that doesn't seem to change the network at all, and I get the same error when running my Python script. This is the script that produces the error (it comes from this tutorial: https://www.pyimagesearch.com/2017/09/11/object-detection-with-deep-learning-and-opencv/)

# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["dontcare", "HatchPanel"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
    (300, 300), 127.5)
# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()

# loop over the detections
for i in np.arange(0, detections.shape[2]):
    # extract the confidence (i.e., probability) associated with the  
    # prediction
    confidence = detections[0, 0, i, 2]

    # filter out weak detections by ensuring the `confidence` is
    # greater than the minimum confidence
    if confidence > args["confidence"]:
        # extract the index of the class label from the `detections`,
        # then compute the (x, y)-coordinates of the bounding box for
        # the object
        idx = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")

        # display the prediction
        label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
        print("[INFO] {}".format(label))
        cv2.rectangle(image, (startX, startY), (endX, endY),
            COLORS[idx], 2)
        y = startY - 15 if startY - 15 > 15 else startY + 15
        cv2.putText(image, label, (startX, y),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
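For reference, the preprocessing that `cv2.dnn.blobFromImage` performs in the script above (subtract the mean, multiply by the scale factor, reorder HWC to NCHW with a batch axis) can be reproduced in plain numpy. A sketch, assuming a BGR uint8 input already resized to 300x300:

```python
import numpy as np

def blob_from_image(image, scale=0.007843, mean=127.5):
    """Numpy equivalent of cv2.dnn.blobFromImage for a pre-resized
    image: scale * (pixel - mean), then HWC -> NCHW with a batch axis."""
    blob = (image.astype(np.float32) - mean) * scale
    return blob.transpose(2, 0, 1)[np.newaxis, ...]

image = np.full((300, 300, 3), 127, dtype=np.uint8)  # dummy gray frame
blob = blob_from_image(image)
print(blob.shape)  # (1, 3, 300, 300)
```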

This should output the image specified in the call to the script, with the output of the neural network drawn on top of the image. Instead, the script crashes with the aforementioned error. I have seen other threads from people who hit this same error, but as of yet, none of them have arrived at a solution that works with the current version of DIGITS.

My full setup is as follows:

OS: Ubuntu 16.04

Nvidia DIGITS Docker Image Version: 19.01-caffe

DIGITS Version: 6.1.1

Caffe Version: 0.17.2

Caffe Flavor: Nvidia

OpenCV Version: 4.0.0

Python Version: 3.5

Any help is greatly appreciated.

Answer

Harrison McIntyre, thank you! This PR fixes it: https://github.com/opencv/opencv/pull/13800. Please note that the network contains a layer of type "ClusterDetections". It is not supported by OpenCV, but you can implement it yourself using the custom layers mechanism (see the tutorial).
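OpenCV's Python bindings let you register such a layer as a class exposing `getMemoryShapes` and `forward`, following OpenCV's documented custom-layer interface. A skeleton of that shape (the forward pass here is a pass-through placeholder; the real clustering logic for "ClusterDetections" is up to you):

```python
import numpy as np

class ClusterDetectionsLayer:
    """Skeleton for an OpenCV DNN custom layer. OpenCV instantiates it
    with the layer's prototxt parameters and its weight blobs."""
    def __init__(self, params, blobs):
        self.params = params

    def getMemoryShapes(self, inputs):
        # Declare the output shape; here we simply mirror the first input.
        return [inputs[0]]

    def forward(self, inputs):
        # Placeholder: pass the first input through unchanged.
        return [inputs[0]]

# Stand-alone check of the interface with a dummy blob:
layer = ClusterDetectionsLayer(params=None, blobs=[])
dummy = np.zeros((1, 3, 10, 10), dtype=np.float32)
out = layer.forward([dummy])
print(out[0].shape)  # (1, 3, 10, 10)
```

With OpenCV available, you would register the class before loading the model with `cv2.dnn_registerLayer("ClusterDetections", ClusterDetectionsLayer)`.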
