OpenCV using Python and Flask: HIGHGUI ERROR: libv4l unable to ioctl S_FMT


Problem description


I am trying to overlay an image on the webcam feed in camera.py and send it to main.py, with the output displayed on a local server generated by Flask. But I encountered the following error:

libv4l2: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl S_FMT
libv4l2: error setting pixformat: Device or resource busy
libv4l1: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT

I used the following code:

main.py

from flask import Flask, render_template, Response
from camera import VideoCamera


app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')


@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)

camera.py

import cv2, time
import numpy as np

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the folder
        # as the main.py.
        # self.video = cv2.VideoCapture('video.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, frame = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capture raw images,
        # so we must encode it into JPEG in order to correctly display the
        # video stream.
        #time.sleep(.1)
        face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
        eye_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_mcs_eyepair_small.xml')

        # Load the overlay image: glasses.png
        imgGlasses = cv2.imread('4.png', -1)

        print imgGlasses is None

        # Create the mask for the glasses
        imgGlassesGray = cv2.cvtColor(imgGlasses, cv2.COLOR_BGR2GRAY)
        #cv2.imwrite("imgGlassesGray.png", imgGlassesGray)

        ret, orig_mask = cv2.threshold(imgGlassesGray, 0, 255, cv2.THRESH_BINARY)
        #cv2.imwrite("orig_mask.png", orig_mask)

        # Create the inverted mask for the glasses
        orig_mask_inv = cv2.bitwise_not(orig_mask)
        #cv2.imwrite("orig_mask_inv.png", orig_mask_inv)

        # Convert glasses image to BGR
        # and save the original image size (used later when re-sizing the image)
        imgGlasses = imgGlasses[:,:,0:3]
        origGlassesHeight, origGlassesWidth = imgGlasses.shape[:2]

        video_capture = cv2.VideoCapture(0)

        #while True:

        #ret, frame = video_capture.read()

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        faces = face_cascade.detectMultiScale(gray, 1.3, 5, flags=cv2.cv.CV_HAAR_SCALE_IMAGE)

        for (x,y,w,h) in faces:
            cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = frame[y:y+h, x:x+w]

            eyes = eye_cascade.detectMultiScale(roi_gray)

            for (ex,ey,ew,eh) in eyes:
                cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),1)

            for (ex, ey, ew, eh) in eyes:
                glassesWidth = 3*ew
                glassesHeight = glassesWidth * origGlassesHeight / origGlassesWidth

                # Center the glasses
                x1 = ex - 15
                x2 = ex + ew + 15
                y1 = ey - 5
                y2 = ey + eh + 15

                # Check for clipping
                if x1 < 0:
                    x1 = 0
                if y1 < 0:
                    y1 = 0
                if x2 > w:
                    x2 = w
                if y2 > h:
                    y2 = h

                # Re-calculate the width and height of the glasses image
                glassesWidth = x2 - x1
                glassesHeight = y2 - y1

                # Re-size the original image and the masks to the glasses sizes
                # calculated above
                glasses = cv2.resize(imgGlasses, (glassesWidth,glassesHeight), interpolation = cv2.INTER_AREA)
                mask = cv2.resize(orig_mask, (glassesWidth,glassesHeight), interpolation = cv2.INTER_AREA)
                mask_inv = cv2.resize(orig_mask_inv, (glassesWidth,glassesHeight), interpolation = cv2.INTER_AREA)

                # take ROI for glasses from background equal to size of glasses image
                roi = roi_color[y1:y2, x1:x2]

                # roi_bg contains the original image only where the glasses is not
                # in the region that is the size of the glasses.
                roi_bg = cv2.bitwise_and(roi,roi,mask = mask)

                # roi_fg contains the image of the glasses only where the glasses is
                roi_fg = cv2.bitwise_and(glasses,glasses,mask = mask_inv)

                # join the roi_bg and roi_fg
                dst = cv2.add(roi_bg,roi_fg)

                # place the joined image, saved to dst back over the original image
                roi_color[y1:y2, x1:x2] = dst

                break

        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()
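
A side note on camera.py (not part of the accepted answer below): as written, get_frame() opens a second cv2.VideoCapture(0) on the same device that __init__ already holds, and it reloads the Haar cascades and the overlay image on every frame. A second open of /dev/video0 is one plausible contributor to "Device or resource busy" messages. A minimal sketch of moving the one-time setup into __init__ and reusing self.video, keeping the file names from the question (which are assumptions about your folder layout), could look like this:

import cv2

class VideoCamera(object):
    def __init__(self):
        # Open the device exactly once and keep reusing it.
        self.video = cv2.VideoCapture(0)
        # Load the detectors and the overlay once, not on every frame.
        self.face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
        self.eye_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_mcs_eyepair_small.xml')
        self.imgGlasses = cv2.imread('4.png', -1)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        # Reuse the capture opened in __init__; do not call cv2.VideoCapture(0) again here.
        success, frame = self.video.read()
        # ... face/eye detection and glasses overlay exactly as in the question ...
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()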

index.html

<html>
  <head>
    <title>Video Streaming Demonstration</title>
    <link type="text/css" rel="stylesheet"
            href="{{ url_for('static',
                  filename='styles.css')}}" />
<style>
body {
    background-image: url(http://cdn.wall88.com/51b487f75df1050061.jpg);
    background-repeat: no-repeat;
}
</style>

  </head>
  <body>
    <h1>Video Streaming Demonstration</h1>
    <img id="bg" align="middle" src="{{ url_for('video_feed') }}">
  </body>
</html>

Solution

Try removing debug=True in main.py. That was causing the problem for me.
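
For context on why that helps: debug=True switches on Werkzeug's auto-reloader, which runs the application in an extra process and restarts it whenever a source file changes, so the webcam can end up being opened more than once and the later open fails with "Device or resource busy". If you still want the interactive debugger, a possible compromise (a sketch using use_reloader, a standard Flask/Werkzeug option, not something from the original answer) is to keep debug mode but disable the reloader:

if __name__ == '__main__':
    # Keep the debugger, but skip the reloader's second process so the
    # camera device is only opened by a single process.
    app.run(host='0.0.0.0', debug=True, use_reloader=False)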
