Change frame rate in OpenCV 3.4.2

Problem description

I want to reduce the number of frames acquired per second from a webcam. This is the code I'm using:

#!/usr/bin/env python

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 10)   # ask the camera for 10 fps
fps = int(cap.get(5))           # property 5 is cv2.CAP_PROP_FPS
print("fps:", fps)

while(cap.isOpened()):

    ret,frame = cap.read()
    if not ret:
        break

    cv2.imshow('frame', frame)

    k = cv2.waitKey(1)
    if k == 27:
        break

But it doesn't take effect: I still get the default 30 fps instead of the 10 fps set by cap.set(cv2.CAP_PROP_FPS, 10). I want to reduce the frame rate because I have a hand detector that takes quite a lot of time to process each frame, and I can't store frames in a buffer because the detector would then find the hand at its previous positions. I could run the detector from a timer or something similar, but I thought changing the fps was the easier way; it didn't work, though, and I don't know why.

I'm using OpenCV 3.4.2 with Python 3.6.3 on Windows 8.1.

Recommended answer

Setting a frame rate doesn't always work the way you expect. It depends on two things:

  1. What your camera is capable of outputting.
  2. Whether the capture backend you are using supports changing the frame rate.

So, point (1). Your camera has a list of formats it is capable of delivering to a capture device (e.g. your computer). This might be 1920x1080 @ 30 fps or 1920x1080 @ 60 fps, and each entry also specifies a pixel format. The vast majority of consumer cameras do not let you change their frame rate with any more granularity than that, and most capture libraries will refuse to switch to a capture format that the camera isn't advertising.

Even machine vision cameras, which allow you much more control, typically only offer a selection of frame rates (e.g. 1, 2, 5, 10, 15, 25, 30). If you want an unsupported frame rate at the hardware level, usually the only way to get it is with hardware triggering.

And point (2). When you use cv2.VideoCapture you're really calling a platform-specific library such as DirectShow or V4L2; we call this a backend. You can specify exactly which backend is in use with something like:

cv2.VideoCapture(0 + cv2.CAP_DSHOW)

There are lots of CAP_X constants defined, but only some apply to your platform (e.g. CAP_V4L2 is Linux only). On Windows, forcing the system to use DirectShow is a pretty good bet. However, as above, if your camera only reports that it can output 30 fps and 60 fps, requesting 10 fps will be meaningless. Worse, a lot of settings simply report True in OpenCV when they're not actually implemented. Most of the time reading parameters back will give you sensible results, but if a parameter isn't implemented (exposure is a common one that isn't) you may get nonsense.
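
As a quick sanity check (a minimal sketch built only from the calls shown above, not part of the original answer), you can force the DirectShow backend and compare what the backend reports before and after the set call; keep in mind that the read-back only tells you what the backend claims, not what the camera actually delivers:

import cv2

# Force the DirectShow backend on Windows (same 0 + cv2.CAP_DSHOW trick as above).
cap = cv2.VideoCapture(0 + cv2.CAP_DSHOW)

print("reported fps before set:", cap.get(cv2.CAP_PROP_FPS))
ok = cap.set(cv2.CAP_PROP_FPS, 10)   # may return True even if nothing actually changed
print("set returned:", ok)
print("reported fps after set:", cap.get(cv2.CAP_PROP_FPS))

cap.release()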

You're better off waiting for a period of time and then reading the last image.

Be careful with this strategy. Don't do this:

while capturing:
    res, image = cap.read()
    time.sleep(1)   # don't do this: frames pile up in the driver's buffer while you sleep

You need to make sure you're continually purging the camera's frame buffer, otherwise you will start to see lag in your video. Something like the following should work:

import time

frame_rate = 10
prev = 0

while capturing:

    time_elapsed = time.time() - prev
    res, image = cap.read()          # read every iteration so the buffer stays drained

    if time_elapsed > 1./frame_rate:
        prev = time.time()

        # Do something with your image here.
        process_image()

For an application like a hand detector, what works well is to have one thread capturing images and the detector running in another thread (which also controls the GUI). The detector pulls the last image captured, runs, and displays the results (you may need to lock access to the image buffer while you're reading/writing it). That way your bottleneck is the detector, not the performance of the camera.
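
A minimal sketch of that two-thread layout (the VideoCapture/imshow calls are standard OpenCV API, but process_image and the thread structure are placeholders for the hand detector, not code from the original answer): a grabber thread keeps draining the camera and stores only the newest frame under a lock, while the main thread copies that frame, runs the detector and shows the result.

import threading
import cv2

def process_image(frame):
    # Placeholder for the hand detector; a blur stands in for the slow per-frame work.
    return cv2.GaussianBlur(frame, (21, 21), 0)

latest_frame = None
lock = threading.Lock()
running = True

def capture_loop(cap):
    # Continuously drain the camera so latest_frame is always the newest image.
    global latest_frame
    while running:
        ret, frame = cap.read()
        if not ret:
            break
        with lock:
            latest_frame = frame

cap = cv2.VideoCapture(0 + cv2.CAP_DSHOW)
grabber = threading.Thread(target=capture_loop, args=(cap,), daemon=True)
grabber.start()

while True:
    with lock:
        frame = None if latest_frame is None else latest_frame.copy()
    if frame is not None:
        result = process_image(frame)   # detector works on the newest frame only
        cv2.imshow('frame', result)
    if cv2.waitKey(1) == 27:            # Esc to quit, as in the question
        break

running = False
grabber.join()
cap.release()
cv2.destroyAllWindows()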
