How to save two camera's data but not influence their picture-acquire speed?


Problem Description


I am using a multispectral camera to collect data. One is near-infrared and the other is color. It is not two cameras; one camera obtains the two different kinds of images at the same time. There are some API functions I can use, like J_Image_OpenStream. Two parts of the core code are shown below. One opens the two streams (actually they come from one sample and I have to use them, though I am not entirely clear about their meaning), sets the two AVI files' saving paths, and starts the acquisition.

 // Open stream
 retval0 = J_Image_OpenStream(m_hCam[0], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc0), &m_hThread[0], (ViewSize0.cx*ViewSize0.cy*bpp0)/8);
if (retval0 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream0!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream0 succeeded\n");
retval1 = J_Image_OpenStream(m_hCam[1], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc1), &m_hThread[1], (ViewSize1.cx*ViewSize1.cy*bpp1)/8);
if (retval1 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream1!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream1 succeeded\n");

const char *filename0 = "C:\\Users\\shenyang\\Desktop\\test0.avi"; 
const char *filename1 = "C:\\Users\\shenyang\\Desktop\\test1.avi";
int fps = 10; //frame per second
int codec = -1;//choose the compression method

writer0 = cvCreateVideoWriter(filename0, codec, fps, CvSize(1296,966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, CvSize(1296,964), 1);

// Start Acquisition
retval0 = J_Camera_ExecuteCommand(m_hCam[0], NODE_NAME_ACQSTART);
retval1 = J_Camera_ExecuteCommand(m_hCam[1], NODE_NAME_ACQSTART);


// Create two OpenCV named Windows used for displaying "BGR" and "INFRARED" images
cvNamedWindow("BGR");
cvNamedWindow("INFRARED");

The other part is the two stream callback functions, which look very similar.

void COpenCVSample1Dlg::StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo)
{
if (m_pImg0 == NULL)
{
    // Create the Image:
    // We assume this is a 8-bit monochrome image in this sample
    m_pImg0 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
}

// Copy the data from the acquisition engine image buffer into the OpenCV image object
memcpy(m_pImg0->imageData, pAqImageInfo->pImageBuffer, m_pImg0->imageSize);

// Display in the "INFRARED" window
cvShowImage("INFRARED", m_pImg0);

frame0 = m_pImg0;
cvWriteFrame(writer0, frame0);

}

void COpenCVSample1Dlg::StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo)
{
if (m_pImg1 == NULL)
{
    // Create the Image:
    // We assume this is a 8-bit monochrome image in this sample
    m_pImg1 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
}

// Copy the data from the acquisition engine image buffer into the OpenCV image object
memcpy(m_pImg1->imageData, pAqImageInfo->pImageBuffer, m_pImg1->imageSize);

// Display in the "BGR" window
cvShowImage("BGR", m_pImg1);

frame1 = m_pImg1;
cvWriteFrame(writer1, frame1);
}

The problem is this: if I do not save the AVI files, i.e.

/*writer0 = cvCreateVideoWriter(filename0, codec, fps, CvSize(1296,966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, CvSize(1296,964), 1);*/
//cvWriteFrame(writer0, frame0);
//cvWriteFrame(writer1, frame1);

then in the two display windows the captured pictures look alike, which means the two streams are synchronized. But if I do write the data to the AVI files, then because the two kinds of pictures differ in size, and both are large, the writing slows down the two streams' acquisition and the captured pictures become unsynchronized. I cannot create a buffer huge enough to hold all the data in memory, and the I/O device is rather slow. What should I do? Thank you very much.

Some class variables are:

 public:
FACTORY_HANDLE  m_hFactory;             // Factory Handle
CAM_HANDLE      m_hCam[MAX_CAMERAS];    // Camera Handles
THRD_HANDLE     m_hThread[MAX_CAMERAS]; // Stream handles
char            m_sCameraId[MAX_CAMERAS][J_CAMERA_ID_SIZE]; // Camera IDs

IplImage        *m_pImg0 = NULL;        // OpenCV Images
IplImage        *m_pImg1 = NULL;        // OpenCV Images

CvVideoWriter* writer0;
IplImage *frame0;
CvVideoWriter* writer1;
IplImage *frame1;

BOOL OpenFactoryAndCamera();
void CloseFactoryAndCamera();
void StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo);
void StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo);
void InitializeControls();
void EnableControls(BOOL bIsCameraReady, BOOL bIsImageAcquiring);

Solution

The correct approach to recording the video without frame drops is to isolate the two tasks (frame acquisition and frame serialization) so that they don't influence each other, specifically so that fluctuations in serialization don't eat away the time needed to capture the frames, which has to happen without delay to prevent frame loss.

This can be achieved by delegating the serialization (encoding of the frames and writing them into a video file) to separate threads, and using some kind of synchronized queue to feed the data to the worker threads.

Following is a simple example showing how this could be done. Since I only have one camera and not the kind you have, I will simply use a webcam and duplicate the frames, but the general principle applies to your scenario as well.
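To connect this to the question: instead of calling cvWriteFrame inside the acquisition callback, the callback would only hand the frame off to a queue. Here is a hypothetical sketch of the asker's StreamCBFunc0, assuming the dialog class gained a member m_queue0 of the frame_queue type defined below (that member name is my invention for illustration, not part of the original sample):

void COpenCVSample1Dlg::StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo)
{
    // Wrap the acquisition buffer in a cv::Mat header (no pixel copy yet)
    cv::Mat frame(cv::Size(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY),
        CV_8UC1, pAqImageInfo->pImageBuffer);

    // Push a deep copy, since the driver will reuse pImageBuffer;
    // the storage worker thread does the slow encoding, not this callback
    m_queue0.push(frame.clone());
}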


Sample Code

In the beginning we have some includes:

#include <opencv2/opencv.hpp>

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
// ============================================================================
using std::chrono::high_resolution_clock;
using std::chrono::duration_cast;
using std::chrono::microseconds;
// ============================================================================


Synchronized Queue

The first step is to define our synchronized queue, which we will use to communicate with the worker threads that write the video.

The primary functionality we need is the ability to:

  • Push new images into the queue.
  • Pop images from the queue, waiting when it's empty.
  • Cancel all pending pops when we're finished.

We use std::queue to hold the cv::Mat instances, and std::mutex to provide synchronization. A std::condition_variable is used to notify the consumer when an image has been inserted into the queue (or the cancellation flag has been set), and a simple boolean flag is used to signal cancellation.

Finally, we use the empty struct cancelled as an exception thrown from pop(), so we can cleanly terminate the worker by cancelling the queue.

// ============================================================================
class frame_queue
{
public:
    struct cancelled {};

public:
    frame_queue();

    void push(cv::Mat const& image);
    cv::Mat pop();

    void cancel();

private:
    std::queue<cv::Mat> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_;
};
// ----------------------------------------------------------------------------
frame_queue::frame_queue()
    : cancelled_(false)
{
}
// ----------------------------------------------------------------------------
void frame_queue::cancel()
{
    std::unique_lock<std::mutex> mlock(mutex_);
    cancelled_ = true;
    cond_.notify_all();
}
// ----------------------------------------------------------------------------
void frame_queue::push(cv::Mat const& image)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(image);
    cond_.notify_one();
}
// ----------------------------------------------------------------------------
cv::Mat frame_queue::pop()
{
    std::unique_lock<std::mutex> mlock(mutex_);

    while (queue_.empty()) {
        if (cancelled_) {
            throw cancelled();
        }
        cond_.wait(mlock);
        if (cancelled_) {
            throw cancelled();
        }
    }

    cv::Mat image(queue_.front());
    queue_.pop();
    return image;
}
// ============================================================================


Storage Worker

The next step is to define a simple storage_worker, which will be responsible for taking frames from the synchronized queue and encoding them into a video file until the queue has been cancelled.

I've added simple timing, so we have some idea how much time is spent encoding the frames, as well as simple logging to the console, so we can see what is happening in the program.

// ============================================================================
class storage_worker
{
public:
    storage_worker(frame_queue& queue
        , int32_t id
        , std::string const& file_name
        , int32_t fourcc
        , double fps
        , cv::Size frame_size
        , bool is_color = true);

    void run();

    double total_time_ms() const { return total_time_ / 1000.0; }

private:
    frame_queue& queue_;

    int32_t id_;

    std::string file_name_;
    int32_t fourcc_;
    double fps_;
    cv::Size frame_size_;
    bool is_color_;

    double total_time_;
};
// ----------------------------------------------------------------------------
storage_worker::storage_worker(frame_queue& queue
    , int32_t id
    , std::string const& file_name
    , int32_t fourcc
    , double fps
    , cv::Size frame_size
    , bool is_color)
    : queue_(queue)
    , id_(id)
    , file_name_(file_name)
    , fourcc_(fourcc)
    , fps_(fps)
    , frame_size_(frame_size)
    , is_color_(is_color)
    , total_time_(0.0)
{
}
// ----------------------------------------------------------------------------
void storage_worker::run()
{
    cv::VideoWriter writer(file_name_, fourcc_, fps_, frame_size_, is_color_);

    try {
        int32_t frame_count(0);
        for (;;) {
            cv::Mat image(queue_.pop());
            if (!image.empty()) {
                high_resolution_clock::time_point t1(high_resolution_clock::now());

                ++frame_count;
                writer.write(image);

                high_resolution_clock::time_point t2(high_resolution_clock::now());
                double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
                total_time_ += dt_us;

                std::cout << "Worker " << id_ << " stored image #" << frame_count
                    << " in " << (dt_us / 1000.0) << " ms" << std::endl;
            }
        }
    } catch (frame_queue::cancelled& /*e*/) {
        // Nothing more to process, we're done
        std::cout << "Queue " << id_ << " cancelled, worker finished." << std::endl;
    }
}
// ============================================================================


Processing

Finally, we can put this all together.

We begin by initializing and configuring our video source. Then we create two frame_queue instances, one for each stream of images. We follow this by creating two instances of storage_worker, one for each queue. To keep things interesting, I've set a different codec for each.

The next step is to create and start the worker threads, which will execute the run() method of each storage_worker. With our consumers ready, we can start capturing frames from the camera and feed them to the frame_queue instances. As mentioned above, I have only a single source, so I insert copies of the same frame into both queues.

NB: I need to use the clone() method of cv::Mat to do a deep copy; otherwise I would be inserting references to the single buffer that OpenCV's VideoCapture uses for performance reasons. That would mean the worker threads would all hold references to this single image, with no synchronization for access to that shared image buffer. You need to make sure this does not happen in your scenario as well.
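As a minimal standalone illustration of that shallow-copy behaviour (my sketch, not part of the answer's program):

cv::Mat a(2, 2, CV_8UC1, cv::Scalar(0));
cv::Mat b = a;          // shallow copy: b shares a's pixel buffer
cv::Mat c = a.clone();  // deep copy: c owns its own buffer

a.at<unsigned char>(0, 0) = 255;
// b.at<unsigned char>(0, 0) is now 255 (b sees the change),
// while c.at<unsigned char>(0, 0) is still 0 (c is unaffected)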

Once we have read the appropriate number of frames (you can implement any other kind of stop-condition you desire), we cancel the work queues, and wait for the worker threads to complete.

Finally we write some statistics about the time required for the different tasks.

// ============================================================================
int main()
{
    // The video source -- for me this is a webcam, you use your specific camera API instead
    // I only have one camera, so I will just duplicate the frames to simulate your scenario
    cv::VideoCapture capture(0);

    // Let's make it decent sized, since my camera defaults to 640x480
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 1920);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 1080);
    capture.set(CV_CAP_PROP_FPS, 20.0);

    // And fetch the actual values, so we can create our video correctly
    int32_t frame_width(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_WIDTH)));
    int32_t frame_height(static_cast<int32_t>(capture.get(CV_CAP_PROP_FRAME_HEIGHT)));
    double video_fps(std::max(10.0, capture.get(CV_CAP_PROP_FPS))); // Some default in case it's 0

    std::cout << "Capturing images (" << frame_width << "x" << frame_height
        << ") at " << video_fps << " FPS." << std::endl;

    // The synchronized queues, one per video source/storage worker pair
    std::vector<frame_queue> queue(2);

    // Let's create our storage workers -- let's have two, to simulate your scenario
    // and to keep it interesting, have each one write a different format
    std::vector<storage_worker> storage;
    storage.emplace_back(std::ref(queue[0]), 0
        , std::string("foo_0.avi")
        , CV_FOURCC('I', 'Y', 'U', 'V')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    storage.emplace_back(std::ref(queue[1]), 1
        , std::string("foo_1.avi")
        , CV_FOURCC('D', 'I', 'V', 'X')
        , video_fps
        , cv::Size(frame_width, frame_height)
        , true);

    // And start the worker threads for each storage worker
    std::vector<std::thread> storage_thread;
    for (auto& s : storage) {
        storage_thread.emplace_back(&storage_worker::run, &s);
    }

    // Now the main capture loop
    int32_t const MAX_FRAME_COUNT(10);
    double total_read_time(0.0);
    int32_t frame_count(0);
    for (; frame_count < MAX_FRAME_COUNT; ++frame_count) {
        high_resolution_clock::time_point t1(high_resolution_clock::now());

        // Try to read a frame
        cv::Mat image;
        if (!capture.read(image)) {
            std::cerr << "Failed to capture image.\n";
            break;
        }

        // Insert a copy into all queues
        for (auto& q : queue) {
            q.push(image.clone());
        }        

        high_resolution_clock::time_point t2(high_resolution_clock::now());
        double dt_us(static_cast<double>(duration_cast<microseconds>(t2 - t1).count()));
        total_read_time += dt_us;

        std::cout << "Captured image #" << frame_count << " in "
            << (dt_us / 1000.0) << " ms" << std::endl;
    }

    // We're done reading, cancel all the queues
    for (auto& q : queue) {
        q.cancel();
    }

    // And join all the worker threads, waiting for them to finish
    for (auto& st : storage_thread) {
        st.join();
    }

    if (frame_count == 0) {
        std::cerr << "No frames captured.\n";
        return -1;
    }

    // Report the timings
    total_read_time /= 1000.0;
    double total_write_time_a(storage[0].total_time_ms());
    double total_write_time_b(storage[1].total_time_ms());

    std::cout << "Completed processing " << frame_count << " images:\n"
        << "  average capture time = " << (total_read_time / frame_count) << " ms\n"
        << "  average write time A = " << (total_write_time_a / frame_count) << " ms\n"
        << "  average write time B = " << (total_write_time_b / frame_count) << " ms\n";

    return 0;
}
// ============================================================================


Console Output

Running this little sample, we get the following log output in the console, as well as the two video files on the disk.

NB: Since this was actually encoding a lot faster than capturing, I've added some wait into the storage_worker to show the separation better.
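That wait is not shown in the listing above; presumably it was a short sleep after the write call in storage_worker::run(), along these lines (a reconstruction, where the placement and duration are my assumptions):

// Inside storage_worker::run(), after writer.write(image):
std::this_thread::sleep_for(std::chrono::milliseconds(100)); // simulate a slower encoder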

Capturing images (1920x1080) at 20 FPS.
Captured image #0 in 111.009 ms
Captured image #1 in 67.066 ms
Worker 0 stored image #1 in 94.087 ms
Captured image #2 in 62.059 ms
Worker 1 stored image #1 in 193.186 ms
Captured image #3 in 60.059 ms
Worker 0 stored image #2 in 100.097 ms
Captured image #4 in 78.075 ms
Worker 0 stored image #3 in 87.085 ms
Captured image #5 in 62.061 ms
Worker 0 stored image #4 in 95.092 ms
Worker 1 stored image #2 in 193.187 ms
Captured image #6 in 75.074 ms
Worker 0 stored image #5 in 95.093 ms
Captured image #7 in 63.061 ms
Captured image #8 in 64.061 ms
Worker 0 stored image #6 in 102.098 ms
Worker 1 stored image #3 in 201.195 ms
Captured image #9 in 76.074 ms
Worker 0 stored image #7 in 90.089 ms
Worker 0 stored image #8 in 91.087 ms
Worker 1 stored image #4 in 185.18 ms
Worker 0 stored image #9 in 82.08 ms
Worker 0 stored image #10 in 94.092 ms
Queue 0 cancelled, worker finished.
Worker 1 stored image #5 in 179.174 ms
Worker 1 stored image #6 in 106.102 ms
Worker 1 stored image #7 in 105.104 ms
Worker 1 stored image #8 in 103.101 ms
Worker 1 stored image #9 in 104.102 ms
Worker 1 stored image #10 in 104.1 ms
Queue 1 cancelled, worker finished.
Completed processing 10 images:
  average capture time = 71.8599 ms
  average write time A = 93.09 ms
  average write time B = 147.443 ms


Possible Improvements

Currently there is no protection against the queue growing too large when the serialization simply can't keep up with the rate at which the camera generates new images. Set an upper limit on the queue size, and check it in the producer before pushing a frame; a sketch follows below. You will need to decide how exactly you want to handle the queue-full situation.
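For example, a size-limited push could look like the following. The try_push member, the limit value, and the drop-newest policy are all illustrative assumptions of mine, not part of the answer above (the method would also need to be declared in the frame_queue class):

// Sketch: a bounded push that refuses the frame when the queue is full,
// so memory usage stays capped and the caller can count dropped frames.
bool frame_queue::try_push(cv::Mat const& image, size_t max_size)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    if (queue_.size() >= max_size) {
        return false; // queue full -- drop the frame (one possible policy)
    }
    queue_.push(image);
    cond_.notify_one();
    return true;
}

The capture loop would then call something like q.try_push(image.clone(), 25) and, on failure, log or count the dropped frame instead of blocking the acquisition.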
