Background subtraction and Optical flow for tracking object in OpenCV C++


Question


I am working on a project to detect objects of interest using background subtraction and track them using optical flow in OpenCV C++. I am able to detect the object of interest using background subtraction, and I have implemented OpenCV Lucas-Kanade optical flow in a separate program. However, I am stuck on how to merge these two programs into one. frame1 holds the actual frame from the video; contours2 are the contours selected from the foreground object.

To summarize, how do I feed the foreground object obtained from the background subtraction method to calcOpticalFlowPyrLK? Or, if my approach is wrong, please point me in the right direction. Thank you in advance.

Mat mask = Mat::zeros(fore.rows, fore.cols, CV_8UC1);
    drawContours(mask, contours2, -1, Scalar(255), CV_FILLED); // CV_FILLED is the thickness, not the lineType

    if (first_frame)
    {
        goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
        fm0 = mask.clone();
        features_prev = features_next;
        first_frame = false;
    }
    else
    {           
        features_next.clear();
        if (!features_prev.empty())
        {
            calcOpticalFlowPyrLK(fm0, mask, features_prev, features_next, featuresFound, err, winSize, 3, termcrit, 0, 0.001);
            for (int i = 0; i < features_prev.size(); i++)
                line(frame1, features_prev[i], features_next[i], CV_RGB(0, 0, 255), 1, 8);
            imshow("final optical", frame1);
            waitKey(1);
        }
        goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
        features_prev = features_next;
        fm0 = mask.clone();         
    }

Solution

Your approach of using optical flow this way for tracking is wrong. The idea behind optical flow methods is that a moving point has the same pixel intensity at its start and end positions in two consecutive images. That means the motion of a feature is estimated by observing its appearance in the start image and searching for the same structure in the end image (very simplified).

calcOpticalFlowPyrLK is a point tracker: points in the previous image are tracked into the current one. The method therefore needs the original grayscale images of your system, because it can only estimate motion in structured / textured regions (it needs x and y gradients in the image). A binary foreground mask is mostly uniform, so it gives the tracker almost nothing to lock onto.

I think your code should do something like this:

  1. Extract objects by background subtraction (via contours); in the literature such a connected foreground region is called a blob.
  2. Extract the objects in the next image and apply blob association (which contour belongs to which); this is also called blob tracking. It is possible to do blob tracking with calcOpticalFlowPyrLK, e.g. in a very simple way:
  3. Track points from the contour, or points inside the blob.
  4. Association: a previous contour corresponds to one of the current contours if the tracked points that belonged to the previous contour are located inside that current contour.
